As we reported earlier this year, mission-critical skills gaps within the federal workforce pose a high risk to the nation. Regardless of whether the shortfalls are in government-wide occupations such as cybersecurity and acquisitions, or in agency-specific occupations such as nurses at the Veterans Health Administration, skills gaps impede the federal government from cost-effectively serving the public and achieving results. Agencies can have skills gaps for different reasons: they may have an insufficient number of people, or their people may not have the appropriate skills or abilities to accomplish mission-critical work. Moreover, current budget and long-term fiscal pressures, the changing nature of federal work, and a potential wave of employee retirements that could produce gaps in leadership and institutional knowledge threaten to aggravate the problems created by existing skills gaps. According to our analysis of OPM data, more than 34 percent of federal employees government-wide who were on board at the end of fiscal year 2015 will be eligible to retire by 2020 (see figure 1). Some agencies, such as the Department of Housing and Urban Development, will have particularly high eligibility levels by 2020. Various factors can affect when individuals actually retire, and some amount of retirement and other attrition can be beneficial: it creates opportunities to bring fresh skills on board and allows organizations to restructure themselves to better meet program goals and fiscal realities. But if turnover is not strategically monitored and managed, gaps can develop in an organization’s institutional knowledge and leadership. While numerous tools are available to help agencies address their talent needs, our past work has identified problems across a range of personnel systems and functions. For example:

Classification system: The GS system has not kept pace with the government’s evolving requirements.
Recruiting and hiring: Federal agencies need a hiring process that is applicant friendly, flexible, and meets policy requirements.

Pay system: Employees are compensated through an outmoded system that (1) rewards length of service rather than individual performance and contributions, and (2) automatically provides across-the-board annual pay increases, even to poor performers.

Performance management: Developing modern, credible, and effective employee performance management systems and dealing with poor performers have been long-standing challenges for federal agencies.

Employee engagement: Additional analysis and sharing of promising practices could improve employee engagement and performance.

As we reported in 2012, Congress’s policy calls for federal workers’ pay under the GS system to be aligned with comparable nonfederal workers’ pay. Across-the-board pay adjustments are to be based on private sector salary growth, and locality adjustments are designed to reduce the gap between federal and nonfederal pay in each locality to no more than 5 percent. The President’s Pay Agent is the entity charged with determining the disparities between federal and nonfederal pay in each locality; it measures federal pay based on OPM records that identify GS employees by occupation and grade level, and nonfederal pay based on data from the U.S. Bureau of Labor Statistics (BLS). In 2012, the Pay Agent recommended that the underlying model and methodology for estimating pay gaps be reexamined to ensure that private sector and federal sector pay comparisons are as accurate as possible. As of December 2016, no such reexamination had taken place. The across-the-board and locality pay increases may be made every year and are not linked to performance.
GS employees are also eligible for pay increases and monetary awards that are linked to performance ratings, as determined by agencies’ performance appraisal systems: within-grade increases, ratings-based cash awards, and quality step increases. Of these, within-grade increases are the least strongly linked to performance; ratings-based cash awards and quality step increases are more strongly linked, depending on the rating system the agency uses. The composition of the federal workforce has changed over the past 30 years, with the need for clerical and blue collar roles diminishing and professional, administrative, and technical roles increasing. As a result, today’s federal jobs require more advanced skills at higher grade levels than in years past. Additionally, we have found that federal jobs, on average, require more advanced skills and degrees than private sector jobs. This is because a higher proportion of federal jobs than nonfederal jobs are in skilled occupations such as science, engineering, and program management, while a lower proportion are in occupations such as manufacturing, construction, and service work. The result is that the federal workforce is, on average, more highly educated than the private sector workforce. As we reported in 2014, a key federal human capital management challenge is how best to balance the size and composition of the federal workforce so that it is able to deliver the high-quality services that taxpayers demand within the budgetary realities of what the nation can afford. Recognizing that the federal government’s pay system does not align well with modern compensation principles (where pay decisions are based on the skills, knowledge, and performance of employees as well as the local labor market), Congress has provided various agencies with exemptions from the current system to give them more flexibility in setting pay.
Thus, a long-standing federal human capital management question is how to update the entire federal compensation system to be more market based and performance oriented. This type of system is a critical component of a larger effort to improve organizational performance. Our 2005 work showed that implementing a more market-based and more performance-oriented pay system is both doable and desirable. However, we also found that it is not easy. For one thing, agencies should have effective performance management systems that link individual expectations to organizational results. Moreover, representatives of public, private, and nonprofit organizations, in discussing the successes and challenges they have experienced in designing and implementing their own results-oriented pay systems, told us at the time they had to shift from a culture where compensation is based on position and longevity to one that is performance oriented, affordable, and sustainable. As we have reported in the past, these organizations’ experiences with their own market-based and performance-oriented pay systems provide useful lessons learned that will be important to consider to the extent the federal government moves toward a more results-oriented pay system. Lessons learned identified in our 2005 report include the following:

1. Focus on a set of values and objectives to guide the pay system. Values represent an organization’s beliefs and boundaries, and objectives articulate the strategy to implement the system.

2. Examine the value of employees’ total compensation to remain competitive in the labor market. Organizations consider a mix of base pay plus other monetary incentives, benefits, and deferred compensation, such as retirement pay, as part of a competitive compensation system.

3. Build in safeguards to enhance the transparency and ensure the fairness of pay decisions. Safeguards are the precondition to linking pay systems with employee knowledge, skills, and contributions to results.

4. Devolve decision-making on pay to appropriate levels. When devolving such decision making, overall core processes help ensure reasonable consistency in how the system is implemented.

5. Provide training on leadership, management, and interpersonal skills to facilitate effective communication. Such skills as setting expectations, linking individual performance to organizational results, and giving and receiving feedback need renewed emphasis to make such systems succeed.

6. Build consensus to gain ownership and acceptance for pay reforms. Employee and stakeholder involvement needs to be meaningful and not pro forma.

7. Monitor and refine the implementation of the pay system. While changes are usually inevitable, listening to employee views and using metrics helps identify and correct problems over time.

Our prior work has found that across a range of human capital functions, while in some cases statutory changes may be needed to advance reforms, in many instances improvements are within the control of federal agencies. These improvements include such actions as improving the coordination of hiring specialists and hiring managers on developing recruitment strategies and up-to-date position descriptions in vacancy announcements. Indeed, Congress has already provided agencies with a number of tools and flexibilities to help them build and maintain a high-performing workforce. Going forward, it will be important for agencies to make effective use of those tools and for Congress to hold agencies accountable for doing so. Among other things, our work has shown that the tone starts at the top. Agency leaders and managers should set an example that human capital is important and is directly linked to performance—it is not a transactional function.
As we noted in our 2017 high-risk update, agencies can drive improvements to their high-risk areas—including strategic human capital management—through such steps as:

Sustained leadership commitment, including developing long-term priorities and goals, and providing continuing oversight and accountability;

Ensuring agencies have adequate capacity to address their personnel issues, including collaborating with other agencies and stakeholders as appropriate;

Identifying root causes of problems and developing action plans to address them, including establishing goals and performance measures;

Monitoring actions by, for example, tracking performance measures and progress against goals; and

Demonstrating progress by showing issues are being effectively managed and root causes are being addressed.

Our list of leading human capital management practices may be helpful as well. These practices cover such activities as strategic workforce planning, recruitment and hiring, workforce development, and employee engagement, among others; agencies can use this information to strengthen how they recruit, retain, and develop their employees, and Congress can hold agencies accountable for using it. OPM has taken some important steps as well. For example, in December 2016, OPM finalized revisions to its strategic human capital management regulation that include the new Human Capital Framework. Agencies are to use this framework in 2017 to plan, implement, evaluate, and improve human capital policies and programs. Our recent work on federal hiring, classification, addressing poor performance, and the capacity of federal human resource functions is illustrative of some of the areas in need of attention.
To help ensure agencies have the talent they need to meet their missions, we have found that federal agencies should have a hiring process that is simultaneously applicant friendly, sufficiently flexible to enable agencies to meet their needs, and consistent with statutory requirements, such as hiring on the basis of merit. Key to achieving this is the hiring authority used to bring applicants on board. Congress and the President have created a number of hiring authorities to expedite the hiring process or to achieve certain public policy goals, such as facilitating the entrance of certain groups into the civil service. As we reported in 2016, of the 105 hiring authorities used in fiscal year 2014, agencies relied on just 20 for 91 percent of the 196,226 new appointments made that year. OPM officials said at the time they did not know whether agencies relied on a small number of authorities because agencies were unfamiliar with other authorities, or because they had found other authorities to be less effective. Although OPM tracks such data as agency time-to-hire, we found this information was not used by OPM or agencies to analyze the effectiveness of hiring authorities. As a result, OPM and agencies did not know if authorities were meeting their intended purposes. By analyzing hiring authorities, OPM and agencies could identify opportunities to refine authorities, expand access to specific authorities found to be highly efficient and effective, and eliminate those found to be less effective. We recommended that OPM, working with agencies, strengthen hiring efforts by (1) analyzing the extent to which federal hiring authorities are meeting agencies’ needs, and (2) using this information to explore opportunities to refine, eliminate, or expand authorities as needed, among other recommendations.
OPM concurred with our recommendations and reported it had reviewed hiring authorities related to the entry-level Pathways Program and for hiring seasonal employees. The GS classification system is a mechanism for organizing federal white-collar work—notably for the purpose of determining pay—based on a position’s duties, responsibilities, and difficulty, among other things. A guiding principle of the GS classification system is that employees should earn equal pay for substantially equal work. We and others have found that the work of the federal government has become more highly skilled and specialized than the GS classification system was designed to address when it was created in 1949, when most of the federal workforce was engaged in clerical work. While there is no one right way to design a classification system, in 2014 we identified eight key attributes that are important for a modern, effective classification system. Collectively, these attributes provide a useful framework for considering refinements or reforms to the current system. These key attributes are described in table 1. We concluded in 2014 that the inherent tension between some of these attributes and the values policymakers and stakeholders emphasize could have large implications for pay, the ability to recruit and retain mission-critical employees, and other aspects of personnel management. This is one reason why—despite past proposals—changes to the current system have been few, as finding the optimal mix of attributes that is acceptable to all stakeholders is difficult. In 2014, we recommended that OPM (1) work with stakeholders to examine ways to modernize the classification system, (2) develop a strategy to track and prioritize occupations for review and updates, and (3) develop cost-effective methods to ensure agencies are classifying positions correctly. OPM partially concurred with the first and third recommendations but did not concur with the second.
Instead, OPM officials said they already tracked and prioritized occupations for updates. However, they were unable to provide documentation of their actions. In April 2017, OPM officials said they meet regularly with the interagency classification policy forum to inform classification implementation and had reviewed and canceled 21 occupational series that were minimally used by agencies. In our 2015 report, we noted that federal agencies’ ability to address poor performance has been a long-standing issue. Employees and agency leaders share a perception that more needs to be done to address poor performance, as even a small number of poor performers can affect agencies’ capacity to meet their missions. More generally, without effective performance management, agencies risk losing (or failing to utilize) the skills of top talent. They also may miss the opportunity to observe and correct poor performance. Among other things, we found that effective performance management helps agencies establish a clear “line of sight” between individual performance and organizational success, and that using core competencies helps reinforce organizational objectives. Agencies should also make meaningful distinctions in employee performance levels. However, we found that 99 percent of permanent, non-Senior Executive Service employees in 2013 received a rating at or above fully successful, with around 61 percent rated as “outstanding” or “exceeds fully successful.” Importantly, in 2015 we found that good supervisors are key to the success of any performance management system. Supervisors provide the day-to-day performance management activities that can help sustain and improve the performance of more talented staff and can help marginal performers become better.
As a result, agencies should promote people into supervisory positions because of their supervisory skills (in addition to their technical skills) and ensure that new supervisors receive sufficient training in performance management. Likewise, a cultural shift might be needed among agencies and employees to acknowledge that a rating of “fully successful” is already a high bar that should be valued and rewarded, and that “outstanding” is a difficult level to achieve. Further, in 2015 we found that probationary periods for new employees provide supervisors with an opportunity to evaluate an individual’s performance to determine whether an appointment to the civil service should become final. However, some Chief Human Capital Officers (CHCOs) said supervisors often do not use this time to make decisions about an employee’s performance because they may not know that the probationary period is ending, or they have not had time to observe performance in all critical areas. In our prior work, we recommended that OPM (1) educate agencies on ways to notify supervisors that an individual’s probationary period is ending and that the supervisor needs to make a decision about the individual’s performance, and (2) determine whether there are occupations in which the probationary period should extend beyond 1 year to provide supervisors with sufficient time to assess an individual’s performance. OPM concurred with the first recommendation and partially concurred with the second. In January 2017, OPM issued guidance to agencies on notifying supervisors that a probationary period is ending, but officials said OPM had not taken action on extending the probationary period. In 2014, we found that many agency CHCOs said their offices did not have the capacity to lead strategic human capital management activities such as talent management, workforce planning, and promoting high performance and a results-oriented culture.
Instead, these offices remained focused on transactional human resource activities like benefits and processing personnel actions. As a result, officials said agency decision makers often did not seek out and draw upon the expertise of human capital experts to inform their deliberations. Perhaps further reflecting the varying capabilities of agency human capital offices across government, some CHCOs at the time said that agency leaders did not fully understand the potential of strategic human capital management and had not elevated the role of the human capital office to better support an agency’s operations and mission. The human resources specialist occupation continues to be one of six government-wide, mission-critical skills gap areas identified by OPM. Our recent work on the Veterans Health Administration (VHA) demonstrates how capacity shortfalls in an agency’s personnel office can adversely affect an organization’s mission. Among other things, we found that the recruitment and retention challenges VHA is experiencing with its clinical workforce are due, in part, to attrition among its human resource employees and unmet staffing targets within medical center personnel offices. We concluded that until VHA strengthens its human resource capacity, it will not be positioned to effectively support its mission to serve veterans’ health care needs. We made 12 recommendations to the Department of Veterans Affairs (VA) to improve the human resource capacity and oversight of human resource functions at its medical centers; develop a modern, credible employee performance management system; and establish clear accountability for efforts to improve employee engagement. VA concurred with nine recommendations and partially concurred with three recommendations to improve VHA’s performance management system. Under OPM’s leadership, several steps have been taken as part of a cross-agency group focused on improving the capacity of human resource specialists.
For example, OPM reported that it increased registration in its Human Resources University and validated career path guides for classification, recruitment and hiring policy, and employee relations. As part of our ongoing oversight of OPM’s and agencies’ efforts to close government-wide, mission-critical skills gaps, we will continue to assess the progress being made in improving the human capital infrastructure agencies need to better support their planning and programmatic functions. In conclusion, given the long-term fiscal challenges facing the nation and ongoing operational and accountability issues across government, agencies must identify options to meet their missions with fewer resources. The federal compensation system should allow the government to cost-effectively attract, motivate, and retain the high-performing, agile workforce necessary to meet those missions. At the same time, our work has shown that agencies already have a number of tools and flexibilities available to them that can significantly improve executive branch personnel management, and do so sooner rather than later. Going forward, it will be important to hold agencies accountable for fully leveraging those resources. Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this statement, please contact Robert Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Chelsa Gurkin, Assistant Director; Dewi Djunaidy, Analyst-in-Charge; Ann Czapiewski; Karin Fangman; Krista Loose; Susan Sato; Cynthia Saunders; and Stewart W. Small. This is a work of the U.S.
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A careful consideration of federal pay is an essential part of fiscal stewardship and is necessary to support the recruitment and retention of a talented, agile, and high-performing federal workforce. High-performing organizations have found that the life cycle of human capital management activities—including workforce planning, recruitment, on-boarding, compensation, engagement, succession planning, and retirement programs—needs to be aligned for the cost-effective achievement of an organization's mission. However, despite some improvements, strategic human capital management—and more specifically, skills gaps in mission-critical occupations—continues to be a GAO high-risk area. This testimony is based on a body of GAO work primarily issued between June 2012 and March 2017. It focuses on (1) lessons learned in creating a more market-driven, results-oriented approach to federal pay, and (2) opportunities, in addition to pay and benefits, that OPM and agencies could use to be more competitive in the labor market and address skills gaps. GAO's prior work has shown that implementing a market-based and more performance-oriented federal pay system is both doable and desirable, and should be part of a broader strategy of change management and performance improvement initiatives. In 2005, GAO identified the following key themes that highlight the leadership and management strategies high-performing organizations collectively considered in designing and managing a pay system that is performance oriented, affordable, and sustainable. Specifically, they:

1. Focus on a set of values and objectives to guide the pay system.

2. Examine the value of employees' total compensation to remain competitive in the labor market.

3. Build in safeguards to enhance the transparency and ensure the fairness of pay decisions.

4. Devolve decision-making on pay to appropriate levels.

5. Provide clear and consistent communication so that employees at all levels can understand how compensation reforms are implemented.

6. Build consensus to gain ownership and acceptance for pay reforms.

7. Monitor and refine the implementation of the pay system.

While the federal compensation system may need to be reexamined, Congress has already provided agencies with tools and flexibilities to build and maintain a high-performing workforce. They include, for example:

Hiring process: GAO reported in 2016 that the Office of Personnel Management (OPM) and selected agencies had not evaluated the effectiveness of hiring authorities. By evaluating these authorities (over 100 were used in 2014), OPM and agencies could identify ways to expand access to those found to be more effective and eliminate those found to be less effective.

General Schedule (GS) classification system: The work of the federal government has become more highly skilled and specialized than the GS classification system was designed to address at its inception in 1949. OPM and stakeholders should examine ways to make the classification system consistent with the attributes GAO has identified for a modern, effective classification system, such as internal and external equity.

Performance management: Credible and effective performance management systems are a strategic tool for achieving organizational results. These systems should emphasize a “line of sight” between individual performance and organizational success and use core competencies to reinforce organizational objectives, among other things.

Human resources capacity: The human resources specialist occupation is a mission-critical skills gap area. Chief Human Capital Officers have reported that human resources specialists do not have the skills to lead strategic human capital management activities. Strengthening this capacity could help agencies better meet their missions.
Over the years, GAO has made recommendations to agencies and OPM to improve their strategic human capital management efforts. OPM and agencies generally concurred. This testimony discusses actions taken to implement key recommendations to improve federal hiring and classification.
DHS Acquisition Management Directive 102-01 (MD 102) and an accompanying instruction manual establish the department’s policies and processes for managing major acquisition programs. While DHS has had an acquisition management policy in place since October 2004, the department issued the initial version of MD 102 in 2008. Leaders in the department are responsible for acquisition management functions, including managing the resources needed to fund major programs. DHS’s Chief Acquisition Officer—currently the Under Secretary for Management (USM)—is responsible for the management and oversight of the department’s acquisition policies and procedures. The Acquisition Decision Authority is responsible for approving the movement of programs through the acquisition life cycle at key milestone events. The USM or Deputy Secretary serves as the decision authority for programs with life cycle cost estimates of $1 billion or greater, while the cognizant component acquisition executive may serve as the decision authority for a program with a lower cost estimate. The DHS Acquisition Review Board (ARB) supports the Acquisition Decision Authority by reviewing major acquisition programs for proper management, oversight, accountability, and alignment with the department’s strategic functions at key acquisition milestones and other meetings as needed. The ARB is supported by the Office of Program Accountability and Risk Management (PARM), which reports to the USM and is responsible for DHS’s overall acquisition governance process. In March 2012, PARM issued its first Quarterly Program Accountability Report (QPAR), which provided an independent evaluation of major programs’ health and risks. Since that time, PARM has issued two additional QPARs, most recently in July 2013, and plans to issue a fourth by the end of September 2013.
PARM also prepares the Comprehensive Acquisition Status Reports, which are to be submitted to the appropriations committees with the President’s budget proposal and updated quarterly. The Office of Program Analysis and Evaluation (PA&E), within the Office of the Chief Financial Officer, is responsible for advising the USM, among others, on resource allocation issues. PA&E also oversees the development of the Future Years Homeland Security Program (FYHSP). The FYHSP is DHS’s 5-year funding plan for programs approved by the Secretary that are to support the department’s strategic plan. DHS acquisition policy reflects many key program management practices intended to mitigate the risks of cost growth and schedule slips. However, we previously found that the department did not implement the policy consistently. Officials explained that DHS’s culture emphasized the need to rapidly execute missions more than sound acquisition management practices, and we found that senior leaders did not bring to bear the critical knowledge needed to accurately track program performance. Most notably, we found that most programs lacked approved acquisition program baselines, which are critical management tools that establish how systems will perform, when they will be delivered, and what they will cost. We also reported that most of the department’s major programs were at risk of cost growth and schedule slips as a result. In our past work examining DOD weapon acquisition issues and best practices for product development, we have found that leading commercial firms pursue an acquisition approach that is anchored in knowledge, whereby high levels of product knowledge are demonstrated by critical points in the acquisition process. While DOD’s major acquisitions have unique aspects, our large body of work in this area has established knowledge-based principles that can be applied to government agencies and can lead to more effective use of taxpayer dollars. 
A knowledge-based approach to capability development allows developers to be reasonably certain, at critical points in the acquisition life cycle, that their products are likely to meet established cost, schedule, and performance objectives. This knowledge provides them with information needed to make sound investment decisions. Over the past several years, our work has emphasized the importance of obtaining key knowledge at critical points in major system acquisitions and, based on this work, we have identified eight key practice areas for program management. These key practice areas are summarized in table 1, along with our assessment of DHS’s acquisition policy. As indicated in table 1, DHS acquisition policy establishes several key program-management practices through document requirements. MD 102 requires that major acquisition programs provide the ARB documents demonstrating the critical knowledge needed to support effective decision making before progressing through the acquisition life cycle. Figure 1 identifies acquisition documents that must be approved at the department level and their corresponding key practice areas. DHS acquisition policy has required these documents since November 2008, but in September 2012, we reported that the department generally had not implemented this policy as intended, and had not adhered to key program management practices. For example, we reported that DHS had only approved 4 of 66 major programs’ required documents in accordance with the policy. See figure 2. In September 2012, we reported that DHS leadership had, since 2008, formally reviewed 49 of the 71 major programs for which officials had responded to our survey. Of those 49 programs, DHS permitted 43 programs to proceed with acquisition activities without verifying the programs had developed the knowledge required under MD 102. Additionally, we reported that most of DHS’s major acquisition programs lacked approved acquisition program baselines, as required. 
These baselines are critical tools for managing acquisition programs, as they are the agreement between program-, component-, and department-level officials, establishing how systems will perform, when they will be delivered, and what they will cost. Officials from half of the eight components’ acquisition offices we spoke with, as well as PARM officials, noted that DHS’s culture had emphasized the need to rapidly execute missions more than sound acquisition management practices. PARM officials explained that, in certain instances, programs were not capable of documenting knowledge, while in others, PARM lacked the capacity to validate that the documented knowledge was adequate. As a result, we reported that senior leaders lacked the critical knowledge needed to accurately track program performance, and that most of the department’s major programs were at risk of cost growth and schedule slips. We also reported that DHS’s lack of reliable performance data not only hindered its internal acquisition management efforts, but also limited congressional oversight. We made five recommendations to the Secretary of Homeland Security at that time, identifying specific actions DHS should take to mitigate the risk of poor acquisition outcomes and strengthen the department’s investment management activities. DHS concurred with all five recommendations, and is taking steps to address them, most notably through policy updates. Since that time, we have continued to assess DHS’s acquisition management activities and the reliability of the department’s performance data. We currently have a review underway for this subcommittee assessing the extent to which DHS is executing effective executive oversight and governance (including the quality of the data used) of a major effort to modernize an information technology system, TECS.
TECS is a major border enforcement system used for preventing terrorism and providing border security and law enforcement information about people who are inadmissible or may pose a threat to the security of the United States. We are (1) determining the status of the modernization effort, including what has been deployed and implemented to date, as well as the extent to which the modernization is meeting its cost and schedule commitments, including the quality of schedule estimates; and (2) assessing requirements management and risk management practices. We plan to issue our report in early November. According to DHS officials, its efforts to implement the department’s acquisition policy were complicated by the large number of programs initiated before the department was created, including 11 programs that PARM officials told us in 2012 had been fielded and were in the sustainment phase when MD 102 was signed. As part of our ongoing work, we found that, in May 2013, the USM waived the acquisition documentation requirements for 42 major acquisition programs that he identified as having been already fielded for operational use when MD 102 was issued in 2008. In a memo implementing the waiver, the USM explained that it would be cost prohibitive and inefficient to recreate documentation for previous acquisition phases. However, he stated that the programs will continue to be monitored, and that they must comply with MD 102 if any action is taken that materially impacts the scope of the current program, such as a major modernization or new acquisition. We plan to obtain more information on this decision and its effect on the department’s management of its major acquisitions. In September 2012, we reported that most of DHS’s major acquisition programs cost more than expected, took longer to deploy than planned, or delivered less capability than promised. We reported that these outcomes were largely the result of DHS’s lack of adherence to key knowledge-based program management practices.
As part of our ongoing work, we analyzed a recent PARM assessment that suggests many of the department’s major acquisition programs are continuing to struggle. In its July 2013 quarterly program assessment, PARM reported that it had assessed 112 major acquisition programs. PARM reported that 37 percent of the programs experienced no cost variance at the end of fiscal year 2012, but it also reported that a large percentage of the programs were experiencing cost or schedule variances at that time. See table 2. However, as we reported in September 2012, DHS acquisition programs generally did not have the reliable cost estimates and realistic schedules needed to accurately assess program performance. We will continue to track DHS’s efforts to improve the quality of its program assessments moving forward. We have previously reported that cost growth and schedule slips at the individual program level complicated DHS’s efforts to manage its investment portfolio as a whole. When programs encountered setbacks, the department often redirected funding to troubled programs at the expense of others, which in turn were more likely to struggle. DHS’s Chief Financial Officer recently issued a memo stating that DHS faced a 30 percent gap between funding requirements for major acquisition programs and available resources. DHS has efforts underway to develop a more disciplined and strategic portfolio management approach, but the department has not yet developed key portfolio management policies and processes that could help the department address its affordability issues, and DHS’s primary portfolio management initiative may not be fully implemented for several years. In September 2012, we noted that DHS’s acquisition portfolio may not be affordable. 
That is, the department may have to pay more than expected for less capability than promised, and this could ultimately hinder DHS’s day-to-day operations. DHS’s Chief Financial Officer issued an internal memo in December 2012, shortly after our report was issued, stating that the aggregate 5-year funding requirements for major acquisitions would likely exceed available resources by approximately 30 percent. This acknowledgment was a positive step toward addressing the department’s challenges, in that it clearly identified the need to improve the affordability of the department’s major acquisition portfolio. Additionally, the Chief Financial Officer has required component senior financial officers to certify that they have reviewed and validated all current-, prior-, and future-year funding information presented in ARB materials, and ensure it is consistent with the FYHSP. Further, through our ongoing work, PA&E officials told us that the magnitude of the actual funding gap may be even greater than suggested because only a small portion of the cost estimates that informed the Chief Financial Officer’s analysis had been approved at the department level, and expected costs may increase as DHS improves the quality of the estimates. This is a concern we share. While holding components accountable is important, without validated and department-approved documents—such as acquisition program baselines and life cycle cost estimates—efforts to fully understand and address the department’s overall funding gap will be hindered. In September 2012, we reported that DHS largely made investment decisions on a program-by-program and component-by-component basis. DHS did not have a process to systematically prioritize its major investments to ensure that the department’s acquisition portfolio was consistent with anticipated resource constraints.
In our work at DOD, we have found this approach hinders efforts to achieve a balanced mix of programs that are affordable and feasible and that provide the greatest return on investment. In our past work focused on improving weapon system acquisitions, we found that successful commercial companies use a disciplined and integrated approach to prioritize needs and allocate resources. As a result, they can avoid pursuing more projects than their resources can support, and better optimize the return on their investment. This approach, known as portfolio management, requires companies to view each of their investments as contributing to a collective whole, rather than as independent and unrelated. With an enterprise perspective, companies can effectively (1) identify and prioritize opportunities, and (2) allocate available resources to support the highest priority—or most promising—investment opportunities. Over the past several years, we have examined the practices that private and public sector entities use to achieve a balanced mix of new projects, and based on this work, we have identified key practice areas for portfolio management. One I would like to highlight today is that investments should be ranked and selected using a disciplined process to assess the costs, benefits, and risks of alternative products to ensure transparency and comparability across alternatives. In this regard, DHS established the Joint Requirements Council (JRC) in 2003, to identify crosscutting opportunities and common requirements among DHS components and help determine how DHS should use its resources. But the JRC stopped meeting in 2006 after the chair was assigned to other duties within the department. In 2008, we recommended that it be reinstated, or that DHS establish another joint requirements oversight board, and DHS officials recognized that strengthening the JRC was a top priority. 
Through our ongoing work, we have identified that DHS recently piloted a Capabilities and Requirements Council (CRC) to serve in a role similar to that of the JRC. The CRC began reviewing a portfolio of cyber capabilities in the summer of 2013. The pilot is intended to inform the department’s fiscal year 2015 budget request; therefore, it is too soon to assess the outcomes of this new oversight body. It is also unclear at this time how DHS will sustain the CRC over time. In addition to the private and public sector practices discussed above, our prior work at DOD has identified an oversight body whose role is similar to the CRC’s expected function. The Joint Requirements Oversight Council (JROC) has a number of statutory responsibilities related to the identification, validation, and prioritization of joint military requirements. This body, which has been required by law since 1997, and its supporting organizations review requirements documents several times per year, prior to major defense acquisition programs’ key milestones. Through these reviews, proposed acquisition programs are scrutinized prior to their initiation and before decisions are made to begin production. The JROC also takes measures to help ensure the programs are affordable. In 2011, we reported that the JROC required the military services to show that their proposed programs were fully funded before it validated requirements for five of the seven proposed programs we reviewed. The two other proposed programs were funded at more than 97 and 99 percent, respectively. This full funding requirement is similar to the funding certification requirement DHS’s CFO established in December 2012.
While some DOD acquisition programs continue to experience cost growth and schedule delays, as identified in our annual report on weapon systems acquisitions, the department does have in place mechanisms that DHS could adopt to improve the affordability of its acquisition portfolio, and put its acquisition programs in a better position to achieve successful outcomes. In September 2012, we reported that the CRC is one of several new councils and offices that DHS would establish as part of its Integrated Investment Life Cycle Model (IILCM), which is intended to improve portfolio management at DHS through the identification of priorities and capability gaps. This model, which the department proposed in January 2011, would provide a framework for information to flow between councils and offices responsible for strategic direction, requirements development, resource allocation, and program governance. DHS explained that the IILCM would ensure that mission needs drive investment decisions. While the IILCM, as envisioned, could improve DHS management decisions by better linking missions to acquisition outcomes, our ongoing work indicates that its full implementation may be several years away. From January 2011 to June 2012, the schedule for initiating IILCM operations slipped by a year, and in May 2013, a DHS official responsible for the IILCM told us he was unsure when the IILCM would be fully operational. We also found that some component acquisition officials are not aware of how the IILCM would apply to their own acquisition portfolios. Some of the officials we interviewed told us that DHS leadership needs to conduct more outreach and training about the IILCM and how it is expected to work, and a DHS headquarters official told us that the department is in the process of implementing an initial department-wide IILCM communications strategy. We will continue to assess the department’s progress in implementing what it views as a very important management model. 
Chairman Duncan, Ranking Member Barber, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Michele Mackin at (202) 512-4841 or MackinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Katherine Trimble (Assistant Director), Nate Tranquilli, Steve Marchesani, Mara McMillen, and Sylvia Schatz. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has highlighted DHS acquisition management issues on its high-risk list, and over the past several years, GAO's work has identified significant shortcomings in the department's ability to manage an expanding portfolio of major acquisitions. It is important for DHS to address these shortcomings because the department invests extensively in acquisition programs to help it execute its many critical missions. DHS is acquiring systems to help secure the border, increase marine safety, enhance cyber security, and execute a wide variety of other operations. In 2011, DHS reported to Congress that it planned to ultimately invest $167 billion in its major acquisition programs. In fiscal year 2013 alone, DHS reported it was investing more than $9.6 billion. This statement discusses (1) DHS's acquisition policy and how it has been implemented; and (2) DHS's mechanisms for managing emerging affordability issues. The statement is based on GAO's prior work on DHS acquisition management and leading commercial companies' knowledge-based approach to managing their large investments. It also reflects observations from ongoing work for this subcommittee. For that work, GAO is reviewing key documentation, and interviewing headquarters and component level acquisition and financial management officials. GAO has previously established that the Department of Homeland Security's (DHS) acquisition policy reflects many sound program management practices intended to mitigate the risks of cost growth and schedule slips. The policy largely reflects the knowledge-based approach used by leading commercial firms, which do not pursue major investments without demonstrating, at critical milestones, that their products are likely to meet cost, schedule, and performance objectives. DHS policy requires that important acquisition documents be in place and approved before programs are executed. 
For example, one key document is an acquisition program baseline, which outlines a program's expected cost, schedule, and the capabilities to be delivered to the end user. However, in September 2012, GAO found that the department did not implement the policy consistently, and that only 4 of 66 programs had all of the required documents approved in accordance with DHS's policy. GAO made five recommendations, which DHS concurred with, identifying actions DHS should take to mitigate the risk of poor acquisition outcomes and strengthen management activities. Further, GAO reported that the lack of reliable performance data hindered DHS and congressional oversight of the department's major programs. Officials explained that DHS's culture had emphasized the need to rapidly execute missions more than sound acquisition management practices. GAO also reported that most of the department's major programs cost more than expected, took longer to deploy than planned, or delivered less capability than promised. DHS has taken steps to improve acquisition management, but as part of its ongoing work, GAO found that DHS recently waived documentation requirements for 42 programs fielded for operational use since 2008. DHS explained it would be cost prohibitive and inefficient to recreate documentation for previous acquisition phases. GAO plans to obtain more information on this decision and its effect on the management of DHS's major acquisitions. DHS's July 2013 status assessment indicated that, as of the end of fiscal year 2012, many major programs still face cost and schedule shortfalls. DHS expects to provide another update in the near future. In December 2012, DHS's Chief Financial Officer reported that the department faced a 30 percent gap between expected funding requirements for major acquisition programs and available resources. 
DHS has efforts underway to develop a more disciplined and strategic approach to managing its portfolio of major investments, but the department has not yet developed certain policies and processes that could help address its affordability issues. In September 2012, GAO reported that DHS largely made investment decisions on a program-by-program and component-by-component basis and did not have a process to systematically prioritize its major investments. In GAO's work at the Department of Defense, it has found this approach hinders efforts to achieve a balanced mix of programs that are affordable and feasible and that provide the greatest return on investment. DHS's proposed Integrated Investment Life Cycle Model (IILCM) is intended to improve portfolio management by ensuring mission needs drive investment decisions. For example, a high-level oversight body would identify potential trade-offs among DHS's component agencies. GAO has recommended such an oversight body for several years. Full implementation of the IILCM may be several years away. GAO will continue to assess the department's progress in its ongoing work. GAO is not making any new recommendations in this statement. It has made numerous recommendations in its prior work to strengthen acquisition management, and DHS is taking steps to address them.
DOD began the F-35 acquisition program in October 2001 without adequate knowledge about the aircraft’s critical technologies or design. In addition, DOD’s acquisition strategy called for high levels of concurrency between development, testing, and production. In our prior work, we have identified the lack of knowledge and high levels of concurrency as major drivers in the significant cost and schedule growth as well as performance shortfalls that the program has experienced since 2001. The program has been restructured three times since it began: first in December 2003, again in March 2007, and most recently in March 2012. The most recent restructuring was initiated in early 2010 when the program’s unit cost estimates exceeded critical thresholds established by statute—a condition known as a Nunn-McCurdy breach. DOD subsequently certified to Congress in June 2010 that the program was essential to national security and needed to continue. DOD then began efforts to significantly restructure the program and establish a new acquisition program baseline. These restructuring efforts continued through 2011 and into 2012, during which time the department increased the program’s cost estimates, extended its testing and delivery schedules, and reduced near-term aircraft procurement quantities by deferring the procurement of 450 aircraft into the future—total procurement quantities did not change. Figure 1 shows how planned quantities in the near-term have steadily declined over time. The new F-35 acquisition program baseline was finalized in March 2012, and since that time costs have remained relatively stable. Table 1 shows the significant cost, quantity, and schedule changes from the initial program baseline and the relative stability since the new baseline was established. In March 2012, when the new acquisition program baseline was finalized, DOD had not yet identified new initial operating capability dates for the military services. 
The following year, DOD issued a memorandum stating that the Marine Corps and Air Force were planning to field initial operating capabilities in 2015 and 2016 respectively, and that the Navy planned to field its initial operating capability in 2018, which represented a delay of 5 to 6 years since the program’s initial baseline. DOD is currently conducting developmental flight testing to verify that the F-35 system’s design works as intended and can reliably provide the capabilities needed for the services to field their respective initial operational capabilities. The program’s flight testing is separated into two key areas referred to as mission systems and flight sciences. Mission systems testing is done to verify that the software and systems that provide warfighting capabilities function properly and meet requirements, while flight science testing is done to verify the aircraft’s basic flying capabilities. For the F-35 program, DOD is developing and fielding mission systems capabilities in software blocks: (1) Block 1, (2) Block 2A, (3) Block 2B, (4) Block 3i, and (5) Block 3F. Each subsequent block builds on the capabilities of the preceding blocks. Blocks 1 and 2A are essentially complete. The program is now focused on completing Block 2B testing to support Marine Corps initial operating capability, but some testing of specific Block 3i and Block 3F capabilities is also being conducted. Blocks 2B and 3i will provide initial warfighting capabilities while Block 3F is expected to provide the full suite of warfighting capabilities. Figure 2 identifies the sequence of software blocks and capabilities expected to be delivered in each. As developmental flight testing continues, DOD is concurrently purchasing and fielding aircraft. The F-35 airframe and engine are managed as one program, but they are manufactured in separate facilities by different contractors. 
The airframe is being manufactured by Lockheed Martin, the prime contractor, in Fort Worth, Texas, while the engine—which is designated the F135—is manufactured by Pratt & Whitney in Middletown, Connecticut. The engines are purchased by the government directly from Pratt & Whitney and delivered as government furnished equipment to Lockheed Martin for integration into the airframes during production. As a result, engine development and testing activities are managed by Pratt & Whitney and not Lockheed Martin. The F-35 program continued to experience development and testing discoveries over the past year, largely due to a structural failure on the F-35B durability test aircraft, an engine failure, and more mission system test growth than expected. Together, these factors led to adjustments in the program’s test schedule. Test resources and some aircraft capabilities were reprioritized, and test points were deferred or eliminated. While these actions mitigated some of the schedule risk, ultimately the completion of key developmental test activities had to be delayed. Decisions were also made to restructure an early operational test event that likely would have reduced operational risk for the Marine Corps. The event will now be conducted over time and will not be completed as originally scheduled. Instability in the development program is likely to continue with more complex and demanding development testing still to go. As the program continues to discover problems in development and testing, it also faces a significant challenge to improve the reliability of the engine. Program data show that the reliability of the engine is very poor (less than half of where it should be) and has limited the program’s progress toward its overall reliability targets. The engine contractor, Pratt & Whitney, has identified a number of design changes that it believes will help improve engine reliability, but some of those changes have not yet been implemented.
With complex and challenging developmental testing remaining and engine reliability challenges ahead, DOD still plans to increase procurement rates by nearly threefold over the next 5 years. This highly concurrent strategy has already proven costly: according to program reports, the program could incur $1.7 billion in costs associated with retrofits to already delivered aircraft, and this cost will likely increase as more aircraft are purchased and delivered before development ends. A significant structural failure on the F-35B durability test aircraft, an engine failure, and a higher than expected amount of test point growth—largely to address software rework—over the past year delayed key test activities and forced unexpected adjustments to the program’s development schedule and test plans. Each of these three factors is discussed in more detail below. At around 9,000 hours of durability testing—about half of the 16,000 hours required—a major airframe segment, known as a bulkhead, on the F-35B durability test aircraft severed, and one other bulkhead was fractured as a result. Durability testing on the F-35B was halted for more than a year as program officials conducted a root cause analysis and Lockheed Martin worked to repair the durability test aircraft. The root cause analysis determined that the bulkhead severed because its design did not take into account appropriate factors in the manufacturing processes, resulting in a bulkhead that had less durability life than expected. According to officials, the fracture in the other bulkhead was caused by the added weight it had to bear after the first bulkhead severed. Lockheed Martin is currently redesigning the bulkheads to strengthen the aluminum and plans to incorporate the updated designs into the ninth low-rate initial production lot. A total of 50 aircraft will have to be modified using additional structural reinforcement techniques.
According to program and contractor officials, because the incident occurred halfway through durability testing, retrofits will not be required until the aircraft reach about half of their expected service life, or about 10 years. Officials stated that the total costs of related modifications have yet to be determined. In June 2014, an F-35A engine caught fire during take-off. As a result, the entire F-35 fleet was grounded for nearly one month and then placed under flight restrictions for several additional months. A root cause analysis conducted by Pratt & Whitney determined that excessive heat caused by rubbing between engine fan components ultimately led to parts of the engine breaking free at a high rate of speed, resulting in a fire. The program could not execute any planned flight test points while the fleet was grounded. After flying resumed there were still hundreds of planned test points that could not be executed because the fleet was restricted from flying at the speeds and conducting the maneuvers necessary to execute those points. Despite these obstacles, the program was able to keep its test aircraft productive and accomplished some test points that had been planned to be done in the future. Follow-up inspections conducted by the contractor identified 22 engines with evidence of overheating. Officials have identified a short-term fix that they believe will allow the fleet to return to normal flight test operations. As of January 31, 2015, 18 of 22 engines had received the short-term fix and were cleared to return to normal flight operations. Pratt & Whitney has identified several potential long-term fixes but no final determination has been made.
While the program’s test plan for 2014 reflected an allowance of 45 percent growth in mission system software test points for the year—largely to address software rework that might be needed—officials from the Office of the Secretary of Defense noted that the program experienced around 90 percent growth, or nearly twice the planned amount. As of January 2015, 56 percent of the Block 2B functionality had been verified by program officials, which was about 10 percent short of its goal. According to DOD and contractor officials, the higher than anticipated amount of rework was largely due to the fact that portions of the Block 2B software did not function as expected during flight testing. To address these deficiencies, changes were made to the software and this extended the Block 2B test schedule by approximately 3 months. As of January 2015, all of the updated software was in flight testing. DOD continued to address other key technical risks that we have highlighted in the past including the Helmet Mounted Display, Arresting Hook System, and the Autonomic Logistics Information System (ALIS). A new helmet design was developed and integrated that includes previously developed updates and addresses shortfalls in night vision capability. Test pilots we spoke with noted that while testing of the new helmet design has just begun, some improvements over the previous design are evident, but more testing is needed. A redesigned Arresting Hook System was also integrated on the aircraft. While sea trial testing of the redesigned system was slightly delayed, the testing took place in November 2014 and the system performed very well with a 100 percent arrestment rate. Lastly, program officials began testing a more capable version of ALIS in September 2014 and expect to begin testing a deployable version in February 2015. Although DOD plans to release the deployable version in time for Marine Corps initial operational capability, it faces tight timeframes.
A portion of the system’s capabilities, Prognostics Health Management downlink, has been deferred to follow-on development. In response to the challenges faced in 2014, program officials reprioritized test resources and aircraft capabilities, deferred or eliminated test points, and ultimately delayed completion of some developmental test activities. Personnel and facilities that had been dedicated to developing and testing Block 3i and Block 3F—software blocks required by the Air Force and the Navy to field initial operational capabilities in 2016 and 2018 respectively—were reassigned to focus on delivering Block 2B to support the Marine Corps’ initial operational capability in 2015. In addition, program officials eliminated over 1,500 test points from the overall Block 2B developmental test plan, and deferred some Block 2B capabilities. According to program officials, they chose to delay some test points that had been scheduled to be accomplished in 2014, and accomplish other test points that they had scheduled to be done in the future. In addition, program officials, in conjunction with officials from the Director, Operational Test and Evaluation, restructured a Block 2B early operational test event that had been planned for 2015. The restructured event will now be conducted over time as resources allow but will not be completed as originally scheduled. While these changes allowed the program to accomplish nearly the same number of test points it had planned for the year, officials stated that not all of the specific test activities scheduled were completed. In the end, the completion of Block 2B developmental testing is 3 months behind schedule, Block 3i testing is about 3 months behind schedule, and Block 3F could be as much as 6 months behind schedule. The program has a long way to go to achieve its engine reliability goals. 
Reliability is a function of how well a system design performs over a specified period of time without failure, degradation, or need of repair. During system acquisition, reliability growth should occur over time through a process of testing, analyzing, and fixing deficiencies through design changes or manufacturing process improvements. Once fielded, there are limited opportunities to improve a system’s reliability without additional cost increases and schedule delays. Currently, the F-35 engine’s reliability is very poor and overall aircraft reliability growth has been limited. Improving engine reliability will likely require additional design changes and retrofits. The program uses various measures to track and improve reliability, including the mean flying hours between failures (design controlled). Data provided by Pratt & Whitney indicate that the mean flight hours between failures for the F-35A engine is about 21 percent of where the engine was expected to be at this point in the program. The F-35B engine is at about 52 percent of where the engine was expected to be at this point. This means that the engine is failing at a much greater rate and requiring more maintenance than expected. Pratt & Whitney has identified a number of design changes that officials believe will improve the engine’s reliability and is incorporating some of those changes into the engine design and production line, as well as retrofitting them onto already built aircraft; however, other design changes that Pratt & Whitney officials believe are needed, such as changes to engine hoses and sensors, are not currently funded. Figure 3 shows the trend in the engine’s mean flight hours between failures (design controlled). Poor engine reliability has limited the F-35’s overall reliability progress. The overall reliability of the aircraft, which includes engine reliability data, has been improving over the past year.
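The mean-flying-hours-between-failures metric described above is simply flight hours divided by failure count. The sketch below illustrates the calculation; the hour and failure counts and the goal value are hypothetical, chosen only so the result matches the reported 21-percent-of-goal figure for the F-35A engine.

```python
# Mean flying hours between failures (MFHBF): total flight hours divided by
# the number of design-controlled failures over the same period. All counts
# below are hypothetical; only the 21-percent-of-goal result mirrors the
# figure reported for the F-35A engine in this section.

def mfhbf(flight_hours: float, failures: int) -> float:
    """Mean flying hours between failures: total hours divided by failure count."""
    return flight_hours / failures

def percent_of_goal(observed: float, goal: float) -> float:
    return 100.0 * observed / goal

# Hypothetical example: 5,000 fleet flight hours with 50 design-controlled
# failures, measured against a hypothetical growth-curve goal of 476 hours.
observed = mfhbf(5_000, 50)   # 100 hours between failures
print(f"{percent_of_goal(observed, 476):.0f}% of goal")
```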
Contractor officials attribute the improvements in reliability to having an increasing number of aircraft in flight operations that have received design changes to address previously identified problems. For example, design changes in the way certain metal plates—known as nut plates—are bonded to the aircraft, and changes to fix problems with contamination of the On-board Oxygen Generating System, have been incorporated into 62 aircraft that were delivered and began flying throughout 2013 and 2014. While overall reliability has increased, engine reliability over the last year has remained well below expected levels. Improving F-35 engine reliability to achieve established goals will likely require more time and resources than originally planned. In addition, in September 2014, we reported problems with F-35 software reliability and maintainability. Specifically, we reported that the program continues to experience both hardware and software reliability issues, but that DOD had no processes or metrics that provided sufficient insight into how software reliability and maintainability contribute to overall aircraft reliability. We recommended that DOD develop a software reliability and maintainability assessment process with metrics, and DOD concurred with this recommendation. While DOD has taken steps over the past few years to reduce concurrency, the program’s strategy still contains a noteworthy overlap between the completion of flight testing and the increase in aircraft procurement rates. With about 2 years and 40 percent of the developmental test program remaining and significant engine reliability growth needed, DOD plans to continue increasing procurement rates. Over the next 5 years, procurement will increase from 38 aircraft per year to 90 aircraft per year, and by the time developmental testing is finished—currently expected to occur in 2017—DOD expects to have purchased a cumulative total of 340 aircraft.
During this time, there are plans to conduct testing to prove that the F-35 can provide the full warfighting capabilities—Block 3F—needed to perform in more demanding and stressing environments. In addition, DOD plans to complete operational testing in early 2019 and at that time will have procured 518 aircraft, or 21 percent of its total planned procurement quantities. At the same time, efforts will be ongoing to improve F-35 engine reliability. As of June 2014, DOD estimated that about $1.7 billion in funding was needed to rework and retrofit aircraft with design changes needed as a result of test discoveries. This concurrency cost estimate does not include any costs related to the most recent failures. According to DOD officials, the estimate takes into account some unexpected costs, and they believe that the estimate will not exceed $1.7 billion. However, with more complex and demanding testing ahead and engine reliability improvements needed, it is almost certain that the program will encounter more discoveries. Depending on the nature and significance of the discoveries, the program may need additional time and money, beyond the current $1.7 billion estimate, to incorporate design changes and retrofit aircraft at the same time that it increases procurement. As of December 2014, the program office estimated that the total acquisition cost of the F-35 will be $391.1 billion, or $7.4 billion less than DOD reported in December 2013. Our analysis indicates that the program will require an average of $12.4 billion per year, which represents around one-quarter of DOD’s annual funding for major defense acquisition programs over the next 5 years. From fiscal years 2015 to 2019, DOD plans to increase annual development and procurement funding for the F-35 from around $8 billion to around $12 billion, an investment of more than $54 billion over that 5-year period, while competing with other large programs for limited acquisition resources.
This funding reflects the U.S. military services’ plans to significantly increase annual aircraft procurement buys from 38 in 2015 to 90 in 2019. International partners will also increase procurement buys during this time, and the combined purchases will peak at 179 aircraft in 2021, with the United States purchasing 100 aircraft and the international partners purchasing an additional 79 aircraft. DOD projects that the program’s acquisition funding needs will increase to around $14 billion in 2022. Funding needs will remain between $14 billion and $15 billion for nearly a decade and peak at $15.1 billion in 2029 (see figure 4). Given resource limitations and the funding needs of other major acquisition programs such as the KC-46A tanker, the DDG-51 Class Destroyer, the Ohio Class submarine replacement, and a long-range strike bomber, in addition to the high estimated costs of sustaining the fleet over the next several years, we believe funding of this magnitude will pose significant affordability challenges. Since the 2012 re-baselining, DOD has made changes to its F-35 procurement plans on an annual basis. In 2013, DOD reduced the number of aircraft that it planned to purchase between 2015 and 2019 by 37 aircraft and extended the procurement timeline by one year. In 2014, DOD deferred the purchase of 4 more aircraft over that same timeframe. DOD officials attribute this decision to affordability concerns due to budget constraints, among other factors. Although this action may reduce near-term funding requirements as well as concurrency risks, it will likely increase the average unit cost of the aircraft purchased over that time and may increase funding liability in the future. DOD policy requires affordability analyses to inform long-term investment decisions.
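The mechanism by which deferring aircraft raises average unit cost can be sketched with simple arithmetic: each production lot carries fixed costs (tooling, overhead, sustaining engineering) that are spread over however many aircraft are bought. All dollar figures and quantities below are hypothetical, chosen only to illustrate the effect; they are not actual F-35 lot data.

```python
# Why deferring aircraft tends to raise average unit cost: fixed lot costs
# are spread over fewer aircraft. All numbers are hypothetical and purely
# illustrative of the mechanism, not program data.

def average_unit_cost(fixed_cost: float, variable_cost_per_jet: float, quantity: int) -> float:
    """Average cost per aircraft: fixed lot cost spread over the buy, plus recurring cost."""
    return fixed_cost / quantity + variable_cost_per_jet

fixed = 2_000.0     # hypothetical fixed lot cost, $ millions
variable = 100.0    # hypothetical recurring cost per aircraft, $ millions

planned = average_unit_cost(fixed, variable, 44)    # hypothetical planned lot quantity
deferred = average_unit_cost(fixed, variable, 40)   # same lot after deferring 4 aircraft
print(f"Unit cost rises from ${planned:.1f}M to ${deferred:.1f}M when 4 aircraft are deferred")
```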
The consistent changes in F-35 procurement plans, made during the annual DOD budget process, indicate that the analysis done to support the program’s 2012 baseline did not accurately account for future technical risks or funding realities. Changes in procurement plans are also affected by adjustments to military service and DOD priorities. Program office data indicate that, after accounting for quantity changes, the program is unlikely to achieve the affordability unit cost targets set by the Under Secretary of Defense for Acquisition, Technology, and Logistics in 2012. The aircraft deferrals will also reduce the number of F-35s fielded over the next several years, which could force the military services to invest in extending the life of their current fighter aircraft fleets, including the Air Force A-10 Thunderbolt II and the Navy F/A-18 Hornet. We believe maintaining the level of sustained funding required to build an F-35 fleet, in addition to incurring costs to extend the life of current aircraft, will be difficult in a period of austere defense budgets. Officials from the Office of the Secretary of Defense have stated that the current sustainment strategy is not affordable. Both the program office and the Cost Assessment and Program Evaluation (CAPE) office, within the Office of the Secretary of Defense, estimate sustainment costs will be about $1 trillion over the life of the F-35 fleet. Since 2012, CAPE’s sustainment cost estimate has decreased by nearly $100 billion. CAPE attributes the bulk of this decrease to updated cost estimating ground rules and assumptions related to the cost of spare parts, labor rates, and fuel efficiency. The program office has also issued a separate sustainment cost estimate of approximately $859 billion, which is $57.8 billion less than it estimated last year. The CAPE and program estimates differ primarily in assumptions about reliability, depot maintenance, personnel, and fuel consumption.
However, as we reported in September 2014, the current estimates are still higher than the current operation and support costs of the existing aircraft the F-35 is expected to replace and, according to officials from the Office of the Secretary of Defense, remain unaffordable. In addition, we reported that DOD’s sustainment cost estimates may not reflect the most likely costs that the F-35 program will incur. While the F-35 program office and contractors have initiatives underway to improve affordability, those initiatives focus specifically on reducing procurement and sustainment costs and do not assess the affordability of the program’s overall procurement plan within budget constraints. These initiatives include the “War on Cost,” “Cost War Room,” and “Blueprint for Affordability,” which are intended to identify ways to reduce procurement and sustainment costs of the aircraft and engine. The initiatives are still ongoing, and the total cost savings related to them are yet to be determined. As Lockheed Martin continues to deliver more aircraft, the number of hours needed to build each aircraft has declined and efficiency rates have improved despite increases in the time spent on scrap, rework, and repair. Supplier performance has been mixed, as late deliveries have resulted in increases in part shortages. Supplier quality defects have also increased, while scrap, rework, and repair attributable to suppliers have remained steady. Pratt & Whitney is experiencing problems with quality and late deliveries from its suppliers. The number of aircraft produced in Lockheed Martin’s final assembly facility has remained relatively stable over the last 3 years. The contractor has delivered a total of 110 aircraft since 2011—9 in 2011, 30 in 2012, 35 in 2013, and 36 in 2014. None of the aircraft delivered to date possesses initial warfighting capability; they are being used primarily for training purposes.
As a result, delivered aircraft will have to be retrofitted with Block 2B initial warfighting capabilities prior to becoming operational. Although aircraft continued to be delivered later than contracted delivery dates—averaging 3.6 months late in 2014—Lockheed Martin officials believe they are closing the gap and expect to begin delivering to contract dates in 2015. Figure 5 shows actual aircraft deliveries compared to contracted delivery dates over the last 2 years. In 2014, Lockheed Martin achieved its goal of delivering 36 aircraft despite multiple setbacks throughout the year, such as late software deliveries, a fleet grounding, and increased engine inspections. As Lockheed Martin produces more aircraft and learns more about its manufacturing processes, it continues to reduce the number of labor hours needed to manufacture each aircraft. The reduction in labor hours remained relatively steady over the last year; however, in the case of the F-35B, labor hours briefly trended upward. Officials stated that a gap in production of the F-35B variant between lots four and six, along with part shortages, drove the increased labor hours. The number of labor hours to produce the last F-35B delivered in 2014 was lower than for previous aircraft, and officials believe labor hours will continue to decrease as production quantities for the F-35B increase. Figure 6 identifies the trend in reduction of labor hours per aircraft since the beginning of low-rate initial production. The number of major engineering design changes has also continued to decline over time and is currently tracking to the program’s plan. The reduction in labor hours and engineering design changes over time has allowed Lockheed Martin to increase manufacturing efficiency rates, as measured by the hours it takes to complete certain production tasks compared to the number of hours established by engineering standards.
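The labor-hour reductions described above follow the classic production learning-curve pattern, under which each doubling of cumulative output cuts unit labor hours by a roughly fixed percentage (Wright’s law). The sketch below illustrates that relationship; the first-unit hours and the 85 percent curve are hypothetical values chosen for illustration, not F-35 figures.

```python
import math

# Wright's learning curve: hours for the nth unit = T1 * n^b, where
# b = log2(learning_rate). A rate of 0.85 means each doubling of cumulative
# output reduces unit labor hours by 15 percent. The first-unit hours and the
# 85 percent rate below are hypothetical, not F-35 data.

def unit_hours(first_unit_hours: float, unit_number: int, learning_rate: float) -> float:
    """Labor hours for the nth unit under Wright's learning curve."""
    b = math.log(learning_rate, 2)   # slope exponent; negative when rate < 1
    return first_unit_hours * unit_number ** b

T1 = 300_000.0   # hypothetical hours for the first aircraft
rate = 0.85      # hypothetical 85 percent learning curve

for n in (1, 2, 4, 8, 16):
    print(f"unit {n:>2}: {unit_hours(T1, n, rate):,.0f} hours")
```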
The efficiency rate increased from about 16 to about 20 percent over the last year, nearly achieving Lockheed Martin’s goal of about 22 percent. Labor hours and efficiency rates improved despite increases in time spent on scrap, rework, and repair over the last year. The time spent on scrap, rework, and repair increased from 13.8 percent in production lot four to 14.9 percent in production lot five, falling short of Lockheed Martin’s goal of 12.8 percent. At 14.9 percent, Lockheed Martin’s scrap, rework, and repair rates are nearly equal to the percentages experienced in the third production lot. According to Lockheed Martin officials, a majority of the scrap, rework, and repair hours are associated with fixing mislocated brackets and mismatched seams. Figure 7 shows the trend in percent of labor hours spent on scrap, rework, and repair along with the goal for the fifth production lot. If these trends continue, Lockheed Martin could have difficulty improving its manufacturing efficiency at its expected rates. Statistical control is a measure of manufacturing maturity. Lockheed Martin reports that less than 40 percent of its critical manufacturing processes are considered in statistical control, which means that for those processes it can consistently produce parts within quality tolerances and standards. The best practice standard is to have 100 percent of critical manufacturing processes in control by the start of low-rate initial production, which began in 2011 for the F-35 program. According to Lockheed Martin officials, only 54 percent of the F-35 critical manufacturing processes will provide enough data to measure statistical control. As a result, they do not expect to achieve 100 percent. Suppliers continue to deliver parts late to Lockheed Martin, resulting in part shortages. Since 2013, the average number of part shortage occurrences at Lockheed Martin’s facility has increased.
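Statistical process control, referenced above, conventionally judges a process "in control" when its output stays within the process mean plus or minus three standard deviations (a Shewhart control chart). A minimal sketch of that rule follows; the hole-diameter spec and measurements are hypothetical, used only to show the check itself.

```python
# Shewhart 3-sigma control check: a process is treated as "in statistical
# control" when every measurement falls within the baseline mean plus or
# minus three standard deviations. The spec and measurements below are
# hypothetical drilled-hole diameters in millimeters.

def in_control(samples: list[float], mean: float, sigma: float) -> bool:
    """True if every sample falls within the 3-sigma Shewhart control limits."""
    lower, upper = mean - 3 * sigma, mean + 3 * sigma
    return all(lower <= s <= upper for s in samples)

baseline_mean, baseline_sigma = 6.35, 0.01   # hypothetical drilled-hole spec, mm
good_run = [6.34, 6.36, 6.35, 6.355, 6.345]
bad_run = good_run + [6.40]                  # one measurement outside the limits

print(in_control(good_run, baseline_mean, baseline_sigma))   # True
print(in_control(bad_run, baseline_mean, baseline_sigma))    # False
```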
The severity of part shortages is measured in five categories, with category 1 the least severe and category 4 and above the most severe, requiring a major workaround or work stoppage. Figure 8 identifies changes in the average number of part shortage occurrences at Lockheed Martin’s facility over the last year. According to Lockheed Martin officials, suppliers are delivering parts late for several reasons, including the need for suppliers to fix faulty parts and the inability of some suppliers to handle large amounts of throughput. In addition, Lockheed Martin officials stated that some delays are a result of late requests for proposals, late responses to those proposals, and delayed contract negotiations with the government that result in late contract awards. Officials do not expect to begin making authorizations with appropriate lead time until the end of low-rate initial production in 2019. Part shortages will likely remain problematic and could be amplified as production rates increase over the next 5 years. Supplier quality at Lockheed Martin has been mixed. Lockheed Martin uses a reactive approach to managing most of its supplier base because, according to officials, it does not have access to supplier-specific manufacturing data. The company uses internal metrics to track supplier performance, such as the number of quality defects—known as non-conformances—discovered at the Lockheed Martin facility and the amount of scrap, rework, and repair driven by poor supplier performance. Over the last year, the number of supplier-related non-conformances has slightly increased, and Lockheed Martin continues to experience supplier-related non-conformances for things like hole-drilling and bracket placement, among others. Lockheed Martin officials identified 22 of their more than 1,500 suppliers that contributed to 75 percent of the non-conformances.
In order to address these non-conformances, Lockheed Martin developed a management team aimed at improving quality management at those suppliers. Lockheed Martin reported a 58 percent improvement in non-conformances for those suppliers over the last year, which it attributes to the work of its team. For example, the key supplier of the aircraft weapons bay door experienced quality problems in 2013. Over the last year, Lockheed Martin sent more personnel to the supplier’s facility to work with the supplier’s management team to identify problems in their production processes and solutions to those problems. As a result, the supplier adjusted its tooling and modified its quality management techniques, and the number of defects and part shortages from that supplier decreased. In addition, the percent of time spent on scrap, rework, and repair for supplied parts has remained steady. Supplier deliveries requiring scrap, rework, and repair averaged 1.3 percent of the hours spent building an aircraft over the last 2 years. Pratt & Whitney is also experiencing challenges with part shortages and supplier quality. Nearly 45 percent of Pratt & Whitney’s key suppliers have delivered late parts over the past year. In order to mitigate some of the risk, Pratt & Whitney has pulled or borrowed some parts meant for spare engines to use in place of the late parts. According to Pratt & Whitney officials, they have taken steps to reduce the number of parts that are borrowed; however, the number of borrowed parts remains high, which could lead to further part shortages and late engine deliveries if production rates increase over the next several years as planned. In addition, in 2014, poor supplier quality negatively impacted engine performance. For example, improper lubrication of an oil valve adapter by the valve supplier resulted in an in-flight emergency in June 2014. 
As a result, the oil valves on 136 F-35 aircraft—28 of which were in production and 108 of which were delivered—had to be removed and replaced. According to Pratt & Whitney officials, the associated retrofit costs were borne by the valve supplier. The supplier also made changes to its procedures to prevent future incidents. The F-35 remains DOD’s most costly and ambitious acquisition program and one of its highest priorities. The program began with an acquisition strategy that called for high levels of concurrency between developmental testing and aircraft procurement. Since then, however, the program has experienced significant technical problems that have resulted in schedule delays and additional unplanned, or latent, concurrency. With more than 100 production aircraft delivered as of December 2014, the program continues to encounter significant technical problems, like the engine and bulkhead failures, that require design changes. Programs in developmental testing are expected to encounter technical problems that require design changes. However, in a concurrent acquisition environment, the destabilizing effects of design changes are amplified as more systems are produced and delivered, thus requiring costly retrofits and rework. With around 40 percent of developmental testing remaining, additional unanticipated changes are likely, as much of that testing will be very challenging. At the same time, DOD plans to steeply increase its procurement funding requests over the next 5 years and projects that it will need between $14 billion and $15 billion annually for nearly a decade. It is unlikely that the program will be able to receive and sustain such a high and unprecedented level of funding over this extended period, especially with other significant fiscal demands weighing on the nation.
This poses significant affordability challenges to DOD as other costly, high-priority acquisition efforts, including the KC-46A Tanker and the DDG 51 Class Destroyer, compete for limited resources at the same time. DOD continues to adjust its F-35 procurement plans on an annual basis. This reactive approach indicates that DOD may not be accurately accounting for the future technical and funding uncertainty it faces, and thus may not fully understand the affordability implications of increasing F-35 procurement funding at the planned rates. As DOD plans to significantly increase F-35 procurement funding over the next 5 years, we recommend that the Secretary of Defense conduct an affordability analysis of the program’s current procurement plan that reflects various assumptions about future technical progress and funding availability. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix III, DOD concurred with our recommendation. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. The report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in Appendix IV. Start of system development and demonstration approved. Primary GAO Conclusion/Recommendation DOD response and actions Critical technologies needed for key aircraft performance elements are not mature.
Program should delay start of system development until critical technologies are mature to acceptable levels. DOD did not delay start of system development and demonstration, stating technologies were at acceptable maturity levels, and stated it would manage risks in development. The program underwent a re-plan to address higher than expected design weight, which added $7 billion and 18 months to the development schedule. We recommended that DOD reduce risks and establish an executable business case that is knowledge-based with an evolutionary acquisition strategy. DOD partially concurred but did not adjust its strategy, believing that its approach was balanced between cost, schedule, and technical risk. Program set in motion a plan to enter production in 2007 shortly after first flight of the non-production representative aircraft. The program was entering production with less than 1 percent of testing complete. We recommended that DOD delay investing in production until flight testing shows that the JSF performs as expected. DOD partially concurred but did not delay start of production because it believed the risk level was appropriate. Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp up of production. Progress was being made but concerns remained about undue overlap in testing and production. We recommended limiting annual production quantities to 24 a year until flying quantities were demonstrated. DOD did not concur, believing that the program had an acceptable level of concurrency and an appropriate acquisition strategy. DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources. We found the new plan increased risks and recommended that DOD revise the plan to address concerns about testing, management reserves, and manufacturing.
We determined that the cost estimate was not reliable and that a new cost estimate and schedule risk assessment were needed. DOD did not revise the risk plan or restore testing resources, stating that it will monitor the new plan and adjust it if necessary. Consistent with one of our recommendations, a new cost estimate was prepared, but DOD did not conduct a risk and uncertainty analysis. The program increased the cost estimate and added a year to development but accelerated the production ramp up. Independent DOD cost estimate (JET I) projects even higher costs and further delays. Moving forward with an accelerated procurement plan and use of cost reimbursement contracts is very risky. We recommended the program report on the risks and mitigation strategy for this approach. DOD agreed to report its contracting strategy and plans to Congress and conduct a schedule risk analysis. The program reported completing the first schedule risk assessment with plans to update it semiannually. The Department announced a major program restructure, reducing procurement and moving to fixed-price contracts. The program was restructured to reflect findings of a recent independent cost team (JET II) and independent manufacturing review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased. Costs and schedule delays inhibited the program’s ability to meet needs on time. We recommended the program complete a full comprehensive cost estimate and assess warfighter and initial operating capability requirements. We suggested that Congress require DOD to tie annual procurement requests to demonstrated progress. DOD continued restructuring, increasing test resources and lowering the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. Cost increases later resulted in a Nunn-McCurdy breach.
Military services are currently reviewing capability requirements as we recommended. Restructuring continued with additional development cost increases, schedule growth, further reductions in near-term procurement quantities, and a decreased rate for future production. The Secretary of Defense placed the Short-takeoff Vertical Landing (STOVL) variant on a two-year probation; decoupled STOVL from the other variants; and reduced STOVL production plans for fiscal years 2011 to 2013. The restructuring actions were positive and, if implemented properly, should lead to more achievable and predictable outcomes. Concurrency of development, test, and production was substantial and posed risk to the program. We recommended that DOD maintain funding levels as budgeted; establish criteria for STOVL probation; and conduct an independent review of software development, integration, and test processes. DOD concurred with all three of the recommendations. DOD lifted STOVL probation citing improved performance. Subsequently, DOD further reduced procurement quantities, decreasing funding requirements through 2016. The initial independent software assessment began and ongoing reviews were planned to continue through 2012. The program established a new acquisition program baseline and approved the continuation of system development, increasing costs for development and procurement and extending the period of planned procurements by 2 years. Extensive restructuring placed the program on a more achievable course. Most of the program’s remaining instability continued to stem from the concurrency of development, test, and production. We recommended that the Cost Assessment and Program Evaluation office conduct an analysis of the impact of lower annual funding levels, and that the program office conduct an assessment of the supply chain and transportation network.
DOD partially concurred with conducting an analysis on the impact of lower annual funding levels and concurred with assessing the supply chain and transportation network. The program continued to move forward following a new acquisition program baseline in 2012. In doing so, the program incorporated positive and more realistic restructuring actions taken since 2010, including more time and funding for development and deferred procurement of more than 400 aircraft to future years. The program was moving in the right direction but must fully validate design and operational performance and at the same time make the system affordable. We did not make recommendations to DOD in this report. DOD agreed with GAO’s observations. The services established initial operational capability dates in 2013. The Marine Corps and Air Force are planning to field initial operational capabilities in 2015 and 2016, respectively, and the Navy plans to field its initial capability in 2018. DOD concurred with our recommendation and is in the process of conducting the assessment. The Department of Defense (DOD) currently has or is developing several plans and analyses that will make up its overall F-35 sustainment strategy, which is expected to be complete in fiscal year 2019. The annual F-35 operating and support costs are estimated to be considerably higher than the combined annual costs of several legacy aircraft. DOD had not fully addressed several issues that have an effect on affordability and operational readiness. Operating and support cost estimates may not be reliable. We recommended that DOD develop better informed affordability constraints; address three risks that could affect sustainment, affordability, and operational readiness; and take steps to improve the reliability of its cost estimates.
DOD concurred with all but one recommendation and partially concurred with the recommendation to conduct uncertainty analysis on one of its cost estimates, stating that it already conducts a form of uncertainty analysis. GAO continues to believe that the recommended analysis would provide a more comprehensive sense of the uncertainty in the estimates. To assess the program’s ongoing development and testing, we reviewed the status of software development and integration and contractor management improvement initiatives. We also interviewed officials from the program office, Lockheed Martin, Pratt & Whitney, and the Defense Contract Management Agency (DCMA) to discuss current development status and software releases. In addition, we compared management objectives to progress made on those objectives during the year. We obtained and analyzed data on flights and test points, both planned and accomplished, during 2014. We compared test progress against the total program plans to complete. In addition, we interviewed officials from the F-35 program office, Lockheed Martin, Pratt & Whitney, and the office of the Director, Operational Test and Evaluation to discuss development test plans and achievements. We also collected information from the program office, prime contractor, engine contractor, and Department of Defense test pilots regarding the program’s technical risks, including the helmet mounted display, autonomic logistics information system, carrier arresting hook, structural durability, and engine. We analyzed reliability data and discussed these issues with program and contractor officials. To assess the program’s cost and affordability, we reviewed financial management reports and monthly status reports available as of December 2014. In addition, we reviewed total program funding requirements from the December 2014 Selected Acquisition Report. We used these data to project annual funding requirements through the expected end of the F-35 acquisition in 2038.
We also compared the December 2014 Selected Acquisition Report data to prior Selected Acquisition Reports to identify changes in cost and quantity. We obtained life-cycle operating and support cost estimates through the program's Selected Acquisition Report and projections made by the Cost Analysis and Program Evaluation (CAPE) office. We discussed future plans of DOD and the contractors to reduce life-cycle sustainment costs with officials from the program office, Lockheed Martin, and Pratt & Whitney. To assess manufacturing and supply chain performance, we obtained and analyzed data related to aircraft delivery rates and work performance through the end of calendar year 2014. We compared these data with program objectives in these areas and used them to identify trends. We reviewed data and briefings provided by the program office, Lockheed Martin, Pratt & Whitney, and DCMA in order to identify issues in manufacturing processes. We discussed reasons for delivery delays and plans for improvement with Lockheed Martin and Pratt & Whitney. We also toured Pratt & Whitney's manufacturing facility in Middletown, Connecticut, and collected and analyzed data related to aircraft quality through December 2014. We collected and analyzed supply chain performance data and discussed actions taken to improve quality and deliveries with Lockheed Martin and Pratt & Whitney. We assessed the reliability of DOD and contractor data by reviewing existing information about the data and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from July 2014 to April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff members made key contributions to this report: Travis Masters, Assistant Director; Peter Anderson; James Bennett; Marvin Bonner; Kristine Hassinger; Megan Porter; Marie Suding; and Abby Volk.

F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014.

F-35 Joint Strike Fighter: Slower Than Expected Progress in Software Testing May Limit Initial Warfighting Capabilities. GAO-14-468T. Washington, D.C.: March 26, 2014.

F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities. GAO-14-322. Washington, D.C.: March 24, 2014.

F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013.

F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013.

F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013.

Fighter Aircraft: Better Cost Estimates Needed for Extending the Service Life of Selected F-16s and F/A-18s. GAO-13-51. Washington, D.C.: November 15, 2012.

Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. Washington, D.C.: March 29, 2012.

Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012.
Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD's Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011.

Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011.

Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011.

Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.

Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011.

Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010.

Joint Strike Fighter: Assessment of DOD's Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010.

Tactical Aircraft: DOD's Ability to Meet Future Requirements is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.

Joint Strike Fighter: Significant Challenges and Decisions Ahead. GAO-10-478T. Washington, D.C.: March 24, 2010.

Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010.

Joint Strike Fighter: Significant Challenges Remain as DOD Restructures Program. GAO-10-520T. Washington, D.C.: March 11, 2010.

Joint Strike Fighter: Strong Risk Management Essential as Program Enters Most Challenging Phase. GAO-09-711T. Washington, D.C.: May 20, 2009.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009.

Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government's Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009.

Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.

Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008.

Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008.

Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007.

Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007.

Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007.

Tactical Aircraft: DOD's Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006.

Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.

Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006.

Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006.

Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006.

Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006.

Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005.

Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
With estimated acquisition costs of nearly $400 billion, the F-35 Lightning II—also known as the Joint Strike Fighter—is DOD's most costly and ambitious acquisition program. The U.S. portion of the program will require annual acquisition funding of $12.4 billion on average through 2038 to complete development and procure a total of 2,457 aircraft. GAO's prior work has found that the program has experienced significant cost, schedule, and performance problems. In 2009, Congress mandated that GAO review the F-35 acquisition program annually for 6 years. This report, GAO's sixth, assesses the program's (1) development and testing progress, (2) cost and affordability, and (3) manufacturing and supply chain performance. GAO reviewed and analyzed the latest available manufacturing, cost, testing, and performance data through December 2014; program test plans; and internal DOD analyses; and interviewed DOD, program, engine and aircraft contractor officials. The F-35 Joint Strike Fighter program had to make unexpected changes to its development and test plans over the last year, largely in response to a structural failure on a durability test aircraft, an engine failure, and software challenges. At the same time, engine reliability is poor and has a long way to go to meet program goals. With nearly 2 years and 40 percent of developmental testing to go, more technical problems are likely. Addressing new problems and improving engine reliability may require additional design changes and retrofits. Meanwhile, the Department of Defense (DOD) has plans to increase annual aircraft procurement from 38 to 90 over the next 5 years. As GAO has previously reported, increasing production while concurrently developing and testing creates risk and could result in additional cost growth and schedule delays in the future. Cost and affordability challenges remain. 
DOD plans to significantly increase annual F-35 funding from around $8 billion to nearly $12 billion over the next 5 years (see figure), reaching $14 billion in 2022 and remaining between $14 billion and $15 billion for nearly a decade. Over the last year, DOD reduced near-term aircraft procurement by 4 aircraft, largely due to budget constraints. While these deferrals may lower annual near-term funding needs, they will likely increase the cost of aircraft procured in that time frame and may increase funding liability in the future. It is unlikely the program will be able to sustain such a high level of annual funding, and if required funding levels are not reached, the program's procurement plan may not be affordable. DOD policy requires affordability analyses to inform long-term investment decisions. The consistent changes in F-35 procurement plans indicate that DOD's prior analyses did not adequately account for future technical and funding uncertainty. Manufacturing progress continued despite mixed supplier performance. The aircraft contractor delivered 36 aircraft as planned in 2014, despite a fleet grounding, added inspections, and software delays. In addition, the labor hours needed to manufacture an aircraft and the number of major design changes have continued to decline over time. Supplier performance has been mixed, however, and late aircraft and engine part deliveries could pose a risk to the program's plans to increase production. The contractors are taking steps to address these issues. GAO recommends that DOD assess the affordability of F-35's current procurement plan in a way that reflects various assumptions about technical progress and future funding.
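The affordability concern described here amounts to comparing a planned annual funding profile against what the services can realistically sustain. A minimal sketch of that comparison follows; the funding profile and the $12 billion cap are hypothetical illustrations loosely shaped like the figures cited in this report, not actual program data.

```python
def years_exceeding_cap(profile, cap):
    """Return the fiscal years whose planned funding exceeds an affordability cap."""
    return sorted(year for year, amount in profile.items() if amount > cap)

# Hypothetical annual funding plan in billions of dollars, rising from
# about $8 billion toward the $14-15 billion peak described in the report.
profile = {2015: 8.0, 2016: 9.2, 2017: 10.5, 2018: 11.3, 2019: 11.9,
           2020: 13.1, 2021: 13.8, 2022: 14.0, 2023: 14.6}

# If sustainable annual funding were capped at $12 billion, the plan
# would be unaffordable in these years:
print(years_exceeding_cap(profile, 12.0))  # [2020, 2021, 2022, 2023]
```

An affordability analysis of the kind GAO recommends would repeat this check under several funding and cost-growth assumptions rather than a single profile.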
For audit purposes, IRS’ Examination Division defines a corporation as large or small depending on the amount of assets reported on its income tax return. Small corporations (about 2.4 million) are defined as those that report total assets of less than $10 million. Corporations reporting higher assets are considered to be large, and IRS audits large corporations in two groups. IRS annually selects about 1,700 of the largest and most complex corporations for its Coordinated Examination Program (CEP). The remaining large corporations (about 45,000) may be audited under a separate IRS program—the subject of this report. IRS has different ways to select corporations for audits. For small corporations, IRS uses a formula that measures the likelihood of changes to tax liability. This formula helps IRS objectively select returns for audits that are considered to be most likely to produce tax changes. IRS developed the formula by analyzing results of line-by-line audits of a random sample of tax returns. Once selected, a small corporation return is usually audited by an IRS revenue agent. In fiscal year 1994, IRS audited about 44,000 (1.8 percent) of 2.4 million income tax returns filed by small corporations. For CEP, IRS selects corporations on the basis of criteria for size, complexity, and the like. After considering its audit resources and manually reviewing the audit potential of every CEP return, IRS selects returns for audit. IRS audits CEP returns with teams of revenue agents and specialists, such as economists and engineers. Our 1992 report on CEP noted that IRS audited about 77 percent of the CEP returns for fiscal year 1991. The remaining large corporations (hereafter referred to as large corporations) are usually selected for audit on the basis of IRS agents’ judgment, rather than through a scoring formula or specific criteria.
In some cases, revenue agents at a service center select returns and send them to a district office to be audited; in other cases, all relevant returns are sent to the district office, where revenue agents choose those they will audit. Unlike with CEP returns, IRS usually uses a revenue agent (hereafter referred to as auditor) rather than a team to audit the returns from this segment of the large corporation universe. According to IRS Examination officials, these individual auditors recently have been using IRS specialists more than they have in the past. To give perspective on the sizes of these large corporations compared to other types of businesses, we analyzed average assets reported for 1992 (the most recent year of available data at IRS’ Statistics of Income Division). Reported assets ranged from an average of about $0.4 million by small corporations to about $6.8 billion by CEP corporations. Within this wide range, the remaining large corporations reported an average of $130.7 million in assets as compared with partnerships, which reported an average of $1.3 million in assets. In recording the audit results for large corporations not part of CEP, IRS has created four classes according to asset size, ranging from $10 million to over $250 million. To facilitate our reporting of trends, we collapsed the four classes into two, (1) lower asset ($10 million to less than $100 million) and (2) higher asset ($100 million and over). Narrative in this letter and appendix II focuses on the differences in the trends for the two combined classes but also discusses the four classes, particularly their assessment rates. Our objectives were to (1) analyze audit trends in fiscal years 1988 through 1994 for large corporations, (2) compute their assessment rate, and (3) develop and compare profiles of audited large corporations with those not audited. To identify large corporations, we used IRS data on those reporting assets of $10 million and more. 
We used IRS data to eliminate CEP corporations. We used three IRS databases to meet our objectives. To analyze audit trends, we used Audit Information Management System (AIMS) data on large corporate audits closed in fiscal years 1988 through 1994. To compute the assessment rate, we computer-matched the AIMS data on recommended tax assessments to actual tax assessments on the Business Master File (BMF), which contains information about business tax returns. We tracked the BMF data through December 1994. To develop a profile of the large corporations, we obtained the 1992 Statistics of Income (SOI) file for corporations—the most recent at the time we did our work. We matched the SOI and AIMS data to divide our population into audited and not audited groups. We asked IRS Examination officials at the National Office to review our analyses of the audit trends and assessment rates and to provide any explanations. We performed our work in Washington, D.C., and Mission, KS, between May 1994 and May 1995 in accordance with generally accepted government auditing standards. Appendix I has more information on our objectives, scope, and methodology.

Figure 1 summarizes trends in large corporate audits for fiscal years 1988 through 1994. More details on these trends follow.

1. Number of audits: The total number of audited returns increased about 3 percent (from 10,062 in 1988 to 10,392 in 1994), after peaking in 1991 at 11,962. This increase largely involved corporations with less than $50 million in assets. The number of audited returns for higher asset corporations fluctuated but decreased 16 percent between 1988 and 1994, particularly in 1989 and 1990. (Refer to table II.1.)

2. Audit coverage: Audit coverage rose from 23 percent in 1988 to 24 percent in 1994, after peaking at 28 percent in 1991. Coverage varied by asset class. It decreased (especially from 1991 to 1992) from 43 percent to 31 percent for higher asset corporations, and it increased from 18 percent to 22 percent for those with lower assets after peaking in 1991 at 25 percent. (Refer to table II.2.)

3. Direct audit hours: A comparison of 1988 and 1994 shows that IRS invested 25 percent more hours in auditing large corporations, particularly those with lower assets. Audit hours decreased for higher asset corporations, particularly from 1988 through 1990. (Refer to table II.3.)

4. Direct audit hours per return: This ratio increased 21 percent from 1988 to 1994, driven by audits of lower asset corporations. Their ratio increased 43 percent compared to 14 percent for higher asset corporations. On average, IRS spent twice as long auditing a return with higher assets as one with lower assets—184 hours versus 89 hours, respectively. (Refer to table II.4.)

5. Additional recommended taxes: This amount increased 16 percent from about $1.6 billion in 1988 to $1.9 billion in 1994. It also increased for both types of corporations. The amount peaked in 1992 for lower asset corporations and fluctuated year to year for higher asset corporations while peaking in 1993. IRS Examination officials explained that a few large audits produced the peaks in 1991 and 1993. In 1994 constant dollars, however, recommended taxes decreased 4 percent between 1988 and 1994. (Refer to table II.5 for current dollar data on additional taxes recommended and table V.1 for 1994 constant dollars.)

6. Additional recommended taxes per return: As with recommended taxes, this ratio increased overall and for both categories of corporations. It rose 12 percent from about $160,000 in 1988 to $180,000 in 1994 (after reaching $242,000 in 1993). Higher asset corporations drove this increase as their ratio increased 38 percent after declining in 1992 and 1994. Although IRS audited fewer of their returns, IRS recommended a relatively higher amount of taxes. In 1994 constant dollars, recommended taxes per return decreased 7 percent between 1988 and 1994. (Refer to table II.6 and table V.3 for details.)

7. Additional recommended taxes per audit hour: The overall ratio decreased 7 percent from $1,409 in 1988 to $1,313 per hour in 1994. Although the ratio rose for corporations with higher assets, this rise was more than offset by a declining ratio for those with lower assets. In 1994 constant dollars, the overall ratio decreased 23 percent from 1988 to 1994. (Refer to tables II.7 and V.4.)

IRS Examination officials offered reasons for the increase in direct audit hours outpacing increases in audit coverage and recommended taxes after 1988. IRS has been auditing more complex returns and using more IRS specialists. Both steps took more time and reduced coverage. Also, many auditors have needed more time as they gradually shifted from auditing corporate tax shelters to auditing the whole large corporation return. On the other hand, recommended amounts can be reduced by economic downturns. The officials also cited 1986 tax law changes that took away audit issues (e.g., the investment tax credit) with relatively high yield for a small time investment and that lowered corporate tax rates, affecting additional recommended taxes in later years.

We also analyzed trends in audit closures. The large corporations agreed with higher portions of recommended tax amounts from 1988 to 1994 (34 percent by 1994), but they continued to appeal most amounts (66 percent by 1994). Although they had similar trends over the 7 years, higher asset corporations agreed, on average, with 21 percent of all recommended amounts while lower asset corporations agreed with 33 percent (see table II.9). Even so, large corporations increasingly agreed with most audits (52 percent for 1994) that recommended taxes (see table II.8). In sum, they tended to agree with small tax amounts recommended in many audits but appeal larger amounts recommended in fewer audits.
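The percent changes cited in these trends are simple base-year to end-year comparisons, with the constant-dollar figures additionally discounted for cumulative inflation over the period. A minimal sketch follows; the 21 percent cumulative inflation figure is a hypothetical illustration back-derived from the nominal and constant-dollar changes reported, not an official deflator.

```python
def pct_change(old, new):
    """Percent change from a base-year value to an end-year value."""
    return (new - old) / old * 100

def real_change(nominal_change, cumulative_inflation):
    """Constant-dollar change implied by a nominal change and cumulative
    inflation over the same period (all values as fractions)."""
    return (1 + nominal_change) / (1 + cumulative_inflation) - 1

# Recommended taxes per direct audit hour, 1988 vs. 1994 (current dollars):
print(round(pct_change(1409, 1313)))         # -7 (a 7 percent decrease)

# A nominal 16 percent rise in recommended taxes against roughly
# 21 percent cumulative inflation implies about a 4 percent real decline:
print(round(real_change(0.16, 0.21) * 100))  # -4
```

The same two functions reproduce the other current-dollar and 1994 constant-dollar changes in trends 5 through 7.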
Audits that end with no change to taxes owed could have adjustments (e.g., reducing a reported net operating loss but not enough to produce a tax liability) or no adjustments. IRS views the former as productive and the latter as unproductive. The no-change trends differed: audits with adjustments dropped from 28 percent to 16 percent, while those without adjustments increased from 8 to 16 percent. The rate without adjustments had reached 18 percent for lower asset corporations and 10 percent for those with higher assets by 1994. IRS Examination officials said they would like to see the no-change rate without adjustments fall below 10 percent. They believed that this rate will start falling as IRS closes ongoing audits that they viewed as more productive. For example, they believed that their investment in audits of more complex returns will start shifting more no-change audits to audits that recommend taxes. Also, more auditors have now learned how to audit large corporations, not just tax shelters. Even so, these officials still want better systems for selecting and classifying (i.e., finding issues that need to be audited) returns. Recognizing this need, they convened task forces during 1994 to overcome such problems with large corporation audits. These task forces are slated to last through 1996. See appendix II for detailed information about these trends and IRS’ explanations and appendix V for trends for recommended tax amounts in constant dollars.

In tracking taxes recommended for large corporations from 1988 through 1994, we found that the final assessment had been recorded for $8.6 billion of $12 billion in net recommended taxes. Our computer match, involving about 56,000 audited returns, showed that IRS assessed $2.3 billion of the $8.6 billion (27 percent) through December 1994. In computing this rate, we subtracted the tax refunds recommended from the additional taxes recommended for these audited returns.
The assessment rate was similar for higher and lower asset corporations—26 percent and 28 percent, respectively. The rate, however, differed widely by the four asset classes, ranging from 20 percent to 38 percent. By IRS district, the assessment rate ranged from over 100 percent to less than 1 percent. The reasons for these wide variations were not apparent in the IRS databases we used to compute the assessment rates. Our work has shown that various factors can cause the rate to exceed 100 percent, such as IRS Appeals assessing more taxes than recommended by the auditors. Also, the rates can drop whenever corporate claims for refunds or net operating losses from other tax years reduce or offset taxes that were recommended in the audit. Nor did IRS Examination officials know the reasons for the wide variation in the rates. They noted that the lowest rates occurred in two regions and said they were starting to pinpoint the reasons. We also plan to explore these reasons during a follow-on review. Our discussions with IRS Examination officials disclosed possible reasons for low rates. These officials pointed to nonaudit factors that can lower the assessment rate, even if auditors supported the taxes recommended. They cited retroactive tax law changes, court decisions that affect the recommended taxes, and other tax abatements. In addition, they said IRS Appeals can concede recommended taxes to avoid the hazards of litigation or because the corporation provides new information that swayed Appeals’ decision; this information could have dissuaded the auditors from recommending the taxes. These officials did not know the extent to which these factors lowered the assessment rate, given limitations in IRS’ databases. For this reason, our 1994 report on CEP recommended corrections to IRS’ databases. In sum, IRS Examination officials cautioned against misinterpreting the assessment rates.
Because of these nonaudit factors, they believed that the rates reflect more about the tax system and economic fluctuations than the effectiveness of the audits. Appendix III provides details on assessment rates and IRS’ explanations. We also computed the assessment rate for just the additional taxes recommended (i.e., excluding audits recommending refunds). That rate equaled 38 percent. IRS had estimated a similar assessment rate on just the additional taxes recommended for audits closed in fiscal years 1992 through 1994—36 percent. Regardless of which type of assessment rate is considered, we did not attempt to track how much of the assessed taxes were ultimately collected. IRS Finance officials provided data indicating that IRS collected 23 percent of the taxes recommended and 68 percent of the taxes assessed as of July 1995 for the audits closed in fiscal years 1992 through 1994. IRS based these results on data being tracked in a new system. We plan to analyze the data and methodology being used in this system during the second phase of our work. Whenever audits recommend additional taxes that go unassessed, IRS can miss opportunities to invest audit resources more productively, and large corporations can incur more costs to challenge those recommendations. Data on many of these costs were not available. Using only the direct audit costs, we calculated that IRS recommended $56 and assessed $15 in taxes for each dollar directly spent on auditing large corporations from 1988 through 1994. These calculations exclude indirect audit costs (e.g., overhead), IRS costs outside of audits (e.g., appeals and litigation processes to settle on assessed tax amounts), and corporations’ costs. It is important to recognize that these ratios provide just one indicator of IRS’ audit activities. The ratios do not account for the costs and taxes associated with what IRS calls revenue protection.
For example, IRS may audit various corporate claims for tax refunds to determine whether the claims are proper. In doing so, IRS protects the government’s revenue. Auditors disallowed $202 million in claims by large corporations during 1994 in addition to the $1.9 billion they recommended in taxes. In 1991—the first year for which IRS tracked protected tax revenue—IRS auditors denied $212 million of these claims. To provide perspective, we computed a similar ratio for all IRS audits. Although not readily available for assessed taxes, data were available in IRS’ 1996 budget to compute the ratio of recommended taxes to the costs of all IRS audits. Our computations showed that the ratio has been about $16 in recommended taxes to $1 in audit costs (including indirect costs) for recent years. According to 1992 income tax returns, over 60 percent of the large corporations were engaged in manufacturing or in the finance/insurance industry. This profile was similar for both the audited and nonaudited large corporations. Audited large corporations, however, tended to report higher amounts, on average, of total income, taxable income, and income tax liability. Whether audited or not, large corporations tended to claim the possessions tax credit more frequently than other tax credits; 57 percent of $6.4 billion in tax credits claimed was for the possessions tax credit. Appendix IV provides more details on the profile of large corporations for 1992. We requested comments on a draft of this report from the IRS Commissioner, and we received comments from her representatives at a meeting on August 9, 1995. These IRS officials included the Assistant Commissioner for Examination and his staff that oversee audits of large corporations as well as staff from IRS’ Office of Legislative Affairs. While generally agreeing with the trends we analyzed, these officials had comments on our draft. 
In addition to technical comments that we have incorporated where appropriate, they offered comments on three major issues. First, they pointed to various efforts undertaken to correct problems with large corporation audits. The major effort entails studying ways to improve the selection of returns for audit. Our letter now refers to these efforts. Second, they suggested explanations for some trends. For example, they offered various reasons for the increases in audit coverage and additional recommended taxes lagging behind the increase in direct audit time. They noted that IRS has been investing time in auditing more complex issues and in using IRS specialists. They viewed this investment as necessary and as likely to pay off soon. They also cited tax law changes in 1986 and the transition in the early 1990s from auditing corporate tax shelters to all large corporate tax issues. Both factors had dampening effects on recommended tax amounts after 1988, according to these officials. They suggested that these factors, in combination with auditing more complex returns, also contributed to IRS closing more audits with neither changes to taxes owed nor adjustments to taxable income. While we did not validate IRS’ suggested explanations, we have added them to the letter and related appendixes. Third, they asked for clarification on the 27 percent assessment rate. Although our draft report had not labeled this rate as a measure of audit effectiveness, they wanted cautions noted. They said the rate should not be used as such a measure because of nonaudit factors (e.g., net operating losses from other tax years that offset audit yield). They did not know the extent to which these factors affected the rate, but they believed that the rate was as likely to be the product of the tax and economic systems, over which they have little control, as of the audits. We have added their comments about the potential effects of these nonaudit factors on the rates.
We are sending copies of this report to the Senate Committee on Finance, the House Committee on Ways and Means, and other interested parties. Major contributors to this report are listed in appendix VI. If you or your staff have any questions concerning this report, please contact me at (202) 512-5407.

Our objectives were to (1) analyze audit trends for large corporations not in the Coordinated Examination Program (CEP) for fiscal years 1988 through 1994, (2) compute the portion of taxes recommended in audits that were actually assessed, and (3) develop and compare profiles of audited large corporations with those not audited. The Internal Revenue Service (IRS) defines large corporations as those reporting assets of $10 million or more on their income tax returns. IRS divides large corporations into four asset classes as follows: (1) assets of $10 million to less than $50 million, (2) assets of $50 million to less than $100 million, (3) assets of $100 million to less than $250 million, and (4) assets of $250 million and over. For these corporations, our analyses focused on data from Forms 1120 (U.S. Corporation Income Tax Return) and other related corporate returns, except for nontaxable returns such as the Form 1120-S. These related income tax returns included the following: (1) 1120-L (U.S. Life Insurance Company Income Tax Return), (2) 1120-PC (U.S. Property and Casualty Insurance Company Income Tax Return), (3) 1120 Consolidated income tax return, (4) 1120L Section 594/1504c income tax return for U.S. life insurance companies, (5) 1120-PC Section 1504c income tax return for U.S. property and casualty insurance companies, and (6) 1120 Section 594/1504c income tax return for U.S. corporations. Our analyses of audit trends, assessment rates, and the profiles excluded large corporations in CEP. We excluded CEP corporations from the profile information using IRS data on CEP.
We asked IRS Examination officials at the National Office to review our analysis of the audit trends and assessment rates and to provide any explanations. We have summarized their comments throughout this report. To analyze audit trends, we used IRS’ Audit Information Management System (AIMS) data. This database includes records from all audits closed during a given fiscal year. We reconciled totals from this database to totals in IRS’ annual report. For audits closed from fiscal years 1988 through 1994, we obtained AIMS data on additional tax recommended, tax decreases recommended, returns audited, and direct audit hours spent on returns. We then calculated such measures as tax recommended per return, tax recommended per hour, and audit hours per return. Appendix II reports these trends, using current dollars for the recommended tax amounts. Appendix V reports those trends in constant dollars. To calculate audit coverage, we used IRS’ method of dividing the number of audits completed in a given fiscal year by the number of returns filed the previous year. We also computed IRS’ direct costs for auditing these returns. For audited corporations by asset class, we applied the average cost IRS calculated for fiscal years 1991 and 1992 for each staff year that IRS directly spent on these audits. We adjusted the costs to current dollars for the specific fiscal year of the audit to obtain the average cost for each of the 7 years we analyzed. We also analyzed the ways in which IRS closed audits of the tax returns. If IRS recommended additional taxes, large corporations could agree to pay or appeal these taxes. If IRS did not recommend such taxes, we analyzed how often IRS closed these no-change audits without any audit adjustments or with adjustments. To compute the assessment rate—the percentage of recommended taxes ultimately assessed after audits—we did a computer data match of corporate income tax returns between two IRS databases. 
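To make these computations concrete, the following minimal Python sketch (our illustration, not IRS code; every input below is hypothetical) applies the coverage and per-return, per-hour measures described above.

```python
# Our illustration of the trend measures above; all inputs are hypothetical,
# not AIMS data.

def audit_coverage(audits_closed_fy, returns_filed_prior_fy):
    # IRS method: audits completed in a fiscal year divided by the
    # number of returns filed the previous year, as a percentage.
    return 100.0 * audits_closed_fy / returns_filed_prior_fy

audits_closed = 2_200              # hypothetical
returns_filed_prior_year = 10_000  # hypothetical
recommended_tax = 400_000_000      # hypothetical, dollars
direct_audit_hours = 250_000       # hypothetical

coverage = audit_coverage(audits_closed, returns_filed_prior_year)
tax_per_return = recommended_tax / audits_closed
tax_per_hour = recommended_tax / direct_audit_hours
hours_per_return = direct_audit_hours / audits_closed

print(coverage, round(tax_per_return), tax_per_hour, round(hours_per_return))
# 22.0 181818 1600.0 114
```

The constant-dollar variants in appendix V would divide the dollar amounts by a price index before forming the same ratios.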
For all closed audits in our populations, we matched the recommended tax assessments recorded on AIMS to the actual tax assessments recorded on the Business Master File (BMF). In addition to the assessed tax liabilities, the BMF contains information on taxable income, taxes not yet paid, penalties, interest, payments, refunds, and audit actions for business tax returns. In both systems, each record contains the taxpayer identification number (TIN), tax year, and return type. To use BMF data, we eliminated all BMF records of tax returns that had no audit adjustment code. We also eliminated all records with audit transactions that were posted before fiscal year 1988. Because our AIMS data covered fiscal years 1988 through 1994, none of these audit adjustments could have been posted on BMF before fiscal year 1988. Also, we applied our criterion of a “completed audit.” We defined this term as the period in which IRS made at least one tax adjustment resulting from an audit, followed by an audit release indicator. As the starting point, we used the last day of the previous audit period or, if not present, the date that IRS posted the return. The BMF audit release indicator identified the end of an audit. We added 30 calendar days to the audit release date to identify late posting audit adjustments. IRS also does this adjustment on its new Enforcement Revenue Information System to match tax adjustments to taxes recommended. Using the 25,395 taxpayers identified in AIMS data for fiscal years 1988 through 1994, we were able to match 22,679 TINs to BMF. For these TINs, we obtained records for 56,146 returns for various tax years ranging from 1964 to 1993. AIMS has the recommended tax adjustments for each closed audit. We dropped records that showed recommended taxes of $1 because some IRS districts use this amount if, for some reason, they must close the case on AIMS for a second time. 
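The matching logic described above can be sketched as follows. This is a hedged illustration: the record layouts, field names, and dollar values are hypothetical, but the steps (drop $1 AIMS records, match on TIN, tax year, and return type, and allow a 30-day window after the audit release date) follow the text.

```python
# Hypothetical sketch of the AIMS-to-BMF match; field names and values are ours.
from datetime import date, timedelta

# AIMS: one record per closed audit (TIN, tax_year, return_type, recommended_tax)
aims = [
    ("12-3456789", 1990, "1120", 250_000),
    ("12-3456789", 1991, "1120", 1),        # $1 placeholder record -> dropped
    ("98-7654321", 1989, "1120-L", 90_000),
]

# BMF: (TIN, tax_year, return_type, assessed_tax, audit_release_date, posting_date)
bmf = [
    ("12-3456789", 1990, "1120", 70_000, date(1992, 6, 1), date(1992, 6, 20)),
    ("98-7654321", 1989, "1120-L", 40_000, date(1991, 3, 1), date(1991, 5, 1)),
]

def within_window(release, posted, days=30):
    # Accept adjustments posted up to 30 calendar days after the audit release.
    return posted <= release + timedelta(days=days)

aims_keep = {(t, y, r): amt for t, y, r, amt in aims if amt != 1}
matched = {}
for tin, yr, rt, assessed_tax, release, posted in bmf:
    key = (tin, yr, rt)
    if key in aims_keep and within_window(release, posted):
        matched[key] = (aims_keep[key], assessed_tax)

recommended = sum(rec for rec, _ in matched.values())
assessed = sum(a for _, a in matched.values())
rate = 100.0 * assessed / recommended
print(len(matched), round(rate))  # the late-posted record is excluded
```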
Across the AIMS and BMF data, we sought the same corporate TINs and same audited tax years for audits closed during fiscal years 1988 through 1994. We matched the AIMS data on recommended assessments from these audits to BMF data on actual assessments for these audits up through December 1994. We then analyzed the assessment rate by variables such as the asset size of the corporation and the IRS district office that did the audit. To develop a profile of the large corporations, we obtained the 1992 Statistics of Income (SOI) file for corporations (the most recent file available when we did our work). We eliminated CEP corporations as we did for our other analyses. We selected the large corporations by using our criteria for return type and asset size. We matched these data with AIMS data to divide the large corporation population into audited and nonaudited groups. Table I.1 shows the SOI universe and populations for each of these steps. Sampling errors associated with our SOI estimates are less than 5 percent at the 95 percent confidence level, except for the following items: For audited lower asset size corporations claiming the net operating loss deduction, the sampling errors were $1.3 billion ± 6.8 percent for the net operating loss deduction claimed and $1 million ± 5.9 percent for the average deduction claimed. For the nonaudited corporations, the average Foreign Tax Credit claimed as shown in table IV.2 had a sampling error of $1.593 million ± 5.8 percent. For the other tax credits reported in table IV.2, the sampling errors exceeded 5 percent for both audited and nonaudited corporations. The sampling errors for the total amounts claimed were $13.9 million ± 5.7 percent and $11 million ± 8.9 percent for audited and nonaudited corporations, respectively. The sampling errors for the average amounts claimed were $179 thousand ± 16.6 percent and $225 thousand ± 21.4 percent for audited and nonaudited corporations, respectively. 
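As a small illustration of the reliability criterion above, this Python snippet flags the estimates whose relative sampling errors exceeded 5 percent at the 95 percent confidence level. The percentages come from the text; the labels and the helper logic are our own sketch.

```python
# Relative sampling errors (percent) reported above; the labels are ours.
sampling_errors_pct = {
    "NOL deduction claimed, audited lower-asset (total)": 6.8,
    "NOL deduction claimed, audited lower-asset (average)": 5.9,
    "Avg foreign tax credit, nonaudited (table IV.2)": 5.8,
    "Other tax credits, audited (total, table IV.2)": 5.7,
}

THRESHOLD = 5.0  # percent, at the 95 percent confidence level
flagged = sorted(name for name, pct in sampling_errors_pct.items() if pct > THRESHOLD)
print(len(flagged))  # all four items exceed the 5 percent threshold
```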
This appendix presents our analysis of IRS’ audit results for large corporations, using IRS’ AIMS data. We asked IRS Examination officials to explain any major shifts in the trends. The narrative within this appendix reflects any explanations that these officials provided. From 1988 to 1994, the number of returns audited increased slightly overall; it rose from 1988 to 1991, peaked at 11,962 returns in 1991, and then decreased to 10,392 by 1994. IRS Examination officials attributed the decline since 1991 to auditing more complex returns and issues, which takes time that could otherwise have been spent on more returns. Corporations with less than $50 million in assets accounted for the overall increase in audits during the 7 years; all other asset classes had decreases. Over the 7 years, the number of returns audited averaged 10,674. Of these audited returns, lower asset corporations filed 72 percent (7,704 returns) on average. For 1988 through 1994, audit coverage changed as follows: It increased from 18 percent to 22 percent, peaking in 1991, for lower asset corporations. This increase stemmed from IRS doing more audits while the number of returns filed remained fairly constant over the 7 years. In fact, the audit rate increased only for corporations with assets of $10 million to less than $50 million. It decreased from 45 percent to 31 percent for the higher asset classes. Their rate held fairly steady through 1991 but then dropped through 1994. Over all 7 years, their coverage decreased because more returns were filed but fewer were audited. IRS Examination officials said IRS has spent more time on complex audits since 1991. Overall, audit coverage averaged 25 percent over the 7 years. 
As asset size increased, so did the average coverage rate. Over the four asset classes, the average rate ranged from 20 percent to 44 percent. Time spent directly on audits is measured in hours. After dropping from 1988 to 1989, audit hours steadily increased about 40 percent from fiscal years 1989 through 1994, from about 1 million to 1.4 million staff hours. IRS Examination officials attributed the increase to investing in audits of more complex tax returns and issues. The increased hours primarily arose from doing more audits of lower asset corporations over the 7 years (see fig. II.1). The audit time for these corporations increased from 487,162 hours in 1988 to 793,115 hours in 1994 (63 percent) after peaking in 1991 and then decreasing slightly through 1994. Audit hours for higher asset corporations decreased from 654,974 hours in 1988 to 628,851 hours in 1994 (4 percent); their hours decreased from 1988 to 1990 and then increased steadily through 1994. On average, IRS auditors spent twice as long auditing returns from higher asset corporations as returns from lower asset corporations: 184 hours versus 89 hours per return, respectively. From 1988 to 1994, the direct audit hours per return increased 43 percent (74 to 107 hours) for lower asset corporations, 14 percent (187 to 213 hours) for higher asset corporations, and 21 percent (114 to 137 hours) overall. These increases match the increases in direct audit hours (see table II.3) and in audited returns for lower asset corporations and all large corporations (see table II.1). However, the number of direct audit hours and audited returns decreased for higher asset large corporations. 
For higher asset corporations, the increase in direct audit hours per return results from the number of audited returns (see table II.1) decreasing more than the number of audit hours (see table II.3). IRS Examination officials said they expected the upward trend in audit hours per return to continue as IRS does more complex audits. They also cited other reasons; for example, auditors’ lack of training and experience in auditing an entire large corporation return, instead of just corporate tax shelters, has added time. For all large corporations, additional recommended taxes grew 16 percent, from about $1.6 billion in 1988 to about $1.9 billion in 1994; except for 1989, these amounts increased through 1993 and then dropped in 1994. IRS Examination officials did not know the reasons for the 1994 decrease. Both higher and lower asset corporations had similar increases over the 7 years, but those with higher assets always accounted for the bulk of the additional tax amounts. For higher asset corporations, recommended taxes peaked in 1993 at about double the 1989 amount but then decreased 35 percent by 1994. IRS Examination officials attributed the big increases in 1991 and in 1993 to a few large dollar audits. For lower asset corporations, recommended taxes increased 53 percent from 1988 to 1992 but then decreased 24 percent through 1994. All four asset classes had percentage increases from 1988 to 1994 in taxes recommended; the $100 million to less than $250 million class had the greatest increase (42 percent), while the $250 million and over class had the smallest (5 percent). Each class also had fluctuations over the 7 years and different peak years, ranging from 1990 to 1993. Table II.5.1 presents net additional taxes recommended for large corporations for fiscal years 1988 through 1994. IRS’ audit mission is to determine the correct tax liability. 
This includes determining additional taxes that taxpayers owe or that should be refunded to the taxpayers. Although IRS collects the data, IRS reports on audit results did not offset recommended taxes by recommended tax refunds. Our analysis of IRS data showed that subtracting refunds from reported additional taxes recommended would reduce additional taxes by between 8 and 34 percent over the 7 years. Over the 7 years, the additional taxes recommended per audited return averaged about $189,000 for all large corporations. Audits of higher asset corporations drove this average; these audits averaged about $444,000. A comparison of 1988 to 1994 showed that the ratio of recommended taxes per return increased for lower and higher asset corporations but at different rates and with different fluctuations, as follows: For lower asset corporations, the ratio increased just 3 percent. This ratio increased about 43 percent between 1990 and 1992 and then decreased through 1994. For higher asset corporations, the ratio increased 38 percent from 1988 to 1994 after fluctuations. This ratio increased from 1988 to 1990, flattened out for 1990 through 1992, increased in 1993, and then dropped 36 percent in 1994. As noted after table II.5, a few large cases drove the 1991 and 1993 results, according to the IRS officials. A comparison of 1988 to 1994 showed that taxes recommended per audit hour decreased 7 percent. This ratio increased from 1988 to 1990 but then fluctuated through 1994. It increased for higher asset corporations and decreased for lower asset corporations. 
More specifically, this ratio: increased for higher asset corporations because audits of those with assets of (1) $100 million to less than $250 million recommended more taxes for a proportionately smaller increase in audit hours, and (2) $250 million or more spent fewer audit hours to recommend a slight increase in the amount of taxes; and decreased for lower asset corporations because audits of those with assets of (1) $10 million to less than $50 million required more time to recommend less tax, and (2) $50 million to less than $100 million invested comparatively higher amounts of audit time to recommend higher tax amounts. IRS Examination officials pointed to a few large audits as major contributors to the 1991 and 1993 results. Our analysis of how IRS closed audits of large corporations revealed two distinct trends from 1988 through 1994. These trends were similar for lower and higher asset corporations. First, large corporations appealed a smaller percentage of the returns that recommended additional taxes and agreed with a higher percentage of these returns. A comparison of 1988 to 1994 showed that higher asset corporations reduced their appeal rate from 33 percent to 19 percent of the audited returns; this rate was 28 percent in 1993. Lower asset corporations reduced this rate from 24 percent to 16 percent of the returns. Second, the two types of no-change rates moved in different directions over the 7 years. The rate with audit adjustments steadily declined and the rate without audit adjustments slowly increased. The rate without adjustments increased for both types of corporations over the 7 years—a 100-percent increase for higher asset corporations and an 80-percent increase for lower asset corporations. Conversely, the rate with adjustments decreased by over 40 percent for both higher and lower asset corporations. 
Further, a comparison of 1988 rates to 1994 rates showed that the no-change rate without adjustments for lower asset corporations was about twice the rate for higher asset corporations. Over the 7 years, the overall no-change rate averaged 34 percent for lower asset corporations and 28 percent for higher asset corporations. IRS Examination officials noted these increases in the no-change rate without adjustments. For all large corporations, this rate was 16 percent in 1994; this rate had been at or above 10 percent for lower asset corporations and had reached 10 percent for higher asset corporations by 1994. These officials said a more satisfactory rate would be less than 10 percent within all IRS regions. They recognized the need to be more selective in placing returns into the audit stream. They noted that this will only be accomplished by universally using a process that better selects returns for audit and that then identifies issues on those returns that need to be audited. They pointed to a task force that IRS convened in 1994 to develop the universal process across IRS. As with the trends in closing audited returns, a comparison of 1988 to 1994 showed that the large corporations appealed less and agreed with more of the recommended tax amounts. Unlike with the return trends, the large corporations appealed the majority of these amounts (about two-thirds by 1994). Further, the larger the asset size, the more likely that the large corporation would appeal the recommended taxes rather than agree to pay them. On average, higher asset corporations appealed 79 percent of the recommended tax amounts compared to 67 percent for those with lower assets over the 7 years. Table III.1 shows our computation of the assessment rate on net tax recommendations—recommended additional taxes less recommended tax decreases. IRS assessed $2.33 billion (27 percent) of the $8.64 billion in taxes recommended. 
IRS made the assessments through December 1994 for audits closed in fiscal years 1988 through 1994. The higher and lower asset corporations had similar assessment rates: 26 percent and 28 percent, respectively. By asset class, the rates ranged from 20 percent to 38 percent. The 38 percent rate was driven by audits in the Manhattan District Office; Manhattan accounted for $467.9 million (23 percent) of taxes recommended and $372.7 million (48 percent) of taxes assessed in that asset class. IRS Examination officials itemized factors outside of the audits that depressed the assessment rate. For this reason, they cautioned against using the rate to measure audit effectiveness. They pointed to net operating losses that large corporations carried over to offset taxes recommended, as well as offsets or reductions from claims for refunds, abatements, retroactive tax law changes, and court decisions. They also said recommended taxes can be lost in Appeals due to the hazards of litigation and to large corporations withholding tax data until then; if IRS auditors had had these data, they would have been less likely to recommend the taxes. These officials did not know the extent to which these nonaudit factors affected the assessment rate. IRS reports audit results by the gross recommended additional taxes. Table III.2 shows the assessment rate when recommended tax increases and tax decreases are not netted. Our analysis of just the gross additional taxes recommended showed a higher assessment rate overall (38 percent) and by asset class (32 percent to 45 percent) compared to the net rate. IRS has estimated similar gross assessment rates using data from fiscal years 1992 through 1994 versus our 7-year period; IRS’ rate was 36 percent and ranged from 24 percent to 54 percent by asset class. 
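The 27 percent net rate reported above follows directly from the report's aggregate totals; a one-line check (dollars in billions, taken from the text):

```python
# Net assessment rate from the report's totals (our check, not IRS code).
recommended_net = 8.64   # net additional taxes recommended, FY 1988-94 ($ billions)
assessed = 2.33          # taxes assessed through December 1994 ($ billions)

net_rate = 100.0 * assessed / recommended_net
print(round(net_rate))   # 27
```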
Table III.3 provides the net assessment rate for the districts that recommended at least $100 million in additional taxes. Across IRS’ 64 districts, 20 recommended net additional taxes of $100 million or more between fiscal years 1988 and 1994. Three of the 20 districts (San Francisco, Manhattan, and Los Angeles) each recommended about $500 million or more and together accounted for $1.6 billion of the $8.6 billion recommended by all districts. As table III.3 shows, the assessment rates varied widely across the districts, ranging from a high of about 103 percent to less than 1 percent. When all but two of the IRS districts doing large corporate audits were included, the lowest rate was negative 20 percent. Assessment rates that exceed 100 percent indicate that appeals officers assessed more taxes than the revenue agents recommended. This can occur when further adjustments increase tax liability while the case is under Appeals’ jurisdiction. For example, liability increases can occur when an adjustment on another tax year decreases a net operating loss carryback deduction on the tax years being appealed, a math error is found, or a taxpayer files an amended return increasing tax liability. Negative assessment rates occur when the appeals officer not only concedes all taxes recommended but also approves a tax refund because the taxpayer filed a claim for refund or the appeals process reduced the reported tax liability. For example, the appeals officer can decrease tax liability because of an error in the taxes recommended or an increase in a loss carryback deduction from another tax year to the tax year in Appeals. IRS Examination officials did not know the specific reasons for the lowest rates. They noted that two regions tended to account for these lowest rates. They planned to follow up with the regions to uncover the reasons and see whether actions need to be taken. 
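The over-100-percent and negative rates described above are easiest to see with concrete numbers. The figures below are hypothetical, chosen only to reproduce the extremes the text reports (about 103 percent and negative 20 percent).

```python
# Illustrative arithmetic (hypothetical amounts) for how district assessment
# rates can exceed 100 percent or turn negative.

def assessment_rate(recommended, assessed):
    return 100.0 * assessed / recommended

# Appeals raises liability above the agents' recommendation (e.g., a math
# error is found while the case is under Appeals' jurisdiction):
over_100 = assessment_rate(recommended=10_000_000, assessed=10_300_000)

# Appeals concedes all recommended tax and also approves a refund claim:
negative = assessment_rate(recommended=10_000_000, assessed=-2_000_000)

print(round(over_100), round(negative))  # 103 -20
```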
Using gross additional taxes recommended, the assessment rates and rankings of the 20 districts changed slightly; the rates ranged from 90 percent to 3 percent (see table III.4). To develop a profile of large corporations, we used SOI data on large corporations that filed income tax returns for 1992. We split our profile into large corporations that were audited and not audited, using AIMS data. Within that framework, the elements we profiled included the type of industry, asset size, reported income and tax, and types of tax credits claimed. The majority of audited large corporations were engaged in finance, insurance, and real estate (34 percent); manufacturing (28 percent); and wholesale trade (13 percent). The industry profile differs for the large corporations that were not audited: a higher percentage of them were involved in finance, insurance, and real estate (52 percent), and a lower percentage were in manufacturing (17 percent) and wholesale trade (8 percent). The third-ranking industry was services (9 percent). For both audited and nonaudited returns of the corporations involved in finance, insurance, and real estate, the majority involved banks and credit agencies. In the manufacturing industry for audited returns, the corporations primarily manufactured electronic equipment, fabricated metals (such as metal cans and shipping containers), food, and chemicals. The manufacturing corporations whose returns were not audited had a similar industry profile; they were largely involved in the same industries. In 1992, 79 percent of the large corporations reported assets of $10 million to $100 million (lower asset size), and 21 percent reported $100 million or more (higher asset size). Over 60 percent reported less than $50 million in assets. For all corporations, the average asset size was $131 million: $32 million for lower asset corporations and $510 million for higher asset corporations. 
Comparing those audited versus not audited, the results were similar. For example, 75 percent of the audited corporations and 81 percent of the nonaudited corporations were in the lower asset group; the rest were higher asset corporations. On average, the audited corporations reported $136 million in assets while those not audited reported $128 million in assets. In dollar amounts, the nonaudited large corporations reported more total income, taxable income, and income tax than the audited corporations. However, the audited large corporations reported higher average amounts in these categories. These higher average amounts varied by asset group. For example: Higher asset corporations that were audited reported higher average amounts in these categories than those not audited. Lower asset corporations that were audited reported much higher average amounts in these categories than those not audited. The reported average amounts by the audited group usually doubled or almost doubled these amounts for the nonaudited group, except for total income. For total income, the audited group reported about 59 percent more on average. Further analysis uncovered other results for 1992, as follows. Among all large corporations, the approximate $31.5 billion in reported net tax was about 30 percent of the approximate $106.8 billion in reported taxable income. The percentage of returns reporting zero taxable income and zero tax was lower for audited returns compared to nonaudited returns. For example, 22 percent of audited returns reported zero net tax compared to 36 percent for nonaudited returns. We also analyzed the net operating loss deduction that large corporations claimed to reduce their 1992 taxable income. Audited higher asset corporations claimed about $2.5 billion (average of about $4.9 million), and those not audited claimed about $6.6 billion (average of about $7.8 million). 
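The "about 30 percent" figure above can be checked from the reported aggregates (dollars in billions, taken from the text):

```python
# Effective rate of reported net tax on reported taxable income, 1992
# (our check of the report's aggregate figures, $ billions).
reported_net_tax = 31.5
reported_taxable_income = 106.8

effective_rate = 100.0 * reported_net_tax / reported_taxable_income
print(round(effective_rate, 1))  # 29.5, i.e., about 30 percent as stated
```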
Among lower asset corporations, those audited claimed about $1.3 billion (average of about $1 million), and those not audited claimed about $3.9 billion (average of about $0.9 million). For corporations audited, the possessions tax credit accounted for 47 percent of the total credits; the foreign tax credit represented another 30 percent. Both credits were claimed primarily by higher asset corporations. Among corporations not audited, the possessions tax credit accounted for 62 percent, and the foreign tax credit accounted for 26 percent, of the total credits claimed. Because the possessions tax credit was claimed so much, we looked more closely at which types of audited and nonaudited large corporations claimed this credit. The differences were minor, as illustrated below. Among audited corporations, manufacturers claimed 98 percent; manufacturers of chemicals/drugs and food claimed 79 percent of the total. The majority of the corporations claiming this credit were in the higher asset group (90 percent). Among nonaudited corporations, manufacturers claimed 96 percent; manufacturers of chemicals/drugs and instruments and related products claimed 80 percent of the total. Further, 84 percent of the nonaudited corporations claiming the credit fell into the higher asset group. Table V.1 presents additional taxes recommended in constant dollars for large corporations for fiscal years 1988 through 1994. In a comparison of 1988 to 1994, additional taxes recommended in 1994 constant dollars decreased 4 percent overall and for lower asset corporations. For higher asset corporations, the recommended taxes decreased 5 percent, from $1,245 million in 1988 to $1,186 million in 1994. The greatest decrease, 14 percent, occurred for corporations with assets of $250 million and over ($877 million in 1988 to $758 million in 1994). 
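Appendix V's constant-dollar restatement works by dividing each current-dollar amount by a price index relative to 1994. A minimal sketch follows; both the deflator value and the current-dollar amount are hypothetical (the text does not give GAO's actual index).

```python
# Hedged sketch of a constant-dollar conversion; the deflator values and
# the current-dollar input are hypothetical, not GAO's figures.

deflator = {1988: 0.83, 1994: 1.00}   # price index relative to 1994 (assumed)

def to_1994_dollars(amount, year):
    return amount / deflator[year]

current_1988 = 1_033  # hypothetical current-dollar amount ($ millions)
constant_1988 = to_1994_dollars(current_1988, 1988)
print(round(constant_1988))  # about 1,245 in 1994 dollars
```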
Table V.2 presents net additional taxes recommended in constant dollars for large corporations for fiscal years 1988 through 1994. In 1994 constant dollars, net additional taxes recommended decreased slightly, 3 to 4 percent, overall and for higher and lower asset corporations in a comparison of 1988 to 1994 results. Of the four asset classes, only the audits of corporations with assets of $100 million to less than $250 million generated more net recommended taxes: about 15 percent more (from $323 million in 1988 to $372 million in 1994). Table V.3 presents additional taxes recommended per return in constant dollars for large corporations for fiscal years 1988 through 1994. In 1994 constant dollars, a comparison of 1988 to 1994 showed that the amount of additional taxes recommended per return decreased overall and for lower asset size corporations. For higher asset size corporations, this ratio increased 13 percent (about $355,000 in 1988 to $402,000 in 1994). Corporations with assets of $10 million to less than $50 million drove the overall change, with a 21 percent decrease in recommended taxes per return (from about $105,000 in 1988 to about $82,000 in 1994). Table V.4 presents additional taxes recommended per direct audit hour in constant dollars for large corporations for fiscal years 1988 through 1994. A decrease occurred overall and for each asset class except corporations with assets of $250 million and over. The overall decrease, from $1,710 in 1988 to $1,313 in 1994, was 23 percent. Corporations with assets of $10 million to less than $50 million experienced the greatest decrease, 43 percent ($1,455 in 1988 to $858 in 1994). Cecelia M. Ball, Project Manager; Royce L. Baker, Issue Area Coordinator; Thomas N. Bloom, Computer Specialist. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed the results of the Internal Revenue Service's (IRS) efforts to audit the tax returns of about 45,000 large corporations, focusing on: (1) audit trends for fiscal years 1988 through 1994; (2) the portion of taxes recommended by agents that were eventually assessed; and (3) the profiles of audited large corporations compared with those of nonaudited corporations. GAO found that: (1) for every dollar invested in large corporation audits, IRS ultimately assessed $15 in additional taxes for the years 1988 through 1994; (2) IRS invested more hours in directly auditing large corporations but recommended less additional tax per hour invested in 1994 compared to 1988; (3) in 1994, large corporations appealed 66 percent of the additional taxes that IRS recommended in its audits; (4) between 1988 and 1994, IRS assessed 27 percent of the recommended additional taxes either after agreement or resolution in appeals; (5) IRS believed that the assessment rate was not an accurate measure of audit effectiveness, since various factors outside the audit could lower the rate; (6) the assessment rates ranged from 20 to 38 percent for four asset classes and from 0 to 103 percent by IRS district, but the reasons for the disparities were unclear; (7) the rate for audits closing without any adjustments has rapidly increased, raising questions about how IRS selects returns for audit; and (8) audited corporations tended to report higher average incomes, tax liabilities, and other tax amounts than nonaudited corporations.
The Army has two reserve components, the Army National Guard and the Army Reserve. Both reserve components are composed primarily of citizen soldiers who balance the demands of civilian careers with military service on a part-time basis. During the Cold War, it was expected that the reserve forces would be a strategic reserve to supplement active forces in the event of extended conflict. However, since the mid-1990s, the reserves have been continuously mobilized to support operations worldwide, including those in Bosnia and Kosovo as well as operations in Afghanistan and Iraq. In today’s strategic environment, the Army’s reserve components have taken on a variety of different overseas missions as well as traditional and emerging domestic missions. The Army Reserve and the Army National Guard are part of the total Army, which also includes the active component. The Army Reserve is a federal force that is organized primarily to supply specialized combat support and combat service support skills to combat forces. The Army National Guard is composed of both combat forces and units that supply support skills. The Army National Guard, when mobilized for a federal mission, is under the command and control of the President. When not mobilized for a federal mission, Army National Guard units act under the control of the governors for state missions, typically responding to natural disasters and, more recently, protecting state assets from terrorist attacks. Individual training is a building block of the Army training process. It includes basic military training as well as occupational specialty training. Acquiring advanced individual skills enables a soldier to move into a unit, but acquisition of such skills does not necessarily equate with operational preparedness. Individual training must be integrated with unit training in a group situation, referred to as collective training, to achieve operational objectives. 
Traditionally, the Army used a mobilize-train-deploy strategy to prepare its reserve component units to act as a strategic reserve that was available to augment active forces during a crisis. Figure 1 shows that the traditional reserve component strategy called for a constant level of training until a unit was mobilized and underwent extensive post-mobilization training to prepare for deployment. Under the traditional training strategy, all training was focused on a unit’s primary missions and units were to be deployed to perform their primary missions. As reserve component requirements increased in recent years, the Army began to move away from its traditional strategy and began adopting a train-mobilize-deploy strategy that prepares reserve component forces to serve as an operational reserve, which regularly supports deployment requirements. Figure 2 shows that the Army’s current reserve component training strategy is based on a 5-year cycle during which training is increased to build capabilities. The current train-mobilize-deploy strategy is designed to train individuals and units to a prescribed level of readiness prior to mobilization in order to limit post-mobilization training. Several variables can affect the numbers of forces that are available to support ongoing operations, including the size and structure of active and reserve component forces and policies concerning the length of deployments and reserve component mobilizations. On January 19, 2007, the Secretary of Defense issued a memorandum that changed DOD’s mobilization and deployment policies. It eliminated a previous policy that had limited involuntary mobilizations to 24 cumulative months and thus made virtually all reserve component personnel available on an indefinite recurrent basis. However, the policy also limited involuntary mobilizations to 12 months at a time. 
It also established a reserve component unit planning objective of 1 year mobilized to 5 years demobilized, and created a requirement for mobilizations, including training and deployment, to be managed on a unit basis. In January 2008, the Commission on the National Guard and Reserves recommended that the Secretary of Defense ensure that training institutions and facilities were resourced to meet the needs of the total force. In particular, it recommended that institutions meet the current training needs of reserve component personnel and that each service reassess the number of training and administrative days that reserve component units and members need prior to activation. The Commission further recommended that the services fund and implement policies to increase pre-mobilization training and focus training on mission requirements. The Commission also stated that training equipment should be sufficient to give service members regular access to modern warfighting equipment so that they could train, develop, and maintain proficiency on the same types of equipment that they would use when deployed. In February 2009, the Army Audit Agency reported that Army National Guard and Army Reserve units often were unable to complete pre-mobilization training tasks because they were not able to stabilize staffing levels and obtain equipment needed for training. It further reported that units did not execute training requirements in the most efficient manner. The Army is able to effectively execute the portion of its reserve component training strategy that calls for training units on their assigned missions, but faces challenges in effectively executing the portion of the strategy that calls for training units on their primary missions. The Army’s new training strategy is based on a 5-year cycle that mirrors the former strategy in the early years of the cycle, but calls for alterations to the type and amounts of training conducted in the later years of the cycle. 
Specifically, in the early years of the cycle, units conduct 39 days of training that is focused on their primary missions just as they did under the former strategy. However, under the new strategy, after a unit is notified—generally in the middle to later stages of the training cycle (1 or 2 years prior to mobilization)—that it will be deploying for an operational mission, all the unit’s training becomes focused on that assigned mission, and training increases, up to 109 days in the year prior to mobilization. The Army’s Field Manual 7-0, Training for Full Spectrum Operations, defines effective training as that which builds proficiency, teamwork, confidence, and cohesiveness, and allows organizations to achieve their training objectives. The manual also specifies that organizations should train the way they intend to operate and be efficient by making the best use of training resources, including training time. The Army’s reserve component training strategy contains a number of assumptions related to effective and efficient training. First, the strategy explicitly assumes that the amount of training conducted after mobilization can be reduced because of the increased training that is conducted prior to mobilization. Second, it implicitly assumes that the training conducted in the early years of the cycle lays a foundation that can be built upon throughout the later stages of the cycle. Third, it implicitly assumes that units will have the necessary time, personnel, equipment, and support to conduct effective training on both individual and unit tasks throughout the training cycle. The Army currently prioritizes its available training resources and time to support units that are preparing to deploy for ongoing operations. As a result, unit training for assigned missions, which is conducted in the later stages of the Army’s 5-year training cycle, is generally effective. 
Table 1 shows the typical status of reserve component units with respect to available training time, personnel, equipment, and training support throughout the 5-year cycle. The table shows that during the later stages of the cycle, units have the training time, personnel, equipment, and support necessary for effective unit training. According to the reserve component training strategy, units’ yearly training increases during the 2 years prior to mobilization—up to 45 days 2 years prior to mobilization and up to 109 days in the year prior to mobilization. Because this increased pre-mobilization training is focused on the same assigned missions as the units’ post-mobilization training, the Army has been able to reduce the amount of post-mobilization training. Furthermore, in the later stages of the cycle, mission requirements are generally stabilized and the Army has traditionally stabilized unit personnel levels through the use of “Stop Loss” policies, which prevent personnel from leaving units. This stabilization allows the Army to conduct effective unit training that builds teamwork and unit cohesion. Units train the way they intend to operate—with the people who will deploy and on the missions they will perform. Under DOD’s Stop Loss policy, Army reserve component units were subject to stop loss 90 days prior to mobilization. However, the Army recently announced a comprehensive plan to eliminate stop loss, beginning in August 2009, while retaining the authority for future use under extraordinary circumstances. Personnel from units in our sample indicated that they preferred to conduct unit training later in the training cycle. They indicated that their units generally had increased personnel levels during the later stages of the cycle. Of the 22 units in our non-probability sample, 21 received additional personnel from other units to help them achieve the units’ required deployment strengths. 
The brigade combat teams that we met with also received significant numbers of personnel from other units to help prepare them for their deployments in 2009. In each of these cases, the units received the additional personnel during the later part of the training cycle—in the year prior to the units’ mobilizations or at the mobilization station. Personnel from the units we sampled also noted that equipment is more available in the later stages of the training cycle, when units also receive additional training support, including personnel who support unit training events by acting as observers, controllers, and trainers. Furthermore, the Army has found that the later stages of the cycle are the optimum times to conduct unit training. In the Army’s 2009 Posture Statement, the Army indicated that an extended training period close to, or contiguous with, mobilization station arrival enabled commanders to attain the highest levels of readiness and unit capability. Additionally, two February 2009 Army Audit Agency reports on Army National Guard and Army Reserve pre-mobilization training found that the best practice for completing required pre-mobilization training tasks was to conduct the majority of those tasks immediately prior to mobilization, when mission-specific equipment is more available. Finally, in a May 2009 letter to the Secretary of Defense, the Adjutants General Association of the United States stated that training late in the cycle, just prior to mobilization, is often required to enhance soldier readiness. As noted previously in table 1, the Army is unable to set the conditions required for effective unit training during the early years of the cycle, when units are focused on primary mission training. Training time, personnel, equipment, and training support are key enablers of effective unit training, but the Army faces challenges that are associated with each of these enablers during the early stages of the training cycle. 
In addition, our current and prior reviews have found that units that are not scheduled to deploy receive lower priorities for resources and training support. Therefore, a number of reasons make it unlikely that units would be adequately prepared to deploy and conduct their primary missions following a reduced post-mobilization training period such as the one called for under the current strategy. First, units are receiving the same level of primary mission training as they were under the former strategy, which called for more lengthy post-mobilization training periods. Second, annual reserve component attrition rates that typically approach 20 percent limit the effectiveness of unit training that is conducted to build teamwork and unit cohesion. Because the training strategy calls for a 5-year training cycle and attrition occurs each year, unit training that is conducted early in the cycle and designed to build teamwork and unit cohesion will become less beneficial with each passing year, as team members depart the unit. DOD reports indicate that attrition rates for the Army National Guard and Army Reserve ranged from 17 percent to 22 percent from fiscal years 2003 through 2007. Because of these attrition rates, a significant percentage of the unit personnel who train on the units’ primary missions during the early stages of the 5-year cycle will not be in the unit at the end of the cycle, when the unit is available to deploy. Third, units that are training for primary missions during the early stages of the cycle also experience personnel and equipment shortages, often because they are tasked to give up personnel and equipment to support deploying units. Personnel shortages result from a variety of reasons. Some personnel are not available for training because they are recovering from injuries or illnesses, while others are unavailable because of pending disciplinary actions. In addition, many soldiers have not met individual training requirements. 
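The compounding effect of annual attrition on early-cycle unit cohesion can be illustrated with simple arithmetic (an illustrative sketch, not drawn from the report; the function and the 20 percent example rate are ours, chosen from within the 17 to 22 percent range cited above, and the calculation assumes a constant annual rate):

```python
# Illustrative sketch: fraction of a unit's original personnel still
# present after a given number of years, assuming a constant annual
# attrition rate (an assumption made for this example).

def share_remaining(annual_attrition: float, years: int) -> float:
    """Fraction of original personnel remaining after `years` years."""
    return (1 - annual_attrition) ** years

# At 20 percent annual attrition, roughly 41 percent of the soldiers
# who trained together in year 1 remain by the end of year 5.
print(round(share_remaining(0.20, 4), 2))  # prints 0.41
```

Even at the lower end of the reported range (17 percent), fewer than half of the original personnel would remain after 4 years, which is consistent with the observation that early-cycle teamwork training becomes less beneficial with each passing year.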
According to the Army’s 2009 Posture Statement, the Army National Guard had 67,623 soldiers who were non-deployable in fiscal year 2008 because of incomplete initial entry training, medical, or other issues. For the same period, the Army Reserve had 36,974 soldiers who were non-deployable for similar reasons. These personnel shortages can directly impact the level of unit training that a unit is able to achieve prior to mobilization. In addition, equipment and support issues are also a concern early in the training cycle, when units are training for their primary missions. In his March 2009 statement before the Senate Armed Services Subcommittee on Personnel, the Director of the Army National Guard stated that the lack of equipment availability for training remains an issue. Further, the 2008 Army Reserve Posture Statement noted that the Army Reserve was forced to expend significant resources to move equipment between units and training locations to address shortages. Units in our sample also experienced equipment challenges during the early stages of the training cycle when they were training for their primary missions. Specifically, 12 of the 22 units in our sample faced equipment shortages that impacted their ability to train early in the cycle. Furthermore, training support is limited during the early years of the cycle. For example, the Army’s active component does not provide observers, controllers, and trainers to reserve component units to support their primary mission training, which is conducted early in the cycle. While DOD’s 12-month mobilization policy has not hindered the Army’s overall ability to train its reserve component forces and has reduced the length of deployments, it has not fully achieved its intended purpose of reducing stress on the force by providing predictability. According to testimony by the Secretary of Defense, the intended purpose of DOD’s mobilization policy was to reduce stress on the force by, in part, improving predictability. 
While the policy has led to shorter deployments, it has also caused units to mobilize and deploy more frequently, and units are also spending more time away from home in training when not mobilized. The 12-month mobilization policy has significantly reduced the length of deployments for the Army’s reserve component forces. Because units must spend part of their mobilization periods training for their assigned missions, they are actually deployed for only part of the time that they are mobilized. Under the previous mobilization policy, reserve component mobilizations were limited to 24 cumulative months and many reserve component units were deploying to Iraq or Afghanistan for 12 to 15 months. Under the current policy, which limits mobilizations to 12 months, deployments are averaging 9 to 10 months. Because the demand for reserve component forces has remained high and reserve component force levels have remained fairly stable, the 12-month mobilization policy, which has resulted in shorter deployments, has also led to more frequent deployments. Figure 3 illustrates the relationship between the length of deployments and the number of deployments when requirements and force structure are steady. It shows that 12-month deployments, which were typical under the previous policy, result in 3 deployments over a 36-month period. However, 9-month deployments, under the current policy, require 4 deployments to support the same requirements over a 36-month period. As previously noted, the Army’s reserve component strategy calls for reserve component units to have 4 years of training between deployments, but the 12-month mobilization policy, with its associated shorter deployments and more frequent mobilizations, has led to situations where units do not have 4 years available to conduct training. Demands for certain occupational specialties have remained particularly high. 
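The rotation arithmetic that figure 3 illustrates can be sketched as follows (an illustrative sketch, not from the report; the function name is ours, and the calculation assumes a steady requirement covered by back-to-back deployments over the 36-month window used in the figure):

```python
# Illustrative sketch: with a steady requirement and back-to-back
# rotations, the number of deployments needed over a window is the
# window length divided by the deployment length (rounded up).

def rotations_needed(window_months: int, deployment_months: int) -> int:
    """Deployments required to keep one unit in place for the window."""
    # Ceiling division: a partial period still requires another rotation.
    return -(-window_months // deployment_months)

# 12-month deployments (previous policy) vs. 9-month deployments
# (current policy) over the 36-month period shown in figure 3:
print(rotations_needed(36, 12))  # prints 3
print(rotations_needed(36, 9))   # prints 4
```

With requirements and force structure held steady, shortening each deployment by a quarter raises the number of rotations the same pool of units must supply, which is why shorter deployments translate into more frequent mobilizations.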
Army leadership recently testified that reserve component soldiers are experiencing less than 3 years between deployments, and personnel in some high-demand units, such as civil affairs units, are receiving as little as 13 months between deployments. For example, personnel from one of the units in our sample, an aviation battalion, experienced frequent deployments. Personnel from the battalion returned from deployment in 2008 and were notified that the unit would be mobilized again in 2011. As previously noted, under the Army’s reserve component strategy, unit training requirements build from 39 days in the first 2 years of the training cycle to as high as 109 days in the year prior to mobilization. However, the 12-month mobilization policy is leading to more frequent deployments, and units are mobilizing and deploying after 3 years at home rather than after 4. Because units are supposed to receive initial notification of their assigned missions 2 years prior to mobilization, the extended assigned mission training that is scheduled to occur after notification is still maintained under the compressed schedule, but the 39 days of primary mission training that is scheduled to be conducted in the second year of the training cycle, just prior to notification, is often eliminated. Therefore, since the extended training periods are maintained and the shorter training periods are eliminated, units are required to spend a higher proportion of their “at home” time conducting training. As part of its mobilization policy, DOD has established a goal that calls for reserve component forces to be mobilized for 1 year and demobilized for 5 years. However, the Army’s reserve component forces are not meeting this goal because of high operational requirements, stable force structure, and the 12-month mobilization policy that is causing more frequent deployments. 
When the Secretary of Defense testified that the mobilization policy was intended to reduce stress on the force by, in part, improving predictability in the mobilization and deployment process, he also noted that the department is not achieving its goal of 1 year mobilized to 5 years demobilized. Earlier, in September 2007, the Defense Science Board evaluated DOD’s mobilization policy and concluded that the goal of 1 year mobilized and 5 years not mobilized could not be achieved given the level of operational demand and the end-strength increases that had been planned. Thus, for the foreseeable future, DOD’s goal will be difficult to achieve because operational demands for reserve component forces are expected to remain high and force structure levels are expected to remain relatively constant. Furthermore, the Army does not expect to reach the goal of 1 year mobilized and 5 years not mobilized in the near future. In its 2009 Posture Statement, the Army indicated that it expected to progress to 1 year mobilized to 4 years demobilized by 2011 due, in part, to the drawdown in Iraq. However, the statement does not address the impact that increased operations in Afghanistan may have on the projected progress. Leaders and soldiers in one of the larger units we contacted said that the 12-month mobilization policy, which has led to more frequent deployments and training periods, has actually increased stress and decreased predictability. Specifically, they stated that they would prefer to be away from home for a single longer period of time rather than many shorter periods of time. However, in our other readiness work, we have found that the Air Force has developed an alternative approach to provide better predictability for its deploying active and reserve component personnel. The Air Force deployment model groups occupational specialties into 5 different “tempo bands” based on ongoing operational requirements. 
Personnel in the first band should expect to be deployed about the same length of time as they are home between deployments. Personnel in bands two, three, four, and five can expect to be home two, three, four, or five times longer than they are deployed, respectively. The Air Force expects this model to increase predictability for its forces. In accordance with DOD Directive 1200.17, which directs the Secretaries of the Military Departments to ensure that facilities and training areas are available to support reserve component training requirements, reserve component forces are generally receiving the access to training facilities that is necessary to prepare them for their assigned missions. However, the Army’s training facilities lack the capacity necessary to prepare all of the Army’s forces for the full range of individual and unit training requirements, including those associated with primary as well as assigned missions. In addressing its capacity shortages, the Army has given priority access to personnel and units that have established mobilization dates or assigned missions. As a result, active and reserve component forces without assigned missions often experience delays in gaining access to training needed to prepare them for their primary missions. While the Army is exploring or has several initiatives under way to address training constraints, it has not identified the total requirements associated with its reserve component training strategy or the training capacity necessary to support the strategy. DOD Directive 1200.17 directs the Secretaries of the Military Departments to ensure facilities and training areas are available to support reserve component training requirements. It also directs the Secretaries to allocate resources where required to support a “train-mobilize-deploy” construct. 
As previously discussed, reserve component forces undergo individual training as well as collective (unit) training at various times in their training cycles in order to prepare them for their primary and assigned missions. Individual training is typically conducted at military schools or other specialized training sites while collective training occurs at larger training centers, such as the Combat Training Centers, and mobilization sites where units complete their final deployment preparations. Once units are assigned missions in support of ongoing operations, they are granted necessary access to training facilities. According to officials from the Army’s Training and Doctrine Command, missions and mobilization dates are two key factors that drive individual training opportunities and access to training facilities. U.S. Forces Command officials also said that priority access to training facilities is based on units’ mobilization and latest arrival in theater dates, rather than their status as part of the active or reserve component. Based on information from the units we contacted, we found that units generally had access to training facilities once they were assigned missions. Personnel from the units in our sample and the brigade combat teams we met with reported that they had been granted priority access to individual and collective training once their units were assigned missions. Specifically, in preparing for their most recent missions, 23 of the 24 units reported that they did not have access issues involving collective training facilities and 22 units reported that they did not have access issues involving individual training facilities. Officials from one of the units that reported access issues explained that this was because their soldiers did not receive necessary orders until a few days before they were mobilized. 
Officials from one of the other units explained that the access issues arose because the unit was under tight time constraints as part of the 2007 surge force that deployed to Iraq. Officials from the third unit that reported access issues explained that it trained using a motor pool to simulate a detention facility because it could not access a more appropriate training facility. Capacity constraints involving personnel, equipment, and infrastructure limit training opportunities for some forces at individual and collective training facilities. In some cases, the Army is exploring or has ongoing initiatives that are intended to help address constraints on individual and collective training. Because deploying forces have higher priority and existing training facilities do not have sufficient capacity to accommodate all training needs, reserve component forces that have not been assigned missions often experience delays in gaining access to individual training needed to prepare them for their primary missions. While both the Army Reserve and Army National Guard are limited in their ability to fully train all soldiers on individual tasks within desired time frames, the effect of these limitations is particularly significant for the Army National Guard. The Army National Guard’s individual training goal is to have no more than 15 percent of its soldiers awaiting individual training at any given time. However, table 3 shows that the Army National Guard has not been able to achieve this goal since 2001 as a result of individual training capacity limitations. Although the percentage of Army National Guard soldiers awaiting individual training declined to 17 percent in 2004 and 2005, it has remained at or above 22 percent since that time. Furthermore, Army National Guard training officials stated that they do not expect the number of soldiers awaiting training to change their specialty to decrease from the March 2009 level. 
In March 2009, 80,000 Army National Guard soldiers were awaiting various types of individual training, of whom 35,000 were awaiting training to change their specialty, such as from aviation to infantry. In both the active and reserve components, incoming recruits often prefer to sign contracts to begin basic training in the summer. This Army-wide preference exacerbates capacity constraints at individual training facilities during the summer months. While the number of soldiers awaiting training decreases over the summer months because most soldiers begin training at that time, Army officials said the backlog could be reduced further if the Army fully accounted for this summer surge during its planning process; instead, the Army plans as if individual training requirements were evenly distributed across the fiscal year. The Army National Guard expects to reduce the number of soldiers awaiting basic training from 30,000 to 10,000 by September 30, 2009, but this number could be reduced even further if capacity constraints were addressed. While capacity is not an issue during the fall and winter months, Army officials expect the number of soldiers awaiting training to increase during those months because incoming recruits generally do not want to begin training then. Army officials said they are exploring ways to even out the training demand, such as offering bonuses for soldiers to enlist and attend basic training outside of the summer months. Additionally, the Army formed an integrated process team specifically to develop options for mitigating the summer surge, including options to expand capacity. At the time of our review, the team’s work was ongoing, and it was too soon to know what, if any, actions would be taken as a result of its efforts. The delays in individual training opportunities that are caused by capacity constraints are distributed across the Army in both the active and reserve components. 
The Army has a review process that compares Army-wide individual training requirements to the training capacity at the Army’s active training facilities and allocates training quotas to the active and reserve components. The 2008 data from the process are depicted in table 4 and show that the active and reserve components have approximately the same level of unmet training requirements at Army Training and Doctrine Command schools. Capacity constraints at collective training facilities such as the Army’s combat training centers and mobilization stations have limited training opportunities for both active and reserve component units. As we have previously reported, the Army’s strategy requires that all brigade combat teams be trained at the combat training centers prior to deployment. Because the combat training centers do not have adequate capacity, training opportunities are now limited to only those active and reserve brigade combat teams that have been assigned missions requiring them to control battle-space. As a result, most active and reserve component units, including brigade combat teams that are assigned detainee operations or convoy security missions, do not train at the combat training centers. These units conduct training at other locations, such as the Army’s mobilization stations. In the past, capacity constraints have also limited reserve component access to facilities at certain mobilization stations. For example, officials from First Army, which is responsible for training mobilized reserve component units, stated that facilities have not always been accessible at sites such as Ft. Bragg and Ft. Dix because they were being used by active component forces. Because of this, First Army is realigning its resources and will no longer be using the constrained facilities to train mobilized reserve component forces. 
First Army officials expect the realignment to increase training capacity because its resources will be concentrated at mobilization stations where it has greater control over scheduling. However, DOD’s 2008 Sustainable Ranges Report identified shortfalls at a number of major collective training facilities, including the mobilization stations that First Army plans to continue to use. These shortfalls involve land and airspace, ranges, infrastructure, and feedback/scoring systems, as well as a number of other resources. Four of the 24 units we contacted identified shortfalls at the mobilization stations where they conducted collective training in preparation for their most recent missions. Two of these units stated that their mobilization stations did not have adequate infrastructure, citing shortfalls in maintenance and hangar facilities, respectively. The other two units stated that their mobilization stations were in geographic locations that hindered training because of the terrain, explaining that Mississippi and western Oklahoma did not realistically replicate conditions in Afghanistan and Iraq, respectively. Army Reserve officials told us that similar shortfalls characterize many of the collective training facilities owned by the reserve components because the Army employed tiered resourcing for several years, which relegated reserve component requirements to a lower priority for funding than active component requirements. These facilities are commonly used by reserve component units to execute collective training prior to mobilization.

Initiatives to Help Address Training Capacity Constraints

The Army has several initiatives under way to help address individual and collective training capacity constraints. For example: The Army has developed a database, which is intended to account for both active and reserve component individual training facilities under a “One Army School” system. 
However, the Army has not accounted for reserve component individual training facilities when filling training requirements, even though, in its 2007 Training Capacity Assessment, the Army’s Training and Doctrine Command found that significant reserve component infrastructure was available to meet individual training requirements. The Army is attempting to address individual training capacity constraints through the use of mobile training teams. These mobile training teams contain transportable training assets—facilities, equipment, and personnel—which deploy to units’ home stations to provide individual training. Mobile training teams are currently being used to provide classes that are in high demand, such as professional military education, foreign language, and cultural awareness. These mobile training teams partially relieve capacity constraints resulting from limited infrastructure at training facilities. The Army National Guard has established an Exportable Combat Training Center program to address facility, personnel, and equipment limitations that impact pre-mobilization collective training for Army National Guard units. The program enhances training by providing instrumentation to collect and record individual and unit performance, exercise control personnel, opposition forces, and civilians on the battlefield; program officials also coordinate the use of appropriate facilities. Exportable Combat Training Center events are intended to serve as the culminating collective training event prior to a unit’s mobilization and are designed to validate training proficiency up to the company level. The Army National Guard conducted four Exportable Combat Training Center program training events from 2005 through 2008, and it intends to conduct five training events from 2009 through 2010. 
The Army Reserve has a concept plan for a Combat Support Training Center to address capability constraints in combat support and combat service support collective training 1 to 2 years prior to a unit’s mobilization. This concept has been approved at the Department of the Army level but is currently unfunded. The Combat Support Training Center would leverage existing active and reserve component combat support and combat service support expertise and thus would not have to compete with active component forces for these capabilities. The Combat Support Training Center program is expected to provide instrumentation, an operations group, opposition forces, civilians on the battlefield, interpreters, media teams, and realistic training environments, similar to Combat Training Centers such as the National Training Center at Fort Irwin, California. The first Combat Support Training Center event is scheduled to occur in July 2009 at Fort McCoy, Wisconsin. While the Army has a number of initiatives intended to relieve training capacity constraints, it has not identified the total personnel, equipment, and facility resources needed to support its reserve component training strategy. As previously discussed, DOD Directive 1200.17 directs the Secretaries of the Military Departments to ensure facilities and training areas are available to support reserve component training requirements. It also directs the Secretaries to allocate resources where required to support a “train-mobilize-deploy” construct. In November 2008, the Secretary of Defense directed the Secretaries of the Military Departments to review the capacity of their training institutions to determine if they are properly resourced to prepare all military members to meet mission requirements. The Army has ongoing efforts to address this tasking, but these efforts do not fully address all individual and collective training requirements.
In June 2009, the Army’s Training and Doctrine Command is scheduled to produce an update to its 2007 Total Army Capacity Assessment of individual training requirements. However, both the 2007 and 2009 assessments focus exclusively on training infrastructure, and neither assessment addresses personnel and equipment constraints that have limited training in the past. Further, the Army’s efforts to identify collective training requirements are affected by inaccurate assumptions regarding the use of ranges. Specifically, the Army Range Requirements Model, which is used to determine Army range requirements, calculates requirements based on an assumption that reserve component forces will be mobilized for 1 of 6 years. Since reserve component forces are being mobilized more frequently—about 1 of 3 years, according to Army officials—the model understates actual training requirements. The model also understates active component range requirements since it calculates requirements based on planned operational tempos rather than the actual higher tempos that are occurring to support ongoing operations. Because the model understates current requirements, it does not accurately project the full magnitude of capacity constraints at the Army’s ranges. In recent years, reserve component units have successfully deployed for a wide range of assigned missions, and the training and preparation for these assigned missions, which is conducted in the later stages of the Army’s 5-year cycle, was generally effective. However, collective training for primary missions, conducted in the early stages of the 5-year cycle, generally is not optimized because of various challenges. Such challenges include limited training time, changing personnel because of attrition, personnel and equipment shortages, and limited training support. 
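The understatement produced by the range model’s outdated mobilization-frequency assumption can be illustrated with back-of-the-envelope arithmetic. The Python sketch below is purely illustrative — the unit count and range-day figures are invented, and the actual Army Range Requirements Model is far more detailed:

```python
# Illustrative only: how an outdated mobilization-frequency assumption
# understates annual range demand. Unit counts and range-day figures
# are invented; this is not the Army Range Requirements Model.

UNITS = 100                          # hypothetical reserve component units
RANGE_DAYS_PER_MOBILIZING_UNIT = 30  # hypothetical range-days per mobilization

def annual_range_demand(mobilization_interval_years: int) -> int:
    """Range-days demanded per year if each unit mobilizes once every
    `mobilization_interval_years` years."""
    units_mobilizing_per_year = UNITS / mobilization_interval_years
    return int(units_mobilizing_per_year * RANGE_DAYS_PER_MOBILIZING_UNIT)

modeled = annual_range_demand(6)  # the model's 1-in-6-years assumption
actual = annual_range_demand(3)   # the observed 1-in-3-years tempo

print(modeled, actual)  # 500 1000: the model captures half of actual demand
```

Under these assumed figures the model would plan for 500 range-days a year while units actually demand 1,000 — the factor-of-two understatement that follows directly from halving the mobilization interval.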
Given that ongoing operations are expected to continue for some time, it is imperative that the Army have a strategy that is executable and provides for efficient use of training resources. Otherwise, units may continue to use limited training time and resources to build teams that are unlikely to deploy together and to train units for collective tasks that they may not perform. In light of the continued high demand for reserve forces and the Army’s existing force structure levels, DOD’s 12-month mobilization policy is likely to continue to result in more frequent and less predictable deployment and training periods, particularly for personnel in high-demand occupational specialties, raising questions about the need to reevaluate the policy and consider alternatives. Furthermore, without complete information concerning the personnel, equipment, and facilities support that is necessary to execute its reserve component training strategy, the Army will not be able to identify total requirements for its strategy, establish priorities and related resource needs, and be assured that current initiatives are addressing priority needs. To improve the Army’s training strategy and DOD’s mobilization policy for Army reserve component personnel, we recommend that the Secretary of Defense take the following three actions: To better ensure the Army has an executable strategy for effectively training its reserve component forces, we recommend the Secretary of Defense direct the Secretary of the Army to reevaluate and adjust its reserve component training strategy to fully account for the factors that limit the effectiveness of unit training for primary missions in the early years of the 5-year cycle.
Elements that should be considered in reevaluating the training strategy include: Whether the total training days allotted for reserve component training are adequate to train units for both primary and assigned missions, which may require significantly different resources and skills. Whether consolidating collective training later in the training cycle, as opposed to spreading it through the cycle, would enhance the effectiveness of the training and increase predictability. To better ensure DOD’s mobilization policy is having the intended effect of providing reserve component personnel with predictable training, mobilization, and deployment schedules while also improving DOD’s ability to effectively train and employ its reserve component forces, we recommend that the Secretary of Defense reevaluate DOD’s mobilization policy for Army reserve component personnel and consider whether a more flexible policy that allows greater variations in the length of mobilizations or establishes deployment goals based on occupational specialty or unit type would better meet DOD’s goals to reduce stress on the force and improve predictability for personnel. To better ensure that the Army has a reserve component training strategy that it is able to execute, we recommend that the Secretary of Defense direct the Secretary of the Army to determine the range of resources and support that are necessary to fully implement the strategy. Elements that should be accounted for include: the personnel, equipment, and facilities required to fully support individual training requirements; the range space required to fully support individual and collective training requirements; and the full support costs associated with the Army reserve component training strategy—including personnel, equipment, and facilities. In written comments on a draft of this report, DOD concurred or partially concurred with all of our recommendations.
Specifically, DOD concurred with the element of our first recommendation that calls for the Secretary of Defense to direct the Secretary of the Army to consider, when reevaluating the Army’s reserve component training strategy, whether the total training days allotted for reserve component training are adequate to train units for both primary and assigned missions. DOD noted that reserve component units do not always have sufficient time in their baseline training year to prepare for both a primary and assigned mission when those missions are substantially different. DOD also stated that today’s global demand for Army forces prevents reserve component units from sustaining their 5-year training cycle, since the Army must continuously balance its strategic depth against available resources to meet current operational requirements. DOD, however, did not state that it would take any action. We agree with DOD’s comments, and in fact, these comments reflect the same conditions that led us to conclude that current operational realities necessitate a reevaluation of the Army’s reserve component training strategy, including the adequacy of training time allotted for reserve component training. Therefore, we continue to believe our recommendation has merit. DOD partially concurred with the second element of our first recommendation that the department, in reevaluating its training strategy, consider whether consolidating collective training later in the training cycle, as opposed to spreading it through the cycle, would enhance the effectiveness of the training and increase predictability. In comments, DOD noted that concentrating training later in the cycle compounds the existing resource-constrained environment and accentuates competition for limited training resources, facilities, equipment, and ranges. DOD, however, did not state that it plans to take any specific action. 
As noted in our report, the Army faces challenges associated with training time, personnel, equipment, and training support during the early stages of the training cycle and is, therefore, unable to set the conditions required for effective unit training during the early years of the cycle. Further, units we sampled indicated they preferred to conduct collective training later in the training cycle when personnel and equipment levels are more stable. The Army has also acknowledged, in its 2009 Posture Statement, that an extended training period close to or contiguous with arriving at the mobilization station allowed commanders to achieve the highest levels of readiness and unit capability. We continue to believe that collective training should be conducted when training enablers such as personnel and equipment are present to ensure the training is most effective and that the Army should reevaluate its current approach. DOD partially concurred with our second recommendation that the Secretary of Defense reevaluate DOD’s mobilization policy for Army reserve component personnel and consider whether a more flexible policy, which allows greater variations in the length of mobilization or which establishes deployment goals based on occupational specialty or unit type, would better meet DOD’s goals to reduce stress on the force and improve predictability for personnel. In DOD’s response, the department noted the Secretary of Defense will continue to evaluate those circumstances that warrant changes or exceptions to the mobilization policy but commented that the 1-year mobilization has reduced stress on service members, their families and employers. 
DOD also acknowledged the challenge associated with implementing a 5-year training and preparation cycle and identified several innovations designed to enhance predictability and reduce stress on reserve component soldiers and units, including the Regional Training Centers developed by the Army Reserve to assist units in preparing for mobilization and the consolidation of its training support structure at six mobilization training centers to better support all deploying units. Our report acknowledges department efforts to increase training capacity and support to units through initiatives like those pointed out by the department. However, we also note that in spite of these initiatives, DOD’s mobilization policy is not achieving the intended purpose of reducing stress on the force by providing predictability. For example, our report discusses how the 1-year mobilization, while limiting the amount of time reserve component soldiers and units are deployed, is resulting in more frequent deployments and is, therefore, not reducing stress on soldiers and units. We continue to believe the mobilization policy needs to be reevaluated to determine whether a more flexible approach that recognizes variances in deployment frequency based on occupational specialty and unit type would improve predictability. DOD partially concurred with our third recommendation that the Secretary of Defense direct the Secretary of the Army to determine the range of resources and support that are necessary to fully implement the Army’s strategy for training its reserve components. In comments, DOD noted that an all-volunteer force trained to meet its persistent operational requirements will require sufficient resources in order to be trained and ready.
To do so, DOD further noted, will require a holistic approach that leverages the consolidation of training locations in conjunction with the utilization of live, distributed learning, virtual, and constructive technologies to deliver more training to home station locations. DOD also stated the Army will need to prioritize the allocation of funds supporting training initiatives while embedding the costs to implement them in its Program Objective Memorandum. We agree that the Army’s various training initiatives, many of which are discussed in our report, should be prioritized and the costs associated with those initiatives should be reflected in the Army’s Program Objective Memorandum. However, we believe the Army must first determine the full range of resources and support required to implement its training strategy in order to establish priorities and resource needs and to be assured that current initiatives are addressing priority needs. The full text of DOD’s written comments is reprinted in appendix II. We are sending copies of this report to other appropriate congressional committees and the Secretary of Defense. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
To determine the extent to which the Army is able to effectively implement its strategy for training reserve component forces, we reviewed documentation outlining the Army’s approach to training its reserve component forces, such as Field Manual 7.0, Training for Full Spectrum Operations, and Department of the Army Executive Order 150-08, Reserve Component Deployment Expeditionary Force Pre- and Post-Mobilization Training Strategy. Additionally, we discussed the training strategy, factors that limit execution of the strategy, and initiatives under way to address any limiting factors with officials responsible for training, including officials from the Department of the Army Training Directorate, U.S. Army Forces Command, the Army National Guard Readiness Center, First Army, the Army Training and Doctrine Command, and the U.S. Army Reserve Command. To determine the impact personnel levels have on training effectiveness, we obtained and reviewed data on attrition. To assess the reliability of these data, we reviewed documentation and interviewed officials and determined these data to be sufficiently reliable. To assess the extent to which mobilization and deployment laws, regulations, goals, and policies impact the Army’s ability to train and employ reserve component forces, we reviewed laws, regulations, goals, and policies that impact the way the Army trains and employs its reserve component forces, such as relevant sections of Titles 10 and 32 of the U.S. Code and DOD’s January 2007 mobilization policy. Additionally, we interviewed Army officials from organizations such as U.S. Army Reserve Command, the National Guard Bureau, and U.S. Joint Forces Command to discuss the impact of mobilization and deployment documents.
Lastly, we reviewed and analyzed data from units and various Army offices, including data showing trends in pre- and post-mobilization training time, to assess how mobilization and deployment laws, regulations, goals, and policies may be impacting reserve component units and personnel. To determine the extent to which access to military schools and skill training facilities and ranges affects the preparation of reserve component forces to support ongoing operations, we reviewed documentation such as DOD’s 2008 Sustainable Ranges Report, the 2007 Total Army Training Capacity Assessment, and outputs from DOD’s Structure Manning Decision Review. To determine how training requirements are prioritized, we also interviewed officials from the Army’s Training and Doctrine Command and the U.S. Army Forces Command. These commands schedule units and soldiers to attend individual and collective training. Further, we reviewed documentation and interviewed officials to determine initiatives that the Army has under way to address capacity constraints and to assess total training requirements. We also obtained and reviewed data on Army National Guard soldiers awaiting individual training. We assessed the reliability of these data by reviewing existing documentation and interviewing knowledgeable officials and found these data to be sufficiently reliable for our purposes. Finally, we observed training at the Army’s National Training Center at Fort Irwin, California, and the Army National Guard’s exportable training conducted at Camp Blanding, Florida. To inform all three of our objectives, we sent a list of questions to U.S. Central Command and to Northern Command and held a follow-on video teleconference to discuss in more detail Northern Command’s response to our questions. Additionally, we surveyed a non-probability sample of 22 Army National Guard or Army Reserve units and conducted follow-up interviews with officials from 15 of these units.
While the results of our survey and discussions are not projectable to the entire reserve component, we chose units of different types and sizes for our sample. In addition, we chose the proportion of Army National Guard and Reserve units for our sample based on the proportion of mobilized forces from each of the components. Our surveys and interviews addressed a range of issues, including deployment and notification timelines; the timing and effectiveness of pre-deployment, post-deployment, and in-theater training; and access to training facilities, schoolhouses, and ranges. Additionally, we interviewed commanders and personnel from two Army National Guard brigade combat teams that were training at the National Training Center at Fort Irwin, California, and at Camp Blanding, Florida. Of the total of 24 units in our non-probability sample, 22 had returned from supporting ongoing operations in Iraq, Afghanistan, or Kosovo, and 2 were preparing for deployment. We conducted this performance audit from September 2008 through June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Michael Ferren, Assistant Director; Grace Coleman; Nicole Harms; Ron La Due Lake; Susan Tindall; Nate Tranquilli; and John Vallario made key contributions to this report.
The Army's strategy for training its reserve component calls for units to conduct training on the primary missions for which they were organized and designed as well as the missions units are assigned in support of ongoing operations. The training is to be conducted over a 5-year cycle with a focus on primary missions during the early years and assigned missions during the later years. In response to mandates, GAO assessed the extent to which (1) the Army is able to execute its strategy for training reserve component forces for their primary and assigned missions; (2) mobilization and deployment laws, regulations, goals, and policies impact the Army's ability to train and employ these forces; and (3) access to military schools and skill training facilities and ranges affects the preparation of reserve component forces. To address these objectives, GAO analyzed relevant training strategies and policies, laws, and data and surveyed 22 Army reserve component units returning from deployments in the past 12 months. The Army is able to execute the portion of its reserve component training strategy that calls for units to effectively train for their assigned missions in support of ongoing operations, but faces challenges in executing the portion of the strategy that calls for units to effectively train on primary missions. Unit training for assigned missions, which is conducted in the later years of the 5-year training cycle, is generally effective because the Army prioritizes its available resources to support units that are preparing to deploy for ongoing operations--units receive increased training time; mission requirements and personnel levels are stabilized; and personnel and equipment shortages are addressed while support is increased. Conversely, units training for their primary missions in the early years of the cycle receive less time to train and experience equipment and personnel shortages, which adversely affect teamwork and unit cohesion. 
Also, support for their training is limited. These challenges limit the effectiveness of primary mission training and could impact their ability to conduct their primary missions within the current strategy's time frames. While DOD's current 12-month mobilization policy has not hindered the Army's overall ability to train its reserve component forces and has reduced the length of deployments, it has not fully achieved its intended purpose of reducing stress on the force by providing predictability to soldiers. Because units must spend part of their mobilization periods in training, they are actually deploying for about 10 months under this 12-month mobilization policy, whereas they typically deployed for periods of 12 to 15 months under the previous policy. Under the current policy, the Army's reserve component forces are deploying more frequently and spending more time away from home in training when they are not mobilized. Moreover, unit leaders and personnel GAO interviewed said that the 12-month mobilization policy has decreased predictability and increased stress for individuals. GAO noted alternate approaches that can improve predictability. For example, the Air Force recently developed a deployment model categorizing five grouped occupational specialties based on operational requirements and length of time home between deployments. The model is intended to increase predictability for its forces and thus reduce their stress. Reserve component forces are generally receiving access to training facilities necessary to prepare them for their assigned missions, but the Army lacks capacity to prepare all of its forces for the full range of training requirements. In addressing capacity shortages, the Army has given priority to deploying units and personnel. As a result, active and reserve component forces without assigned missions often experience delays in accessing training for their primary missions. 
Although the Army is reviewing some aspects of its training capacity, it has not fully identified its training requirements and capacity and therefore will not have a sound basis for prioritizing available resources and cannot be assured that the initiatives it has under way will fully address gaps in its training capacity.
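The deployment-frequency trade-off noted above — about 10 months deployed under the 12-month mobilization policy versus 12 to 15 months previously, but more often — follows from simple arithmetic when theater demand is constant. The sketch below uses invented figures (demand and pool size) solely to illustrate the mechanism; it is not drawn from DOD data:

```python
# Illustrative only: with constant theater demand, shorter deployments
# mean each unit in the pool deploys more often. All figures invented.

DEMAND_UNIT_MONTHS_PER_YEAR = 120  # hypothetical deployed presence to cover
AVAILABLE_UNITS = 60               # hypothetical pool of units sharing the load

def deployments_per_unit_per_decade(deployment_length_months: float) -> float:
    """How often each unit must deploy over 10 years to meet the demand."""
    deployments_per_year = DEMAND_UNIT_MONTHS_PER_YEAR / deployment_length_months
    return deployments_per_year * 10 / AVAILABLE_UNITS

previous_policy = deployments_per_unit_per_decade(13.5)  # 12- to 15-month deployments
current_policy = deployments_per_unit_per_decade(10.0)   # ~10 months deployed per mobilization

print(round(previous_policy, 2), round(current_policy, 2))
```

With these assumed figures each unit deploys about 1.5 times per decade under the previous policy but 2 times under the current one — shorter tours, bought at the price of more frequent ones.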
As the principal component of the National Airspace System, FAA’s air traffic control system must operate continuously—24 hours a day, 365 days a year. Under federal law, FAA has the primary responsibility for operating a common air traffic control system—a vast network of radars; automated data processing, navigation, and communications equipment; and air traffic control facilities. FAA meets this responsibility by providing such services as controlling takeoffs and landings and managing the flow of air traffic between airports. The users of FAA’s services include the military, other government users, private pilots, and commercial aircraft operators. Projects in FAA’s modernization program are primarily organized around seven functional areas—automation, communications, facilities, navigation and landing, surveillance, weather, and mission support. FAA expects to spend approximately $41 billion for its modernization program through 2004. Of this amount, Congress appropriated over $27 billion for fiscal years 1982 through 1999. The agency expects that approximately $13 billion will be provided for fiscal years 2000 through 2004. See figure 1 for an illustration of how FAA’s appropriation was divided among the seven functional areas. Figure 2 illustrates how FAA’s appropriation was divided by project status—completed projects, ongoing projects, canceled/restructured projects, and personnel-related expenses. Over the past 17 years, FAA’s modernization projects have experienced substantial cost overruns, lengthy schedule delays, and significant performance shortfalls. Because of the size, complexity, cost, and problem-plagued past of FAA’s modernization program, we have designated it a high-risk information technology investment since 1995. FAA has encountered difficulty in acquiring new systems to help achieve its goals of replacing the air traffic control system’s aging infrastructure and of meeting the projected increase in air traffic. 
In the 1980s and early 1990s, the agency did not follow the structured approach outlined in federal acquisition guidance. Even after the agency revised its approach in 1991—to address past shortcomings in the design and implementation of the approach—problems persisted with FAA’s air traffic control modernization program. In 1996, FAA began a new approach that emphasized, once again, the need for discipline in selecting, monitoring, and evaluating modernization projects. Despite this new approach, problems persist with FAA’s ability to effectively implement and manage its modernization program. We have identified a number of root causes that have contributed to modernization problems. These causes are related to the lack of a disciplined acquisition management approach. In the 1980s and early 1990s, we reported that problems with modernization projects occurred largely because FAA did not follow the guidance outlined in Office of Management and Budget Circular A-109, which is the principal guidance for acquiring major systems in the federal government. Circular A-109 calls for following a disciplined, five-phased approach to acquisition in order to minimize problems, such as cost increases and schedule delays. The five phases include (1) determining mission needs; (2) identifying and exploring alternative design concepts; (3) demonstrating alternative design concepts, including prototype testing, and evaluation; (4) initiating full-scale development and limited production, including independent testing; and (5) full production. Before moving from one phase to the next, the guidance calls for a key decision point, at which time agency heads are to evaluate the cost, schedule, and performance parameters of major projects. During these reviews, any management concerns about these parameters must be resolved before the acquisition is allowed to proceed. 
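The decision-point discipline that Circular A-109 prescribes can be sketched as a gated sequence. The phase names below paraphrase the report; the gate function is a hypothetical illustration of the control flow, not the Circular’s actual procedure:

```python
# Illustrative sketch of Circular A-109's five-phase acquisition approach
# with key decision points; phase names paraphrased from the report.

A109_PHASES = [
    "determine mission needs",
    "identify and explore alternative design concepts",
    "demonstrate alternative design concepts",
    "full-scale development and limited production",
    "full production",
]

def advance(current_phase: int, review_passed: bool) -> int:
    """A project moves to the next phase only after a key decision point
    resolves cost, schedule, and performance concerns."""
    if not review_passed:
        return current_phase  # concerns unresolved: hold at the current phase
    return min(current_phase + 1, len(A109_PHASES) - 1)

# A project that clears two reviews, fails one, then clears another:
phase = 0
for review_passed in (True, True, False, True):
    phase = advance(phase, review_passed)

print(A109_PHASES[phase])
```

The point of the structure is that a failed review stalls the project rather than letting it proceed — the discipline FAA bypassed when it merged phases, as described below.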
From the inception of the air traffic control modernization program until 1991, FAA did not follow Circular A-109 guidance. The agency believed that it could deliver and install new systems more quickly by combining Circular A-109 phases. For example, FAA merged the first three phases into one, under which the agency performed some prototype testing but ignored mission need and alternative analyses. However, FAA’s failure to follow Circular A-109 resulted in delays in many of the major systems in the modernization effort, most notably FAA’s centerpiece project, known as the Advanced Automation System (AAS). For example, FAA contracted for the production of a key component of this project before it had fully defined requirements for this component. Between 1983 and 1991, the lack of clarity and decisiveness in resolving requirements contributed to costs for the project increasing from $2.6 billion to $4.5 billion and the schedule slipping by 7 years. In February 1991, FAA issued revised guidance on major acquisitions, which put FAA policy in compliance with Circular A-109. Among the changes incorporated in this guidance was a requirement that new projects have a mission needs statement approved before being included in FAA’s budget. The guidance also required that alternatives be identified and evaluated and that operational testing be conducted and reviewed by an independent test group within FAA before production decisions were made. Moreover, FAA required that program managers submit a risk management plan, including measures to reduce risk, that FAA senior managers must approve before an acquisition could proceed to the next phase. Program managers were also required to develop acquisition program baselines (boundaries) for the most costly major acquisitions—usually those exceeding $150 million. 
These baseline documents were intended to promote stability and control costs by establishing quantified targets for key performance, cost, and schedule parameters that are critical to the success of the acquisition. Although FAA revised its acquisition policies in 1991 to instill more discipline into the acquisition management process, shortcomings in both design and implementation limited the process’s effectiveness. For example, the agency’s acquisition orders and guidance still did not require an analysis of current system performance as the starting point in the acquisition process. Instead, under the order, the starting point for the acquisition process was the mission needs statement. The order did not include any procedures or guidance for conducting a mission analysis before generating mission needs statements and made little mention of what types of data analyses were expected. As a result, the agency did not document that its current assets could no longer fulfill its needs and did not have any assurance that it was not wasting scarce resources in developing systems that were not the most appropriate and cost-effective. Similarly, senior acquisition officials did not thoroughly review project justifications to ensure that they were adequately supported. Other conditions that contributed to this lack of discipline in FAA’s acquisition process during this period included the frequent turnover of FAA senior managers. For example, between 1982 and 1995, the average tenure of the FAA Administrator was less than 2 years. This lack of continuity in personnel allowed the agency’s bureaucracy to focus on short-term improvements, avoid accountability, and resist fundamental changes. FAA continued to experience problems in the mid-1990s with its major acquisitions. For example, in 1994, FAA restructured AAS after the estimated cost to deploy the system had tripled, capabilities were significantly less than promised, and delays were expected to run nearly a decade.
Additionally, the costs of the Voice Switching and Control System increased by 400 percent, from about $260 million to $1.4 billion, and the project’s planned date for implementation slipped by 6 years. Concerned about the continuing slow pace of the air traffic control modernization program—which led at times to FAA’s having to implement costly interim projects to sustain the ATC system—FAA sought from the Congress exemptions from many federal procurement rules. The agency asserted that these rules contributed to its acquisition problems and that exemptions would allow it to reduce the time and cost to deliver new products and services. In response, Congress exempted FAA in 1995 from many federal procurement rules, and the agency implemented its Acquisition Management System (AMS) on April 1, 1996. AMS is intended to provide high-level acquisition policy and guidance and to establish rigorous management practices for selecting and monitoring investments. To date, FAA has established a structure that is generally sound and could provide the discipline needed to help ensure that ATC modernization projects are implemented in a cost-effective manner. However, our past and recent work have shown that FAA has fallen short when it comes to implementing practices to build discipline into acquisition management. Specifically, our preliminary findings on FAA’s present approach indicate that the agency has not fully implemented an effective process for monitoring the cost, schedule, benefits, performance, and risk of its key projects throughout their life-cycle. Additionally, FAA lacks an evaluation process for assessing outcomes after projects have been developed to help improve the selection and monitoring of future projects. As we reported in 1995, exempting FAA from procurement rules could result in a somewhat more expeditious acquisition process, but those looking for dramatic, immediate changes in the modernization program would likely be disappointed. 
Our work showed then, and continues to show today, that the schedule, cost, and performance problems are caused by factors other than procurement rules. We have reported on several root causes of FAA’s past modernization problems. First, FAA lacks reliable cost estimating practices and cost accounting data, which leaves it at risk of making ill-informed decisions on critical and costly air traffic control systems and limits the ability of congressional decisionmakers to make trade-offs among FAA programs. Second, FAA attempted to modernize the National Airspace System without a complete systems architecture, or blueprint, to guide development and evolution and did not have the management structure needed to enforce its architecture once completed. The result has been unnecessarily higher spending to buy, integrate, and maintain hardware and software. Third, FAA processes for acquiring software for air traffic control systems are ad hoc, sometimes chaotic, and not repeatable across projects. As a result, FAA is at great risk of acquiring software that does not perform as intended and is not delivered on time and within budget. Finally, FAA’s organizational culture—the values, beliefs, attitudes, and expectations shared by an organization’s members that affect their behavior and the behavior of the whole organization—is an underlying cause of acquisition problems. When employees act in ways that do not reflect a strong commitment to mission focus, accountability, coordination, and adaptability, acquisitions can be impaired. We made recommendations in these reports to correct these root causes. FAA has taken a number of steps, in addition to implementing its Acquisition Management System, to overcome past problems with modernization efforts. However, most of these initiatives are just getting under way, and it is too soon to tell how successful they will be. Additionally, the agency has now completed work on about 90 modernization projects.
In some cases, the costs were higher and the development longer than expected. The FAA Administrator took a notable step in November 1997 when she began an outreach effort to the aviation community to build consensus on and seek commitment to the future direction of the agency’s modernization program. As a result of this outreach effort, FAA and the aviation community agreed to (1) use an incremental approach to modernizing the National Airspace System, referred to as the “build a little, test a little” approach; (2) revise its blueprint for modernizing this system; and (3) deploy certain technologies earlier than FAA had planned because the aviation industry believed that these technologies could provide immediate benefits. These practices differ from those of the past in which FAA made unilateral decisions about air traffic control modernization and tried to deploy large, complex projects all at once, known as the “big bang” approach. Furthermore, FAA has actions under way to address the root causes we have identified in the past with its acquisitions. First, FAA has begun to develop a cost estimating process for its projects that will satisfy recognized estimating standards; draft guidance on reporting project cost estimates as ranges rather than precise point estimates; and develop a cost accounting system. Specifically, FAA plans to complete a cost estimating handbook, which should help improve the agency’s approach to estimating project costs. However, FAA has not established a firm date for issuing the handbook or for completing other tasks related to cost estimating. As for cost accounting, FAA had hoped to have a system operating by October 1998, but officials underestimated the complexity of developing the system and found that their implementation milestones were unrealistic. The agency now projects that the system will be fully operational by April 2001. 
Second, FAA has begun to develop a complete systems architecture for its modernization program and estimated in May 1998 that it would take 18 to 24 months to complete the development. Third, FAA has initiated efforts to improve its software acquisition processes. However, these efforts have not been implemented agencywide. In this connection, the agency hired a Chief Information Officer in February 1999. It is expected that FAA will establish a management structure similar to that required of departments under the Chief Information Officers provision of the Clinger-Cohen Act of 1996, as we recommended. If so, the Chief Information Officer organization would be responsible for activities related to information technology, including software acquisition and systems architecture. Finally, FAA has outlined its overall structure for changing its organizational culture and described its ongoing actions to influence that culture. In this area, FAA has a pilot program under way for a new compensation program that it plans to implement agencywide. In recent years, FAA has claimed some success with delivering systems under its modernization program. While the agency has completed some modernization projects since 1982, many of the major projects, especially in the automation area, are years behind schedule. The agency has spent $6.3 billion of the over $27 billion appropriated between 1982 and 1999 on 93 completed projects. We note that although FAA completed several of its major projects, they generally cost more than anticipated and were delivered behind schedule. For example, FAA has declared the Display System Replacement a success because it deployed operational equipment to the first of 20 sites in December 1998. However, FAA’s 1983 modernization plan called for a similar system under the Advanced Automation System to be deployed in 1990. Likewise, FAA is now completing the deployment of other key systems first identified in its 1983 modernization plan.
For example, FAA expects to complete the deployment later this year of two projects—Airport Surface Detection Equipment and Air Route Surveillance Radar—which were originally scheduled to be completed in 1990 and 1995, respectively. Of FAA’s key modernization projects, the agency has successfully deployed two large-scale projects over the past 17 years—both involving the HOST computer system. FAA completed the implementation of the HOST computer in 1988 and is currently replacing portions of this system. Both of these projects involve replacing hardware while utilizing existing system software. On a related issue, our work on the Year 2000 problem has shown that FAA has made tremendous progress over the past year, but much remains to be done to complete the validation and implementation of FAA’s mission-critical systems. In addition to these systems, the agency is concerned that system failures by external organizations, such as airports and foreign air traffic control systems, could seriously affect FAA’s ability to provide aviation services. For example, we recently reported that 26 of the largest 50 airports in the United States are not planning to be Year 2000 compliant by June 30, 1999. Because of the risk of anticipated and unanticipated failures—whether from internal systems or from reliance on external partners and suppliers—a comprehensive business continuity and contingency plan is crucial to continuing core operations. FAA drafted its Year 2000 Business Continuity and Contingency Plan in December 1998 and is currently reviewing it. The agency plans to release four more iterations of this plan by the end of the year, with the next version due out in April 1999. We and others have expressed some concerns with FAA’s draft plans, which the agency is working to address. In conclusion, Mr. Chairman, FAA has fallen short over the past two decades in implementing a disciplined acquisition management approach.
While the agency has many of the elements in place to improve its management of the modernization program, implementation is key to the agency’s future success in this area. Among the positive steps that FAA has taken are actions to bring stability to the agency’s senior management ranks, as evidenced by the Administrator’s commitment to serve a full 5-year term. Moreover, she has filled many key management positions that had been vacant and has also begun to provide senior managers with incentives to work together toward agency goals. For the most part, FAA will need to sustain its commitment to fully implementing the various initiatives underway. As a first priority, it will be important for the agency to continue all of its efforts to help ensure that it can fulfill its mission when the year 2000 arrives. As for the longer term, FAA’s continued collaboration with the aviation community will allow the agency to develop future plans for air traffic control modernization, including establishing realistic and clear goals and measures for tracking progress. Similarly, fully implementing solutions to the root causes of modernization problems and strengthening FAA’s control over modernization investments will better position the agency to consistently deliver modernization projects within established cost, schedule, and performance goals.
Pursuant to a congressional request, GAO discussed the Federal Aviation Administration's (FAA) Air Traffic Control Modernization Program, focusing on: (1) the causes of the problems that have plagued FAA's modernization program for nearly two decades; (2) recent agency efforts to overcome these problems; and (3) the readiness of FAA and others to meet year 2000 requirements. GAO noted that: (1) from the inception of the air traffic control modernization program to today, FAA has not consistently followed a disciplined management approach for acquiring new systems; (2) in the 1980s and early 1990s, FAA did not follow the phased approach of federal acquisition guidance designed to help mitigate the cost, schedule, and performance risk associated with the development of major systems; (3) FAA believed it could develop and install new systems more quickly by combining several of the five phases outlined in this guidance; (4) however, as a result of not following this disciplined, phased approach, FAA often encountered major difficulties such as those associated with developing the Advanced Automation System; (5) in 1995, Congress exempted FAA from many federal procurement rules and regulations; (6) in April 1996, FAA implemented an acquisition management system, which emphasized the need for a disciplined approach to acquisition management; (7) however, GAO found continuing weaknesses in key areas such as how FAA monitors the status of projects throughout their life cycles; (8) FAA has taken a number of steps to overcome problems with past modernization efforts; (9) FAA has moved away from its practice of taking on large, complex projects all at once and is now acquiring new systems by using a more incremental approach; (10) in addition, FAA is no longer making unilateral decisions about air traffic control modernization; (11) instead, it has been working actively with the aviation community to make decisions more collaboratively; (12) furthermore, FAA has begun to 
address some of the root causes of its modernization problems by implementing processes to help: (a) improve its ability to estimate and account for project costs; (b) develop a complete architecture for modernizing the National Airspace System; (c) reduce the risks associated with software development; and (d) reform the organization's culture, including providing incentives to make managers more accountable; (13) while FAA has delivered some of its major systems, it must be recognized that many of these projects encountered difficulties in meeting their original cost and schedule goals, and the baselines were subsequently revised; and (14) FAA has taken critical steps over the past year to address problems associated with the date change to the year 2000, but much work remains to be done to help ensure that FAA and other key players such as airports have made needed fixes and have contingency plans in place so that operations can continue should problems arise.
The Department of Justice established the Office for Domestic Preparedness (ODP) in 1998 within the Office of Justice Programs to assist state and local first responders in acquiring specialized training and equipment needed to respond to and manage terrorist incidents involving weapons of mass destruction. ODP, which was transferred to DHS upon its creation in March 2003, has been a principal source of domestic preparedness grant funds. These grants are a means of achieving an important goal—enhancing the ability of first responders to prevent, prepare for, respond to, and recover from terrorist incidents with well-planned and well-coordinated efforts that involve police, fire, emergency medical, public health, and other personnel from multiple jurisdictions. In March 2004, the Secretary of Homeland Security consolidated ODP with the Office of State and Local Government Coordination to form the Office of State and Local Government Coordination and Preparedness (SLGCP). In addition, other preparedness grant programs from agencies within DHS were also transferred to SLGCP. SLGCP, which reports directly to the Secretary, was created to provide a one-stop shop for the numerous federal preparedness initiatives applicable to state and local first responders. Within SLGCP, ODP continues to have program management and monitoring responsibilities for the domestic preparedness grants. From fiscal year 2002 through fiscal year 2005, the amount of domestic preparedness grants awarded by ODP increased from about $436 million to about $3.3 billion. The scope of ODP’s grant programs expanded as well, from funding only first responder advanced equipment, exercises, and administrative activities in fiscal year 2002 to funding a range of preparedness planning activities, exercises, training, equipment purchases, and related program management and administrative costs in fiscal year 2005.
During fiscal years 2002 through 2005, the State Homeland Security Grant Program and Urban Areas Security Initiative program accounted for about 69 percent of total ODP grant funds. Table 1 shows the amounts provided for the domestic preparedness grant programs. For fiscal years 2002 through 2005, ODP awarded approximately $2.1 billion in urban area grant funds to selected urban areas identified by DHS. The amount of individual urban area grants is determined through a combination of factors, including current threat estimates, an assessment of each area’s critical assets, and population density. For the same period, ODP awarded approximately $5.1 billion in statewide grant funds to states to enhance domestic preparedness. Under its current funding formula, approximately 40 percent of statewide grant funds are shared equally among states, while the remaining amount is distributed according to state population. Several congressional proposals have been advanced to alter the statewide funding formula to base it more directly on risk considerations. One proposal would largely maintain the portion of funds shared equally by the states but would base the distribution of the remaining funds on a risk-based formula similar to the one currently used for urban area grants. Another proposal from the House Homeland Security Committee would reduce the minimum amount of funding shared equally by states to approximately 14 percent of total funding and establish a board to allocate the remaining funds through an evaluation of threat, vulnerability, and the potential consequences of a terrorist attack. GAO supports a risk-based approach to homeland security. Adoption of a risk management framework can aid in assessing risk by determining which vulnerabilities should be addressed in what ways within available resources.
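The equal-share-plus-population formula described above can be sketched in a few lines of arithmetic. The following is an illustrative computation only—not ODP's actual allocation code—and the state names, population figures, and the `allocate_statewide_grants` helper are hypothetical:

```python
# Illustrative sketch of an equal-share-plus-population grant formula:
# roughly 40 percent of the pool is shared equally among states, and the
# remainder is distributed in proportion to state population. This is not
# ODP's actual computation; all names and figures below are placeholders.

def allocate_statewide_grants(total_funds, populations, equal_share=0.40):
    """Return each state's allocation under the equal-share-plus-population formula."""
    num_states = len(populations)
    equal_pool = total_funds * equal_share          # split evenly among states
    population_pool = total_funds - equal_pool      # split by population share
    total_population = sum(populations.values())
    return {
        state: equal_pool / num_states + population_pool * pop / total_population
        for state, pop in populations.items()
    }

# Hypothetical three-state example with a $100 million pool.
allocations = allocate_statewide_grants(
    100_000_000,
    {"State A": 10_000_000, "State B": 5_000_000, "State C": 1_000_000},
)
```

Under this sketch, every state receives the same base amount regardless of size, which is why less populous states fare relatively better than they would under a purely population- or risk-based formula—the tension at the heart of the congressional proposals noted above.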
Risk for specific assets or locations is defined by two conditions: (1) the probability or likelihood, quantitative or qualitative, that an adverse event would occur, and (2) the consequences, that is, the damage resulting from the event should it occur. Because it is unlikely that sufficient resources will be available to address all risks, it becomes necessary to prioritize both risks and the actions taken to reduce those risks, taking cost into consideration. For example, which actions will have the greatest net potential benefit in reducing one or more risks? Over time, ODP has modified its grant application processes and procedures for awarding grants to states, governing how states distribute funds to local jurisdictions, and facilitating reimbursements for states and localities. To obtain funding, state and urban area grantees must submit applications to ODP and have them approved. In fiscal year 2004, ODP began to streamline the application process. According to ODP, based on feedback from the grantees, and to continue to improve the grant programs, it combined three grant programs into a single grant application solicitation. In fiscal year 2005, the number of combined programs increased to six. ODP stated that the consolidation was done to streamline the grant application process and better coordinate federal, state, and local grant funding distribution and operations. For the statewide grant programs, ODP has allowed the states flexibility in deciding how the grant programs are structured and implemented in their states. In general, states are allowed to determine such things as the following: the formula for distributing grant funds to local jurisdictional units; the definition of what constitutes a local jurisdiction eligible to receive funds, such as a multicounty area; the organization or agency that would be designated to manage the grant program; and whether the state or local jurisdictions would purchase grant-funded items for the local jurisdictions.
Urban area grantees, for the most part, have had flexibilities similar to those of the states and could, in coordination with members of the Urban Area Working Group, designate contiguous jurisdictions to receive grant funds. For the first round of the urban area grants in fiscal year 2003, the grants were made directly to the seven urban areas identified as recipients. Starting with the second round of urban area grants in 2003, grants were made to states, which then subgranted the funds to the designated urban areas, but retained responsibility for administering the grant program. The core city and county/counties work with the state administrative agency to define the geographic borders of the urban area and coordinate with the Urban Area Working Group. Once the grant funds are awarded to the states and then subgranted to the local jurisdictions or urban areas, certain legal and procurement requirements have to be met, such as a city council’s approval to accept grant awards. Once these requirements are satisfied, states, local jurisdictions, and urban areas can then obligate their funds for first responder equipment, exercises, training, and services. Generally, when a local jurisdiction or urban area directly incurs an expenditure, it submits related procurement documents, such as invoices, to the state. The state then draws down the funds from the Justice Department’s Office of Justice Programs. According to this office, funds from the U.S. Treasury are usually deposited with the states’ financial institution within 48 hours. The states, in turn, provide the funds to the local jurisdiction or urban area. Since the first announcement of the dramatic increase in first responder grants after the terrorist attacks of September 11, 2001, the speed with which the funding reached localities has been a matter of concern and some criticism. 
Congress, state and local officials, and others expressed concerns about the time ODP was taking to award grant funds to states and for states to transfer grant funds to local jurisdictions. Beginning in fiscal year 2003, ODP, at congressional direction, demonstrated significant progress in expediting grant awards to states. For the fiscal year 2002 statewide grants, ODP was not required to award funds to states within a specific time frame. During fiscal year 2002, ODP took 123 days to make the statewide grant application available to states and, on average, about 21 days to approve states’ applications after receipt. For the second round of fiscal year 2003 statewide grants, however, the appropriations act required that ODP make the grant application available to states within 15 days of enactment of the appropriation and approve or disapprove states’ applications within 15 days of receipt. According to ODP data, ODP made the grant application for this round of grants available to states within the required deadline and awarded over 90 percent of the grants within 14 days of receiving the applications. The appropriations act also mandated that states submit grant applications within 30 days of the grant announcement. According to ODP data, all states met the statutory 30-day mandate; in fact, the average number of days from grant announcement to application submission declined from about 81 days in fiscal year 2002 to about 23 days for the second round of fiscal year 2003 statewide grants. The transfer of funds from states to local jurisdictions has also received attention from Congress and ODP. To expedite the transfer of grant funds from the states to local jurisdictions, ODP program guidelines and subsequent appropriations acts imposed additional deadlines on states. For the fiscal year 2002 statewide grants, there were no mandatory deadlines or dates by which states should transfer grant funds to localities. 
One of the states we visited, for example, took 91 days to transfer these grant funds to a local jurisdiction while another state we visited took 305 days. Beginning with the first round of fiscal year 2003 statewide grants, ODP required in its program guidelines that states transfer grant funds to local jurisdictions within 45 days of the grant award date. Congress subsequently included this requirement in the appropriations act for the second round of fiscal year 2003 statewide grant funds. To ensure compliance, ODP required states to submit a certification form indicating that all awarded grant funds had been transferred within the required 45-day period. States that were unable to meet the 45-day period had to explain the reasons for not transferring the funds and indicate when the funds would be transferred. According to ODP, for the first and second rounds of the fiscal year 2003 grants, respectively, 33 and 31 states certified that the required 45-day period had been met. To further assist states in expediting the transfer of grant funds to local jurisdictions, ODP also modified its requirements for documentation to be submitted as part of the grant application process for fiscal years 2002 and 2003. In fiscal year 2002, ODP required states to submit and have approved by ODP budget detail worksheets and program narratives indicating how the grant funds would be used for equipment, exercises, and administration. If a state failed to submit the required documentation, ODP would award the grant funds, with the special condition that the state could not transfer, expend, or draw down any grant funds until the required documentation was submitted and approved. In fiscal year 2002, ODP imposed special conditions on 37 states for failure to submit the required documentation and removed the condition only after the states submitted the documentation. The time required to remove the special conditions ranged from about 1 month to 21 months.
For example, in one state we reviewed, ODP awarded the fiscal year 2002 statewide grant funds and notified the state of the special conditions on September 13, 2002; the special conditions were removed about 6 months later on March 18, 2003, after the state had met those conditions. In fiscal year 2003, however, ODP allowed states to move forward more quickly, by permitting them to transfer grant funds to local jurisdictions before all required grant documents had been submitted. If a state failed to submit the required documentation for the first round of fiscal year 2003 statewide grants, ODP awarded the grant funds and allowed the state to transfer the funds to local jurisdictions. While the state and local jurisdictions could not expend—and the state could not draw down—the grant funds until the required documentation was submitted and approved, they could plan their expenditures and begin state and locally required procedures, such as obtaining approval of the state legislature or city council to use the funds. Later that fiscal year, ODP further relaxed this requirement and allowed the states to transfer, expend, and draw down grant funds immediately after ODP awarded the grant funds. The states only had to submit all documentation along with their biannual progress reports. Despite congressional and ODP efforts to expedite the award of grant funds to states and the transfer of those funds to localities, some states and local jurisdictions could not expend the grant funds to purchase equipment or services until other, nonfederal requirements were met. Some state and local officials’ ability to spend grant funds was complicated by the need to meet various state and local legal and procurement requirements and approval processes, which could add months to the process of purchasing equipment after grant funds had been awarded. For example, in one state we visited, the legislature must approve how the grant funds will be expended. 
If the state legislature is not in session when the grant funds are awarded, it could take as long as 4 months to obtain state approval to spend the funds. Some states, in conjunction with DHS, have modified their procurement practices to expedite the procurement of equipment and services. Officials in two of the five states we visited told us they established centralized purchasing systems that allow equipment and services to be purchased by the state on behalf of local jurisdictions, freeing them from some local legal and procurement requirements. In addition, DHS’s Homeland Security Advisory Council Task Force reported that several states developed statewide procurement contracts that allow local jurisdictions to buy equipment and services using a prenegotiated state contract. DHS has also offered options for equipment procurement, through agreements with the U.S. Department of Defense’s Defense Logistics Agency and the Marine Corps Systems Command, to allow state and local jurisdictions to purchase equipment directly from their prime vendors. These agreements provide an alternative to state and local procurement processes and, according to DHS, often result in a more rapid product delivery at a lower cost. Congress has also taken steps to address a problem that some states and localities cited concerning a federal law, the Cash Management Improvement Act (CMIA), that provides for reimbursement to states and localities only after they have incurred an obligation, such as a purchase order, to pay for goods and services. Until fiscal year 2005, after submitting the appropriate documentation, states and localities could receive federal funds to pay for these goods and services several days before the payment was due so that they did not have to use their own funds for payment.
However, according to DHS’s Homeland Security Advisory Council Task Force report, many municipalities and counties had difficulty participating in this process either because they did not receive their federal funds before payment had to be made or their local governments required funds to be on hand before commencing the procurement process. Officials in one city we visited said that, to solve the latter problem, the city had to set up a new emergency operations account with its own funds. The task force recommended that for fiscal year 2005, ODP homeland security grants be exempt from a provision of CMIA to allow funds to be provided to states and municipalities up to 120 days in advance of expenditures. In response, the fiscal year 2005 DHS appropriations legislation included a provision that exempts formula-based grants (e.g., the State Homeland Security Grant Program grants) and discretionary grants, including the Urban Areas Security Initiative and other ODP grants, from the CMIA’s requirement that an agency schedule the transfer of funds to a state so as to minimize the time elapsing between the transfer of funds from the U.S. Treasury and the state’s disbursement of the funds for program purposes. ODP’s fiscal year 2005 program guidelines informed grantees and subgrantees that they are allowed to draw down funds up to 120 days prior to expenditure. In addition, DHS efforts are under way to identify and disseminate best practices, including how states and localities manage legal and procurement issues that affect grant distribution. DHS’s Homeland Security Advisory Council Task Force reported that some jurisdictions have been “very innovative” in developing mechanisms to support the procurement and delivery of emergency-response-related equipment.
The task force recommended that, among other things, DHS should, in coordination with state, county, and other governments, identify, compile, and disseminate best practices to help states address grant management issues. ODP has responded by establishing a new Homeland Security Preparedness Technical Assistance Program service to enhance the grant management capabilities of state administrative agencies and by surveying states to identify their technical needs and best practices they have developed related to managing and accounting for ODP grants, including the procurement of equipment and services at the state and local levels. This information is to serve as a foundation for the development of a tailored, on-site assistance program for states to ensure that identified best practices are implemented and critical grant management needs and problems are addressed. According to ODP, the technical assistance service was made operational in December 2004; however, the final compendium of best grants management practices will not be formally released until May 2005. Despite efforts to streamline local procurement practices, some challenges remain at the state and local levels. An ODP requirement that is based on language in the appropriations act could delay procurements, particularly in states that have a centralized purchasing system. Specifically, beginning with the fiscal year 2004 grant cycle, states were required by law to pass through no less than 80 percent of total grant funding to local jurisdictions within 60 days of the award. In order for states to retain grant funds beyond the 60-day limit, ODP requires states and local jurisdictions to sign a memorandum of understanding (MOU) indicating that states may retain—at the local jurisdiction’s request—some or all funds in order to make purchases on a local jurisdiction’s behalf. The MOU must specify the amount of funds to be retained by the state. This requirement may pose problems for some states.
A state official in one state we visited said that, while the state’s centralized purchasing system had worked well in prior years, the state has discontinued using it because of the MOU requirement, since establishing MOUs with every locality might take years. The state transferred the fiscal year 2004 grant funds to local jurisdictions so they can make their own purchases. In another state, officials expressed concern that this requirement would negatively affect their ability to maintain homeland security training provided to local jurisdictions at state colleges that had been previously funded from local jurisdictions’ grant funds. In the fiscal year 2005 grant program guidelines, states were encouraged, but not required, to submit their MOUs to ODP for review by DHS’s Office of General Counsel to ensure compliance. In distributing federal funds to states to assist first responders in preventing, preparing for, and responding to terrorist threats, the federal government has required states to develop strategies to address their homeland security needs as a condition for receiving funding. The details of this federal requirement have also evolved over time. Before the events of September 11, 2001, ODP required states to develop homeland security strategies that would provide a roadmap of where each state should target grant funds. To assist the states in developing these strategies, state agencies and local jurisdictions were directed to conduct needs assessments on the basis of their own threat and vulnerability assessments. The needs assessments were to include related equipment, training, exercise, technical assistance, and research and development needs. In addition, state and local officials were to identify current and required capabilities of first responders to help determine gaps in capabilities. 
In fiscal year 2003, ODP directed the states to update their homeland security strategies to better reflect post-September 11 realities and to identify progress on the priorities originally outlined in the initial strategies. As required by statute, completion and approval of these updated strategies were a condition for awarding fiscal year 2004 grant funds. ODP has also revised its approach to how states and localities report on grant spending and use. ODP took steps to shift the emphasis away from reporting on specific items purchased and toward results-based reporting on the impact of states’ expenditures on preparedness. ODP maintains an authorized equipment list that includes such diverse items as personal protection suits for dealing with hazardous materials and contamination, bomb response vehicles, and medical supplies. This information is in turn listed on the budget worksheets that localities submitted to states for their review. Until the fiscal year 2004 grant cycle, states were required to submit budget detail worksheets that itemized each item to be purchased under first responder grants. ODP found, however, that, while the worksheets reflected the number and cost of specific items that states and localities planned to purchase, neither states nor ODP had a reporting mechanism to specifically assess how well these purchases would, in the aggregate, meet preparedness planning needs or priorities, or the goals and objectives contained in state or urban area homeland security strategies. Accordingly, ODP revised its approach for fiscal year 2004 and required that states, instead of submitting budget detail worksheets to ODP, submit new “Initial Strategy Implementation Plans” (ISIP). These ISIPs are intended to show how planned grant expenditures for all funds received are linked to one or more larger projects, which in turn support specific goals and objectives in either a state or urban area homeland security strategy. 
In addition to the ISIPs, ODP now requires the states to submit biannual strategy implementation reports showing how the actual expenditure of grant funds at both the state and local levels was linked by projects to the goals and objectives in the state and urban area strategy. Reports by GAO and DHS’s Office of Inspector General, as well as by the House Homeland Security Committee, have identified the need for clear national guidance in defining the appropriate level of preparedness and setting priorities to achieve it. The lack of such guidance has in the past been identified as hindering state and local efforts to prioritize their needs and plan how best to allocate their homeland security funding. We have reported that national preparedness standards that can be used to assess existing first responder capacities, identify gaps in those capacities, and measure progress in achieving specific performance goals are essential to effectively managing federal first responder grant funds as well as to the ability to measure progress and provide accountability for the use of public funds. ODP has responded to the calls for national preparedness standards and specifically to HSPD-8 that required DHS to develop a new national preparedness goal and performance measures, standards for preparedness assessments and strategies, and a system for assessing the nation’s overall preparedness. In order to develop performance standards that will allow ODP to measure the nation’s success in achieving this goal, ODP is using a capabilities-based planning approach—one that defines the capabilities required by states and local jurisdictions to respond effectively to likely threats. These capability requirements are to establish the minimum levels of capability required to provide a reasonable assurance of success against a standardized set of 15 scenarios for threats and hazards of national significance. 
The scenarios include such potential emergencies as a biological, nuclear or cyber attack, two natural disasters, and a flu pandemic. The objective is to develop the minimum number of credible, high-consequence scenarios needed to identify a broad range of prevention and response requirements. As part of the HSPD-8 implementation process, in January 2005, ODP issued a list of capability requirements in keeping with a requirement of the fiscal year 2005 DHS appropriations act. To help define the capabilities that jurisdictions should set as targets, ODP first defined the essential tasks that need to be performed from the incident scene to the national level for major events illustrated by the 15 scenarios. It then developed a Target Capabilities List that identifies 36 areas in which responding agencies are expected to be proficient in order to perform these critical tasks. ODP further plans to develop performance measures, on the basis of the target capability standards that define the minimal acceptable proficiency required in performing the tasks outlined in the task list. According to ODP’s plan, the measures will allow the development of a rating methodology that incorporates preparedness resources and information about overall performance into a summary report that represents a jurisdiction’s or agency’s ability to perform essential prevention, response, or recovery tasks. The office acknowledges that this schedule may result in a product that requires future incremental refinements but has concluded that this is preferable to spending years attempting to develop a “perfect” process. On March 31, 2005, DHS issued a document entitled “Interim National Preparedness Goal” that reflects the department’s progress in developing readiness targets, priorities, standards for preparedness assessments and strategies, and a system for assessing the nation’s overall level of preparedness. 
The document also states that National Preparedness Guidance will follow within 2 weeks. This guidance is to include, in DHS’s words, “detailed instructions on how communities can use the Goal and a description of how the Goal will generally be used in the future to allocate Federal preparedness assistance.” DHS expects to issue a Final Goal and an updated target capabilities list on October 1, 2005. Over the next several months, ODP plans to work with its stakeholders to identify the levels of capabilities that various types of jurisdictions should possess in order for the nation to reach the desired state of national preparedness. In May 2004, we reported on the use of first responder grant monies in the National Capital Region, which includes the District of Columbia and specified surrounding jurisdictions in the states of Maryland and Virginia. We found that the grant monies were not being spent in accordance with a regional plan for their use. To ensure that emergency preparedness grants and associated funds were managed in a way that maximizes their effectiveness, we recommended that the Secretary of Homeland Security work with NCR jurisdictions to develop a coordinated strategic plan to establish goals and priorities for the use of funds, monitor the plan’s implementation to ensure that funds are used in ways that are not unnecessarily duplicative, and evaluate the effectiveness of expenditures in addressing gaps in preparedness. DHS and the Senior Policy Group of the National Capital Region generally agreed with our recommendations and have been working to implement them. In our report on interoperable communications for first responders, we found that federal assistance programs to state and local government did not fully support regional planning for communications interoperability. We also found that federal grants that support interoperability had inconsistent requirements to tie funding to interoperable communications plans. 
In addition, uncoordinated federal- and state-level grant reviews limited the government’s ability to ensure that federal funds were used to effectively support improved regional and statewide communications systems. We recommended that DHS grant guidance encourage states to establish a single statewide body responsible for interoperable communications that would prepare a single comprehensive statewide interoperability plan for federal, state, and local communications systems in all frequency bands. We also recommended that, at the appropriate time, DHS grant guidance require that federal grant funding for interoperable communications equipment be approved only upon certification by the statewide body that such grant applications were in conformance with the statewide interoperability plan. In its comments on our draft report, DHS did not address the second recommendation. However, on November 1, 2004, the SAFECOM office within DHS’s Office of Interoperability and Compatibility issued its methodology for developing a statewide interoperability communications plan. In summary, Mr. Chairman, since the tragic events of September 11, 2001, the federal government has dramatically increased the resources and attention it has devoted to national preparedness and the capabilities of first responders. The grant programs managed by ODP have expanded rapidly in their scope and funding levels. Over the 3½ years since the terrorist attacks, Congress, ODP, states, and local governments encountered obstacles, some of them frustrating and unexpected, in delivering grant funds to their ultimate recipients in a timely manner and ensuring they are used most effectively. All levels of government have attempted to address these obstacles and succeeded in resolving or ameliorating many of them. Some of the changes made are relatively new; thus, it is still too early to determine if they will have the desired outcome. 
ODP’s focus has changed over time from examining and approving, for example, specific items of equipment proposed for purchase under first responder grants to defining the capabilities that states and local jurisdictions need to attain—that is, establishing performance standards. Such a results-based orientation could prove to be the most practical and effective grants management approach at the federal level to help ensure accountability and effectiveness of results. DHS must also continue to ensure that an effective system exists at the state and local level for monitoring and accounting for the limited federal funds intended to enhance the nation’s ability to respond to terrorist attacks or natural disasters. DHS’s task of defining a national preparedness goal and translating that definition into capabilities that are meaningful and readily transferable to the wide variety of local jurisdictions around the nation is still not complete. As the department has acknowledged, the process will necessarily be iterative. As we have stressed before, during this process DHS must continue to listen and respond constructively to the concerns of states, local jurisdictions, and other interested parties. Such collaboration will be essential to ensuring that the nation’s emergency response capabilities are appropriately identified, assessed, and strengthened. At the same time, state, local, and tribal governments and the private sector must recognize that the process is iterative and will include periodic adjustments and refinements, and that risks are not equally distributed across the nation. 
As we have noted previously, it is important that the quest for speed in distributing and using federal first responder grants does not hamper the planning and accountability needed to ensure that the funds are spent on the basis of a comprehensive, well-coordinated plan to provide first responders with the equipment, skills, and training needed to be able to respond quickly and effectively to a range of emergencies, including, where appropriate, major natural disasters and terrorist attacks. The challenges we noted in developing effective interoperable communications for first responders are applicable to developing effective first responder capabilities for major emergencies, regardless of cause. A fundamental challenge has been limited regional and statewide planning, coordination, and cooperation. No one level of government can successfully address the challenges of developing needed first responder capabilities alone. The federal government can play a leadership role in developing requirements and providing support for state, regional, and local governments to: assess first responder capabilities; identify gaps in meeting those capabilities; develop coordinated plans and priorities for closing those gaps; and assess success in developing and maintaining the needed capabilities. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have. For further information on this testimony, please contact William O. Jenkins, Jr., at (202) 512-8777. Individuals making key contributions to this testimony included Amy Bernstein, David Brown, Frances Cook, James Cook, Christopher Keisling, Katrina Moss, Sandra Tasic, John Vocino, and Robert White. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal years 2002 through 2005, the Office for Domestic Preparedness (ODP) within the Department of Homeland Security managed first responder grants totaling approximately $10.5 billion. The bulk of this funding has been for statewide grants through the State Homeland Security Grant Program and urban area grants through the Urban Areas Security Initiative. This testimony provides information on the history and evolution of these two grant programs, particularly with respect to ODP grant award procedures; timelines for awarding and transferring grant funds; and accountability for effective use of grant funds. Federal first responder grants are a means of achieving an important goal--enhancing the ability of first responders to prevent, prepare for, respond to, and recover from terrorist and other incidents with well-planned, well-coordinated efforts that involve a variety of first responders from multiple jurisdictions. ODP has led federal efforts to develop these capabilities in part through its management of federal first responder grants. ODP has modified grant award procedures for states and localities. ODP developed procedures and guidelines for awarding the State Homeland Security Grant Program and the Urban Areas Security Initiative grants to states, and for determining how states and localities could expend funds and seek reimbursement for first responder equipment or services they purchased. As part of this process, ODP gave states some flexibility by allowing them to determine how grant funds were to be managed and distributed within their states and whether purchases would be made locally or at the state level. Congress, ODP, states, and localities have acted to expedite grant awards by setting time limits for the grant application, award, and distribution processes and by instituting other procedures. 
Nevertheless, the ability of states and localities to spend grant funds expeditiously was complicated by the need to fulfill state and local legal and procurement requirements, which in some cases added months to the purchasing process. Some states have modified their procurement practices, and ODP is identifying best practices to aid in the effort, but challenges remain. ODP has taken steps to improve accountability in the state preparedness planning process, in part by requiring states to update homeland security strategies. In tandem with this effort, ODP revised its grant-reporting method, moving away from requiring states, localities, and urban areas to submit itemized lists of first responder equipment they plan to purchase towards a more results-based approach, whereby grant managers at all levels must demonstrate how grant expenditures are linked to larger projects that support goals in state homeland security strategies. As part of a broader effort to meet mandates contained in Homeland Security Presidential Directive 8, addressing national preparedness goals for all hazards, ODP has taken steps to ensure more assessments of first responder needs are conducted on a national basis. Finally, ODP recently issued interim national preparedness goals that reflect the department's progress in developing readiness targets, priorities, standards for preparedness assessments and strategies, and a system for assessing the nation's overall level of preparedness. However, DHS's task of finalizing these goals and translating them into capabilities that are meaningful and readily transferable to the wide variety of local jurisdictions around the nation is still not complete.
HIPAA authorized the HCFAC program to consolidate and strengthen ongoing efforts to combat fraud and abuse in health care programs and increase resources for fighting health care fraud. The Secretary of HHS, through the HHS OIG, and the Attorney General administer the HCFAC program. The HCFAC program goals are to: coordinate federal, state, and local law enforcement efforts to control fraud and abuse associated with health plans; conduct investigations, audits, and other studies of delivery and payment for health care for the United States; facilitate the enforcement of the civil, criminal, and administrative statutes applicable to health care; provide guidance to the health care industry, including the issuance of advisory opinions, safe harbor notices, and special fraud alerts; and establish a national database of adverse actions against health care providers. Figure 1 below provides an overview of the HCFAC funding stream, including the related deposits, the allocation of appropriated funds to carry out federal health care law enforcement activities, and the reporting mandate. The types of collections deposited to the HI trust fund and appropriations from this fund, including related expenditures, are discussed below. Criminal fines. DOJ prosecutes entities or persons that are involved in commission of a federal health care offense, such as mail fraud related to a health care program. Courts assess criminal fines upon which the criminal debtor is ordered to submit payment(s) to the United States District Court where the case was prosecuted. Each District Court coordinates with DOJ’s local United States Attorneys Office (USAO) to communicate collections received. The Executive Office for United States Attorneys reports collection data on a quarterly basis to the Bureau of the Public Debt for deposit into the HI trust fund. Civil monetary penalties. 
The Social Security Act authorizes the Secretary of HHS to impose civil monetary penalties for improper claims and other violations by health care providers, facilities, and other parties. Centers for Medicare & Medicaid Services (CMS) regional offices impose some civil monetary penalties. CMS’s Office of Financial Management collects the civil monetary penalties imposed on behalf of the Secretary of HHS and allocates payments received based on information the regional offices record in their data collection system. The Office of Financial Management reports collections for civil monetary penalties on a daily basis to the Bureau of the Public Debt for deposit into the HI trust fund. Forfeitures of property. DOJ prosecutions of entities or persons that are involved in a federal health care offense can result in the forfeiture of property. HHS and DOJ reported no property forfeitures creditable to the HI trust fund under HIPAA in the HCFAC reports for fiscal years 2008 and 2009. Penalties and multiple damages. Courts can impose penalties and multiple damages as a result of HHS and DOJ civil suits against those who have knowingly made false health care claims against the government, such as submitting claims for medical services that were not provided. Of all civil debt collections received, DOJ is entitled to keep 3 percent in its Working Capital Fund for expenses incurred in processing and tracking civil and criminal debt collection litigations. Both CMS and DOJ report collections for penalties and multiple damages on a continuous basis to the Bureau of the Public Debt for deposit into the HI trust fund. Gifts and bequests. CMS occasionally receives gifts and bequests and equally splits the amount received between its Medicare Part A and Medicare Part B programs. Gifts and bequests are donations received from individuals or entities, usually in the form of checks. 
Upon receipt of a donation, CMS records the amount in its accounting system and reports it to the Bureau of the Public Debt for deposit into the HI trust fund. In addition to the types of deposits authorized by HIPAA and discussed above, HHS and DOJ report other types of collections in connection with health care fraud activities, such as HHS OIG audit disallowances and court-awarded restitution and compensatory damages. These types of collections represent amounts recovered by HHS and DOJ as a result of health care enforcement activities. These amounts are returned to the HI trust fund to the extent that they represent repayments to Medicare. Funds for the HCFAC program are appropriated from the HI trust fund to an expenditure account, referred to as the Health Care Fraud and Abuse Control Account (HCFAC account), maintained within the HI trust fund. Annually, the HHS Secretary and the Attorney General jointly certify amounts appropriated from the HI trust fund to the HCFAC account as necessary to finance health care fraud and abuse control activities based on statutory limits. HIPAA, as amended, prescribes the maximum amount that may be certified in a given fiscal year. Any unexpended amounts are carried forward to the next fiscal year. Once HCFAC funds have been certified, CMS’s Division of Accounting Operations performs the accounting for appropriations transferred to the HCFAC account. CMS makes funds available by creating allotments in its accounting system to fund related HCFAC expenditures. CMS provides funds to other HHS components and DOJ through intra- and interagency agreements, as shown in figure 1. This process requires HHS and DOJ to bill CMS through the Intra-governmental Payment and Collection (IPAC) System to obtain payment from their allocation of HCFAC funds. In addition to CMS’s central accounting for HCFAC funds, both HHS and DOJ components have processes to separately manage their allotted HCFAC amounts. 
In general, these processes include authorizing HCFAC-related expenditures, recording applicable payroll and nonpayroll expenditures incurred to a designated HCFAC code, and reporting any unexpended amounts to be carried forward to the next fiscal year. For fiscal year 2009, the Secretary of HHS and the Attorney General certified $266.4 million in mandatory funding to be appropriated from the HI trust fund to the HCFAC account. Additionally, Congress appropriated $198 million in discretionary funding to that account in response to HHS’s fiscal year 2009 budget request, to fund HCFAC program integrity activities. As such, the total amount appropriated to the HCFAC account for fiscal year 2009 was $464.4 million. Figure 2 provides a historical trend of the amounts appropriated to the HCFAC account over the past 13 fiscal years. Funds were first appropriated to HCFAC in fiscal year 1997. HIPAA limited the amounts appropriated for fiscal years 1998 through 2003 to an amount equal to the limit for the preceding fiscal year plus an additional 15 percent. For fiscal years 2004 through 2006, the amount made available was capped at the 2003 limit. In fiscal year 2006, the Tax Relief and Health Care Act (TRHCA) amended HIPAA so that funds allotted from the HCFAC account are available until expended. TRHCA also allowed for yearly increases to the HCFAC account for fiscal years 2007 through 2010, based on the change in the consumer price index for all urban consumers (CPI-U) over the previous fiscal year. Beyond 2010, the Patient Protection and Affordable Care Act, as amended by the Health Care and Education Reconciliation Act of 2010, raised the limit on funds that may be certified for the HCFAC account by the Secretary of HHS and the Attorney General by an additional $350 million over the next 10 years, beginning in fiscal year 2011. 
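The cap schedule described above (the preceding year's limit plus 15 percent through fiscal year 2003, then frozen at the 2003 level through 2006) can be sketched as follows. The $100 million fiscal year 1997 base used below is hypothetical, and the CPI-U adjustments for fiscal years 2007 through 2010 are omitted because they depend on actual price-index data:

```python
def hipaa_cap_schedule(fy1997_limit):
    """Sketch of the HIPAA appropriation caps described in this report.

    FY1998-2003: each year's cap equals the preceding year's cap plus 15%.
    FY2004-2006: frozen at the 2003 limit.
    """
    caps = {1997: fy1997_limit}
    for fy in range(1998, 2004):
        caps[fy] = caps[fy - 1] * 1.15  # preceding year plus 15 percent
    for fy in range(2004, 2007):
        caps[fy] = caps[2003]  # capped at the 2003 limit
    return caps

# Hypothetical $100 million FY1997 base: the FY2003 cap would compound
# to 100 * 1.15**6, about $231.3 million, and FY2004-2006 stay there.
caps = hipaa_cap_schedule(100.0)
print(round(caps[2003], 1), round(caps[2006], 1))  # prints: 231.3 231.3
```

Six years of 15 percent compounding slightly more than doubles the cap, which is consistent with the upward trend this report describes for the early years of the program.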
The annual allocation of appropriation amounts to the HHS and DOJ components is intended to support a variety of anti-fraud and anti-abuse activities. For example, in May 2009, HHS and DOJ established a task force—the Health Care Fraud Prevention and Enforcement Action Team (HEAT)—composed of top-level law enforcement and professional staff from both agencies to prevent health care fraud and enforce current anti-fraud laws around the country. Other examples include HHS OIG investigations, audits, and evaluations that identify vulnerabilities for questionable or fraudulent financial practices related to Medicaid outpatient prescription drug expenditures and Medicare contractor costs. Similarly, DOJ’s USAOs use HCFAC funding to support civil and criminal health care fraud and abuse litigation. HCFAC funds are also used to train attorneys, investigators, and auditors in the investigation and prosecution of health care fraud and abuse; prosecute health care matters through criminal, civil, and administrative proceedings; and conduct investigations, financial, and performance audits of health care programs, inspections, and other evaluations. In our April 2005 report, we identified weaknesses related to not properly capturing certain expenditure data in agency information systems, nonadherence to accounting policy for select HCFAC expenditures, and lengthy HCFAC report review processes. We made recommendations to HHS and DOJ to improve procedures for recording HCFAC expenditures and issuing the annual HCFAC report. Based on our analysis of documentation, HHS and DOJ took actions that addressed three of the four recommendations. Both agencies disagreed with our recommendation to notify Congress of delays in issuing the HCFAC report by the mandated deadline and thus did not take action. Each of the recommendations and applicable actions taken are described further below. Record staff hours in workload tracking systems. 
In our 2005 report, we found that two HHS OIG components—Office of Evaluations and Inspections and Office of Investigations—had not recorded all staff hours in their workload tracking systems, which are used to monitor actual hours spent on HCFAC activities. HHS OIG uses the workload tracking systems to monitor HCFAC payroll expenditures, and incomplete information could hinder those efforts. We recommended that the HHS Inspector General require all HHS OIG components to develop procedures for ensuring that all key staff hours spent on HCFAC activities are recorded in the HHS OIG workload tracking systems. In April 2006, the HHS OIG’s Office of Evaluation and Inspections updated its procedures for entering information into its workload tracking system, including guidance related to the completion of timesheets. The procedures instruct employees to record time to specific inspection codes and instruct managers to review staff time recorded in the system. In addition, HHS OIG’s Office of Investigations updated its procedures in October 2009, which require all Office of Investigations personnel to record time and attendance in its workload tracking system. We determined that HHS OIG components’ actions substantially addressed the recommendation. Record expenditure data under the correct account class. In our 2005 report, we noted that only one of the four DOJ components receiving HCFAC funds properly recorded expenditures as required by DOJ policies and procedures. We recommended that the Attorney General develop monitoring procedures to ensure that DOJ components record key HCFAC program expenditure data under the appropriate HCFAC account class in DOJ’s accounting system. In April 2005, DOJ updated its policies and procedures that require individual components to monitor obligations recorded in the accounting system on a quarterly basis. 
Further, in fiscal year 2009 the Executive Office for United States Attorneys provided procedures to the USAOs, which receive the largest portion of the HCFAC allocation, on how to charge HCFAC expenditures using the proper program code. According to DOJ officials, component officials review system reports on a regular basis to verify that HCFAC obligations and expenditures are correctly reflected in the accounting system. We determined that DOJ’s actions were sufficient to close the recommendation. Develop a more expedited review process. In our 2005 report, we noted that a lengthy review process within HHS and DOJ resulted in failure to meet the mandated annual January 1 deadline for reporting HCFAC activities. For example, the fiscal year 2003 HCFAC report, the most recent report at the time of that review, was issued 1 year after the mandated reporting date. We recommended that the Secretary of HHS and the Attorney General develop a more expedited review process for the joint annual HCFAC reports. In June 2010, DOJ issued a Report Completion Guide, which establishes an expedited review process with instructions and time frames for submitting information to complete the annual HCFAC report. The guide introduced a new process whereby DOJ and HHS components utilized a shared website to distribute documents for edit and review. The guide also established time frames for issuing the fiscal year 2010 HCFAC report by January 1, 2011. Using these new time frames, HHS and DOJ jointly issued the fiscal year 2010 HCFAC report on January 24, 2011, only 23 days after the mandated reporting date. According to DOJ officials responsible for preparing the HCFAC report, they intend to use these time frames to meet the mandated deadline when preparing future year reports. We determined that HHS and DOJ’s efforts meet the intent of our recommendation. 
However, as discussed later in this report, although timeliness is important, ensuring that the report is accurate and reliable remains a concern.

Notify Congress of delays in report issuance. In our 2005 report, we noted that repeated delays in issuing the joint annual HCFAC report reduce the relevance of the data being reported. We recommended that the Secretary of HHS and the Attorney General notify congressional oversight committees of delays in issuing the annual report within 1 month of missing the January 1 deadline. HHS and DOJ did not concur with this recommendation. However, as stated above, both HHS and DOJ developed and implemented a new timeline of dates for editing and reviewing the annual HCFAC report, which resulted in issuing the fiscal year 2010 annual HCFAC report within 1 month of the mandated deadline. Continuing to take steps to meet these internal deadlines will be necessary to issue the HCFAC report by the mandated deadline and provide timely, relevant information to Congress.

Our review of HHS and DOJ policies and procedures showed that both agencies had not designed sufficient controls to help ensure that HCFAC deposits and expenditures were accurately reported. GAO’s Standards for Internal Control in the Federal Government provides that management should establish control mechanisms and activities and monitor and evaluate those controls. Specifically, during our review we found that HHS and DOJ did not have sufficient controls in their policies and procedures with respect to (1) maintaining and retaining supporting documentation for HCFAC deposits and expenditures and (2) monitoring HCFAC deposits and expenditures to help ensure accurate reporting. From our review of the underlying documentation supporting HCFAC activities and nongeneralizable samples of deposits and expenditures, we identified instances in which these design deficiencies resulted in HHS’s and DOJ’s inability to support reported amounts for HCFAC expenditures.
We also found errors in reported HCFAC amounts. While HHS and DOJ designed controls, incorporated into their policies and procedures, generally requiring the retention of documentation for 6 years, the policies and procedures for CMS, the Administration on Aging, and DOJ did not provide sufficient detail on where these documents were to be filed, who should be responsible for maintaining them, or both. Such details would help ensure accountability and adequate support for HCFAC deposits and expenditures. Our review found that while the HHS OIG and Office of the General Counsel policies and procedures for documentation contained sufficient controls as to the type of documentation to be retained, the retention period, the location of records, and the person responsible for maintaining the records, the CMS and Administration on Aging policies and procedures lacked some of these controls. Specifically, CMS policies and procedures for documentation of HCFAC deposits and expenditures did not identify the person responsible for maintaining supporting documents. We found the same weakness in the Administration on Aging’s policies and procedures for documentation of HCFAC expenditures. Further, the Administration on Aging’s policies and procedures did not specify where those documents should be filed. Officials from the Administration on Aging told us that they are in the process of revising their policies and procedures to address these issues. In its comments, the Administration on Aging indicated that it expects to incorporate these changes by summer 2011. We also reviewed DOJ’s controls for retention of documents related to HCFAC deposits and expenditures. DOJ’s procedures for deposits identified controls related to the type of documentation to be retained, the retention period, and the location of records, but they did not specify the person responsible for maintaining supporting documents.
Although DOJ’s departmentwide expenditure procedures identified controls related to the type of documentation and the retention period, and indicated that documents should be maintained in the obligation file, the procedures did not specify the location of the obligation file or the person responsible for maintaining the records within each office. During our review, we found instances at HHS and DOJ where documentation was not available to support expenditures. For example, we found that HHS’s Administration on Aging did not maintain, and therefore could not provide, underlying documentation to support how the estimated payroll percentages for fiscal years 2008 and 2009 were derived; these percentages are used to charge payroll expenditures against the HCFAC account on a biweekly basis. In fiscal year 2008, for example, the Administration on Aging charged approximately 29 percent of its total HCFAC allocation, or $879,607, to payroll expenses, an amount that could not be fully supported because of the lack of documentation. Therefore, we were unable to verify that such expenditures were justified. In addition, during our review, we found that DOJ could not provide sufficient documentation, such as time and attendance reports, workload tracking system reports, and records of actual payroll disbursements, to support 12 nongeneralizable payroll sample items selected for fiscal years 2008 and 2009. Further, DOJ could not provide documentation to support unexpended amounts carried forward from fiscal year 2008 to fiscal year 2009, totaling $522,278. At the end of each fiscal year, DOJ communicates to HHS the amount of unused funds so that HHS can carry them forward to the following fiscal year via an interagency agreement. To report this amount, DOJ’s Justice Management Division compiles obligation data provided by the different DOJ components.
However, DOJ’s Justice Management Division could not locate the documents that supported the amount of funds carried forward in the fiscal year 2009 interagency agreement. GAO’s Standards for Internal Control in the Federal Government provides that internal control be designed to ensure that all transactions and other significant events are clearly documented and that the documentation is readily available for examination. The standards also provide that records should be properly managed and maintained and that documentation should appear in management directives, administrative policies, or operating manuals. Insufficient controls over documentation increase the risk of not having sufficient support to ensure that reported HCFAC amounts are accurate and funds are spent as intended.

HHS’s and DOJ’s procedures did not incorporate sufficient monitoring controls to help ensure that HCFAC deposits and expenditures were accurately reported.

Monitoring of deposits. HHS and DOJ had not designed controls to require the reconciliation of HCFAC deposits recorded in their departmentwide accounting systems to data collection systems or to the HI trust fund statements. Specifically, CMS did not have written procedures that required the reconciliation of civil monetary penalty amounts in the CMS regional offices’ data collection system to CMS’s accounting system. We found two instances from our fiscal year 2009 nongeneralizable sample of deposits where CMS regional offices had made adjustments that were recorded in their data collection system but not communicated to the Office of Financial Management for recording in CMS’s accounting system, which is used as a source to compile the data for the HCFAC report. These two instances resulted in a $15,066 overstatement error in CMS’s accounting system, which CMS officials corrected after we brought the errors to their attention.
Although CMS officials stated that they reconcile the data maintained in both systems on a monthly basis, they were not able to provide us an example of these reconciliation reports. In February 2011, CMS officials told us that they were in the process of developing procedures for the Office of Financial Management to require these reconciliations. In addition, DOJ’s Justice Management Division did not have written procedures that included controls to reconcile deposits of the 3 percent portion of penalties and multiple damages reported in the HI trust fund statements to the agency’s accounting system records. For example, for fiscal year 2009 we identified an overstatement of $596,266 in the HI trust fund statements when compared with DOJ’s records. When we inquired about the difference, DOJ officials from the Justice Management Division confirmed that an overstatement had occurred because of adjusting entries that had been communicated to the Bureau of the Public Debt but not captured in the HI trust fund statements. In February 2011, DOJ officials told us that to prevent this from happening in the future, they were in the process of developing procedures that would include monitoring controls for reconciling, on a quarterly basis, penalties and multiple damages between DOJ records and the statements issued by the Bureau of the Public Debt to ensure that reported amounts are accurate and consistent between both agencies.

Monitoring of expenditures. Similarly, certain HHS components and DOJ did not have written procedures that incorporated controls for reconciling or comparing HCFAC staff hours to verify the accuracy of payroll expenditures charged against the HCFAC account. Specifically, HHS’s Administration on Aging did not have controls for monitoring actual HCFAC payroll hours. HHS’s Administration on Aging charges payroll expenditures based on estimates made prior to or after the beginning of the year.
Because the Administration on Aging does not record hours at the HCFAC activity level, it cannot verify that the payroll expenditures charged against the HCFAC account throughout the year are reasonably accurate. Officials at the Administration on Aging told us they believe that tracking hours at the HCFAC activity level would not be cost-effective, nor would it provide results sufficient to justify the costs. However, because the estimated percentage of time charged against the HCFAC account may not represent the actual time spent on HCFAC activities for a given pay period, it is critical that some type of monitoring or verification procedures be designed to help ensure that the payroll expenditures charged to the HCFAC account are reasonable and supported. DOJ’s Civil Rights Division also charges HCFAC payroll expenditures based on estimates. Although Civil Rights Division officials indicated that they track and record actual hours and make adjustments to payroll expenditures if differences are noted, these controls were not documented in DOJ’s policies and procedures. In addition, HHS OIG and the Office of the General Counsel for HHS, as well as DOJ’s USAO and Civil Division, did not have written procedures that included controls for reconciling or comparing HCFAC hours recorded in workload tracking systems to departmentwide payroll or accounting systems. We found instances where staff hours captured in workload tracking systems did not agree with staff hours recorded in departmentwide payroll or accounting systems. For example, we found that the workload tracking system used by the Office of Counsel to the Inspector General included approximately 10 percent fewer hours for fiscal year 2008 and 7 percent fewer hours for fiscal year 2009 than HHS’s payroll system reports.
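The hours comparisons described here reduce to a simple percentage-difference check between two systems' totals for the same staff and period. A minimal sketch in Python (the function name and the hour totals are illustrative assumptions, not data drawn from the agencies' systems):

```python
def hours_shortfall_pct(tracking_hours, payroll_hours):
    """Percent by which workload-tracking hours fall short of (negative:
    exceed) payroll-system hours for the same staff and period."""
    if payroll_hours == 0:
        raise ValueError("payroll hours must be nonzero")
    return (payroll_hours - tracking_hours) / payroll_hours * 100

# Hypothetical totals illustrating a 10 percent shortfall, similar in
# kind to the differences the report describes.
print(round(hours_shortfall_pct(tracking_hours=1800.0, payroll_hours=2000.0), 1))  # 10.0
```

A comparison like this, run each pay period or quarter, would surface the kind of unexplained gaps between workload tracking and payroll systems noted above before they flow into billed expenditures.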
Similarly, we found two instances from our fiscal year 2008 nongeneralizable sample of expenditures where USAO’s workload tracking system included fewer hours than the HCFAC hours recorded and billed in DOJ’s accounting system, which collectively accounted for a 40 percent difference between the two systems. Failure to complete these reconciliations or comparisons could lead to unsubstantiated payroll expenditures being charged to the HCFAC account. HHS OIG and DOJ officials told us that they were aware of the differences. HHS OIG officials indicated that while they did not have procedures that specifically addressed the reconciliation of data captured in their workload tracking systems, they believed they had other compensating controls, such as periodic inspections of timesheets, to mitigate the risk of inconsistent data between systems. They also noted that they were considering taking actions to revise their policies and procedures to add new monitoring controls addressing the need to reconcile data between the systems. Further, DOJ officials indicated that because they spend significantly more resources on HCFAC activities than the sum allocated to DOJ from the HCFAC account, it is not cost-beneficial to require personnel to record their time consistently in both systems. Also, according to DOJ officials, although not formally documented, each component has processes to monitor HCFAC expenditures to ensure that it does not overbill the HCFAC account. For example, the officials stated that the Criminal Division performs quarterly reviews of the percentages used to charge payroll expenditures against the HCFAC account. Not having policies and procedures that ensure sufficient controls over HCFAC expenditures are in place could result in misstatements and ultimately hinder HHS and DOJ managers in preparing meaningful budgets to support future HCFAC funding requests.

Monitoring of annual report compilation.
Also, we found that although DOJ issued the Report Completion Guide in June 2010, which specified time frames for both HHS and DOJ for submitting information to complete the annual HCFAC report, the guide did not require that monitoring control activities, such as comparisons and supervisory reviews, be performed to help ensure that reported amounts were accurately presented. During our review of the HCFAC reports, we found presentation errors of $245.7 million and $717.5 million for fiscal years 2008 and 2009, respectively, in the total amounts reported as transferred to the HI trust fund. For example, for fiscal year 2009 we found that $716.8 million of the $1.0 billion reported in the restitution and compensatory damages line item was not transferred to the HI trust fund as stated in the report. Of the $716.8 million, $441.0 million related to Medicaid, $245.4 million related to Medicare Part B, and $30.4 million represented a double-counting error. The $30.4 million double-counting error related to civil monetary penalties and CMS’s portion of penalties and multiple damages, which were already reported under separate line items. HHS and DOJ disclosed the double-counting error and the Medicaid presentation error in the fiscal year 2010 annual HCFAC report issued on January 24, 2011. Recoveries for Medicare Part B and Medicaid are not transferred to the HI trust fund; instead, they are to be transferred to the Federal Supplementary Medical Insurance (SMI) Trust Fund and the Medicaid appropriation account within CMS, respectively. We found a similar issue in the fiscal year 2008 HCFAC report, where the total amount reported as transferred to the HI trust fund included Medicare Part B recoveries totaling $245.7 million. These inaccuracies overstated the amount of funds transferred to the HI trust fund.
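The fiscal year 2009 figures cited above can be checked arithmetically: the two misclassified recovery streams and the double-counting error together account for the $716.8 million that was reported as transferred to the HI trust fund but was not. A brief worked check (the figures are the report's own; the variable names are ours):

```python
# Components of the fiscal year 2009 presentation error (dollars in millions).
medicaid_recoveries = 441.0         # belong in the Medicaid appropriation account
medicare_part_b_recoveries = 245.4  # belong in the SMI trust fund
double_counting_error = 30.4        # already reported under separate line items

total_misreported = (medicaid_recoveries + medicare_part_b_recoveries
                     + double_counting_error)
print(round(total_misreported, 1))  # 716.8
```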
In addition, CMS officials told us that the amounts reported in the HHS OIG audit disallowances line item, totaling about $662.5 million and $360.2 million for fiscal years 2008 and 2009, respectively, included both Medicare and Medicaid recoveries. As stated above, Medicaid recoveries are to be transferred to the Medicaid appropriation account within CMS rather than the HI trust fund. However, these officials stated that the dollar amount associated with each type of recovery could not be determined because the current system does not readily distinguish between Medicare and Medicaid recoveries for amounts previously reported in the HCFAC report. The report also did not fully disclose that reported amounts included Medicare Part B and Medicaid recoveries, which are not transferred to the HI trust fund. In the fiscal years 2008 and 2009 HCFAC reports, HHS and DOJ incorrectly indicated in footnotes that reported amounts did not include Medicaid funds. CMS officials indicated that they will separately report Medicare and Medicaid recoveries related to HHS OIG audit disallowances in future HCFAC reports. According to CMS officials, they plan to accomplish this by manually tracking Medicare and Medicaid recoveries. In addition, DOJ officials told us that they are in the process of developing written guidance on the preparation of the annual HCFAC report and anticipate issuance by June 2011. GAO’s Standards for Internal Control in the Federal Government provides that internal control should generally be designed to assure that ongoing monitoring occurs in the course of normal operations, including regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. Having detailed written policies and procedures that incorporate these key monitoring controls decreases the risk of reporting inaccurate HCFAC data that could mislead Congress when judging the success of the program.
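At bottom, the reconciliations these monitoring controls call for amount to comparing amounts recorded for the same transactions in two systems and flagging any difference for follow-up. A minimal sketch (the function, transaction identifiers, and amounts are hypothetical illustrations, not taken from CMS or DOJ systems):

```python
def reconcile(system_a, system_b):
    """Compare amounts recorded per transaction in two systems and
    return the differences (positive: system_a overstated)."""
    diffs = {}
    for txn in sorted(set(system_a) | set(system_b)):
        delta = round(system_a.get(txn, 0.0) - system_b.get(txn, 0.0), 2)
        if delta != 0:
            diffs[txn] = delta
    return diffs

# Hypothetical example: an adjustment recorded in a regional data
# collection system but never posted to the accounting system.
accounting = {"CMP-001": 25000.00, "CMP-002": 10066.00}
collection = {"CMP-001": 25000.00, "CMP-002": 5000.00}
print(reconcile(accounting, collection))  # {'CMP-002': 5066.0}
```

Performed monthly or quarterly, a comparison of this kind would have surfaced the unposted adjustments and the trust fund statement overstatement described above before the amounts reached the annual report.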
Although HHS and DOJ have taken action to address our previous recommendations aimed at improving procedures for recording HCFAC expenditures and issuing the annual HCFAC report, we found that controls are not sufficient to ensure that the report is accurate and supported. As HHS and DOJ accelerate the reporting process in an attempt to complete the report by the January 1 mandated reporting deadline, it is important that they establish controls that are designed to provide complete, accurate, and reliable information in the annual HCFAC report. Based on our review of the fiscal years 2008 and 2009 HCFAC reports, HHS and DOJ do not have sufficient controls for maintaining and retaining documentation and performing monitoring such as reconciliation and review activities to ensure accurate and consistent reporting of HCFAC deposits and expenditures. These design weaknesses led to instances where documentation was not readily available and amounts included in the HCFAC reports contained errors. Until HHS and DOJ strengthen their controls for documenting and monitoring HCFAC reporting processes, their ability to provide Congress with an accurate and timely annual report of HCFAC activities will continue to be compromised. Inaccuracies in the mandated annual report limit its usefulness to congressional decision makers and other interested parties. We are making the following 11 recommendations to HHS and DOJ to improve controls over the accounting and reporting of HCFAC activities. 
We recommend that the Secretary of HHS

direct the Administrator of CMS to
- revise procedures for properly maintaining supporting documentation for HCFAC deposits and expenditures, to include specifying the titles of staff responsible for maintaining supporting documentation; and
- develop written procedures that incorporate monitoring controls for HCFAC deposit information recorded in the departmentwide accounting system, including reconciling the deposit data in this system to the regional offices’ data collection system;

direct the Assistant Secretary for Aging to
- revise the Administration on Aging’s procedures for properly maintaining supporting documentation for HCFAC expenditures, to include specifying the titles of staff responsible for maintaining supporting documentation and the location of records; and
- develop written procedures that incorporate monitoring controls to verify that the payroll expenditures charged against HCFAC are reasonable and supported;

direct the Acting General Counsel to develop written procedures that incorporate monitoring controls for the Office of the General Counsel staff hours related to HCFAC activities captured in workload tracking systems, including the reconciliation to staff hours captured in the departmentwide payroll system; and

develop written procedures in collaboration with DOJ that incorporate monitoring controls for preparing the joint annual HCFAC report to help ensure reported amounts are accurate.

We recommend that the HHS Inspector General develop written procedures that incorporate monitoring controls for HHS OIG staff hours related to HCFAC activities captured in workload tracking systems, including the reconciliation to staff hours captured in the departmentwide payroll system.
We recommend that the Attorney General direct the Deputy Assistant Attorney General/Controller to
- revise procedures for properly maintaining supporting documentation for HCFAC deposits and expenditures, to include specifying the titles of staff responsible for maintaining supporting documentation and the location of records;
- develop written procedures that incorporate monitoring controls for reconciling HCFAC deposits of the 3 percent portion of penalties and multiple damages recorded in the departmentwide accounting system to the HI trust fund statements;
- develop written procedures that incorporate monitoring controls to verify that the payroll expenditures charged against HCFAC are reasonable and supported; and
- develop written procedures in collaboration with HHS that incorporate monitoring controls for preparing the joint annual HCFAC report to help ensure reported amounts are accurate.

We provided a draft of this report to HHS and DOJ for review and comment. Written comments from the HHS Assistant Secretary for Legislation are reproduced in appendix III. DOJ indicated via e-mail that it agreed with the findings and the four recommendations we made to revise or develop written procedures that include documentation and monitoring controls for HCFAC activities and reporting. While DOJ did not provide written comments, it provided technical comments, as did HHS, which we incorporated as appropriate. We made a total of 11 recommendations: 7 to HHS and 4 to DOJ. In its written comments, HHS generally agreed with five of the seven recommendations made to it, disagreed with one, and did not address the remaining recommendation. Specifically, HHS agreed with our recommendation related to revising the Administration on Aging’s procedures for properly maintaining supporting documentation for HCFAC expenditures and stated that it plans to incorporate these changes by summer 2011.
Also, HHS agreed with our recommendation to develop written procedures for preparing the joint annual HCFAC report and indicated that it has begun to work with DOJ to improve the Report Completion Guide. In addition, HHS OIG agreed with our recommendation to develop written procedures that incorporate monitoring controls for staff hours related to HCFAC activities recorded in its workload tracking systems and stated that it will incorporate these procedures into its formal policies. Further, the Administration on Aging stated its view that addressing our recommendation to develop written procedures to verify that HCFAC payroll expenditures are reasonable and supported would not provide material results to justify the additional expense and workload. However, the Administration on Aging agreed to explore other options to refine its HCFAC payroll expenditures. The Office of the General Counsel agreed and stated that it had addressed our recommendation to develop written procedures that incorporate monitoring controls for staff hours recorded in the workload tracking system. It stated that on February 2, 2011, it provided us procedures for properly accounting for HCFAC expenditures. While we received procedures from the Office of the General Counsel, these procedures did not address our finding. Instead, the procedures discussed the transfer of payroll expenditures to the HCFAC account. Therefore, we determined that this recommendation has not been addressed. CMS disagreed with our recommendation to revise procedures for maintaining supporting documentation for HCFAC deposits and expenditures, which include specifying the titles of staff responsible for maintaining supporting documentation. CMS stated that the National Archives and Records Administration does not require staff titles on a standard form it prescribes for transferring records to approved records facilities (SF-135) and that CMS requires staff to take records retention training each year. 
CMS also stated that it believes the information maintained is sufficient to ensure accountability and proper and consistent supporting documentation for HCFAC deposits and expenditures. However, we continue to believe that CMS’s policies and procedures for documentation are insufficient, as they do not identify the staff responsible for maintaining documentation as required by National Archives and Records Administration regulations. Also, CMS stated that the creation of a new records retention system for HCFAC records would be duplicative and unnecessary. We do not believe that a new records retention system exclusively for HCFAC records is necessary to achieve accountability for documentation responsibilities. Rather, a modification to CMS’s existing procedures that identifies the responsible staff by title, showing authority levels for properly maintaining supporting documentation, helping provide continuity when staff change positions, and promoting accountability, would be sufficient to address this shortcoming. Lastly, in its comments, CMS did not address our remaining recommendation to develop written procedures that incorporate monitoring controls for HCFAC deposit information recorded in the departmentwide accounting system. However, as we stated in the report, CMS officials told us in February 2011 that they were in the process of developing procedures to require the reconciliation of HCFAC deposit information.

As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, the Attorney General, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9312 or dalykl@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

The objectives of this review were to determine to what extent the Department of Health and Human Services (HHS) and the Department of Justice (DOJ) (1) took action to address the recommendations we made in 2005 and (2) designed effective controls over reporting Health Care Fraud and Abuse Control (HCFAC) program deposits and expenditures for fiscal years 2008 and 2009.

To address the extent to which HHS and DOJ took action to address the recommendations made in 2005, we
- Obtained and reviewed documentation provided by HHS and DOJ, such as policies and procedures and the Report Completion Guide.
- Interviewed officials at HHS and DOJ, including the Acting Deputy Inspector General and the Assistant Director of the Executive Office for United States Attorneys, to identify actions to improve HCFAC program operations.

To address the extent to which HHS and DOJ designed effective controls over reporting HCFAC deposits and expenditures, we
- Obtained and reviewed relevant HHS and DOJ policies and procedures for reporting deposits and expenditures within each agency.
- Used criteria outlined in our Standards for Internal Control in the Federal Government, specifically as they relate to control activities and monitoring, to assess the effectiveness of controls over the reporting of amounts related to deposits and expenditures. We applied these standards to assess whether the design of the controls documented in the policies and procedures reasonably assured accurate and consistent reporting of HCFAC amounts in the joint annual HCFAC report. We did not verify the validity or accuracy of the reported amounts.
- Assessed the reliability of data used to select our nongeneralizable samples by tracing deposit control totals of the electronic databases to the corresponding deposit line item totals reported in the HCFAC reports and the Bureau of the Public Debt’s Federal Hospital Insurance Trust Fund (HI trust fund) statements; obtaining the funding decision memorandum detailing how the HCFAC funds would be distributed between HHS and DOJ for fiscal years 2008 and 2009 to verify the HCFAC funds certified by HHS and DOJ officials; comparing amounts reported in the joint HCFAC reports to the approved funding decision memorandum and comparing amounts from the decision memorandum to Office of Management and Budget (OMB) documentation (Apportionment Schedule SF-132) to verify that the amounts were made available; tracing total expenditure amounts to supporting documentation, including electronic databases, billing packages, and intra- and interagency agreements; and reviewing existing information about the electronic data and the systems that produced them. We determined that the data were sufficiently reliable to select our samples.
- Selected a nongeneralizable stratified random sample for each of the deposit types (gifts and bequests, criminal fines, civil monetary penalties, and penalties and multiple damages) for which HHS and DOJ reported a dollar amount greater than zero in the fiscal years 2008 and 2009 annual HCFAC reports. We selected a total of 47 deposit transactions for fiscal year 2008 and 55 transactions for fiscal year 2009. Transaction selection criteria included various factors such as dollar amounts and transaction volume. For the selected transactions, we reviewed various sources of documentation, depending on the type of deposit, to determine whether dollar amounts were accurately reported.
Examples of supporting documentation for deposits included check registers; electronic fedwires; health care tracking forms used to allocate deposit collections among the various health care programs; judgment orders and agency letters identifying applicable fines and penalties assessed; and collection system query reports. These randomly selected transactions were designed to provide additional details about the processing of those transactions and were not intended to be representative of the universe of HCFAC transactions. See appendix II for information about the universe of transactions and our sampled items.
- Selected a nongeneralizable random sample of expenditures for each of the agency components that were allocated HCFAC appropriation funds as reported in the annual HCFAC reports for fiscal years 2008 and 2009. For the Centers for Medicare & Medicaid Services (CMS) and the United States Attorneys Office (USAO), we obtained electronic databases and selected a nongeneralizable stratified random sample for those agency components. We selected a total of 63 transactions for fiscal year 2008 and 62 transactions for fiscal year 2009 related to payroll and nonpayroll expenditures. Transaction selection criteria included various factors such as dollar amounts, transaction volume, and source of information. For these transactions, we reviewed various sources of documentation, depending on the type of expenditure, to determine whether dollar amounts were accurately reported and whether the use of funds was consistent with the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Examples of supporting documentation for expenditures included workload tracking system and payroll system query reports; time and attendance reports; salary forms; invoices; contracts; and travel vouchers.
These randomly selected transactions were designed to provide additional details about the processing of those transactions and were not intended to be representative of the universe of HCFAC transactions. See appendix II for information about the universe of transactions and our sampled items.
- Performed additional procedures for HHS Office of Inspector General (OIG) payroll transactions, as this component received 67 percent and 42 percent of total HCFAC appropriations allocated for fiscal years 2008 and 2009, respectively. Specifically, we (1) obtained time reports from workload tracking systems for all four OIG components (Office of Audit Services, Office of Investigations, Office of Evaluation and Inspections, and Office of Counsel to the Inspector General) to determine if the projects identified as HCFAC were properly classified and (2) compared the number of hours in the workload tracking systems to the number of hours in the HHS payroll system to determine if the components’ systems included hours for all staff.
- Interviewed agency officials from HHS and DOJ, including budget analysts and financial specialists, to obtain an understanding and clarification of the processes used to report deposits to the HI trust fund and appropriations from this fund, including related expenditures.

We conducted our work from February 2010 through May 2011 in accordance with U.S. generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
During our review of fiscal years 2008 and 2009 Health Care Fraud and Abuse Control (HCFAC) program activities, we selected nongeneralizable samples to further understand the Department of Health and Human Services (HHS) and the Department of Justice (DOJ) procedures for HCFAC deposits and expenditures. For deposits, we stratified the data and selected random transactions, as summarized in table 1 below, for each of the deposit types authorized by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) for which HHS and DOJ reported dollar amounts greater than zero in the fiscal years 2008 and 2009 HCFAC reports. For expenditures, we selected samples from object classes that in aggregate accounted for 50 percent or more of total obligations for each component that received HCFAC funds. Based on dollar amounts, we then selected random transactions, as summarized in table 2 below.

Appendix III: Comments from the Department of Health and Human Services

GENERAL COMMENTS OF THE DEPARTMENT OF HEALTH AND HUMAN SERVICES (HHS) ON THE GOVERNMENT ACCOUNTABILITY OFFICE’S (GAO) DRAFT REPORT ENTITLED, “HEALTH CARE FRAUD AND ABUSE CONTROL PROGRAM: IMPROVEMENTS NEEDED IN CONTROLS OVER REPORTING DEPOSITS AND EXPENDITURES” (GAO-11-446)

The Department appreciates the opportunity to comment on this draft report.

GAO Recommendation No. 1

We recommend that the Secretary of HHS direct the Administrator of CMS to: revise procedures for properly maintaining supporting documentation for HCFAC deposits and expenditures, to include specifying the titles of staff responsible for maintaining supporting documentation; develop written procedures that incorporate monitoring controls for HCFAC deposit information recorded in the departmentwide accounting system, including reconciling the deposit data in this system to the regional offices’ data collection system.

Centers for Medicare and Medicaid Services (CMS) Response

CMS disagrees with GAO’s assertion that we have inadequate controls or procedures regarding retention of HCFAC-related records because we do not explicitly identify the title of staff responsible for management of such records. Standard Form 135, Records Transmittal and Receipt, which is prescribed by the National Archives and Records Administration (NARA), accompanies all records stored in an approved facility, and includes the Agency contact name, office, and telephone number, but NARA does not require staff titles. Each CMS component is responsible for following all federal records retention policies for its HCFAC-related functions. Also, all CMS employees are required to take records retention training each year, and are responsible for appropriately and consistently applying CMS’ Records Management guidelines. CMS believes the information it maintains is sufficient to ensure accountability and proper and consistent supporting documentation for HCFAC deposits and expenditures. CMS believes that specifying the titles of particular CMS staff is unwarranted and that creation of a new records retention system uniquely for HCFAC records would be duplicative and unnecessary.

GAO Recommendation No. 2

We recommend that the Secretary of HHS direct the Assistant Secretary for Aging to: revise the Administration on Aging’s procedures for properly maintaining supporting documentation for HCFAC expenditures, to include specifying the titles of staff responsible for maintaining supporting documentation and the location of records; develop written procedures that incorporate monitoring controls to verify that the payroll expenditures charged against HCFAC are reasonable and supported.

Administration on Aging (AoA) Response

We agree with the first part of the recommendation, on the maintenance of supporting documentation, and will move forward to incorporate these changes into our filing system by Summer 2011, to the extent feasible. Concerning the second part of the recommendation, on developing written procedures incorporating monitoring controls, we have used the same approach to capture HCFAC payroll charges for more than a decade. At the beginning of each year, AoA surveys each of the approximately twenty-seven staff in Headquarters and the Regional Offices who work on HCFAC activities to determine the percentage of time each individual estimates that they will spend, on average, over the entire year on HCFAC. The Accounting for Pay System (AFPS) is then used to allocate the same estimated percentage of the person’s pay to HCFAC funding each pay period, regardless of the number of hours the person actually works on HCFAC during those specific two weeks. The percentages allocated for most staff are small—no more than ten to twenty-five percent in most cases.
In the aggregate, this resulted in approximately 29 percent, or $879,607, of AoA’s $3.1 million FY 2008 HCFAC allocation being used for personnel costs, a reasonable percentage given that these dollars are used to provide administrative and related support for AoA’s Senior Medicare Patrol anti-fraud education program. AoA uses the Department’s ITAS system to track actual hours worked by employees, and this system does not have the ability to track hours spent on one type of activity versus another. To implement the type of controls that GAO recommends would therefore require the establishment of a completely new, wholly separate tracking system from that used by the Department for time and attendance purposes. Such a system would require employees to track and log hourly HCFAC activity, to cumulate and report that activity to a central tracking point of contact, and then require this information to periodically be used to adjust the labor distributions in the accounting system for each of those individuals. While this approach could result in a more accurate accounting, it would do so at a price of a very substantially increased workload at every level, disproportionate to the amount of HCFAC funding received. Further, some time ago, AoA eliminated requirements that employees sign in and out and keep a written record of their time and attendance in connection with collective bargaining agreements. AoA does not believe that the penny-wise approach favored by GAO would have sufficiently material results as to justify the additional expense and workload. AoA is willing, however, to engage in exploratory discussions with other Operating Divisions within the Department who may be faced with similar situations—whether or not related to HCFAC dollars—to determine if there are other approaches which might allow it either to further refine the payroll charges or to further adjust a subsequent year’s estimated percent of time spent on HCFAC.
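The fixed estimated-percentage allocation AoA describes can be sketched in a few lines. The employee names, per-period pay amounts, and percentages below are hypothetical, and this is an illustration of the described approach, not the actual AFPS logic.

```python
# Illustrative sketch: charge the same estimated percentage of each pay
# period's salary to HCFAC, regardless of hours actually worked on HCFAC
# that period, as AoA describes. All figures are hypothetical.
staff_estimates = {
    # employee: (pay per period in dollars, estimated annual HCFAC share)
    "Analyst A": (4_000.00, 0.10),
    "Analyst B": (3_500.00, 0.25),
    "Program Specialist": (4_200.00, 0.15),
}

def hcfac_charge_per_period(estimates):
    """Apply each employee's fixed estimated share to that period's pay."""
    return {name: round(pay * share, 2)
            for name, (pay, share) in estimates.items()}

charges = hcfac_charge_per_period(staff_estimates)
total = round(sum(charges.values()), 2)
print(charges)  # Analyst A -> 400.0 after rounding
print(total)
```

The simplicity is the point of AoA's argument: because the charge is a constant share of pay, no per-period hours tracking is needed, but the charge also cannot reflect period-to-period variation in actual HCFAC work.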
If a less resource intensive solution could be identified, AoA would be open to its implementation.

GAO Recommendation No. 3

We recommend that the Secretary of HHS direct the Acting General Counsel to: develop written procedures that incorporate monitoring controls for the Office of the General Counsel staff hours related to HCFAC activities captured in workload tracking systems, including the reconciliation to staff hours captured in the departmentwide payroll system.

Office of the General Counsel (OGC) Response

At the time of this review, OGC did not have written procedures in place to clarify (1) its method of using Practice Manager (PM) workload reports to transfer pay between non-HCFAC and HCFAC-specific Common Accounting Numbers (CANs) in the Department’s Accounting for Pay System (AFPS) and (2) its corresponding reconciliation process. Financial data reconciliation is common practice for government budget staff as part of their regular duties and responsibilities as budget analysts and thus routinely occurs in the absence of specific documented procedures. Nonetheless, prior to the issuance of this draft report, OGC developed such policy guidance in order to detail its procedures for properly accounting for its associated cost of work performed on behalf of HCFAC. OGC provided this HCFAC Account Expenditures Policy Guidance to GAO on February 2, 2011.

GAO Recommendation No. 4

We recommend that the Secretary of HHS develop written procedures in collaboration with DOJ that incorporate monitoring controls for preparing the joint annual HCFAC report to help ensure reported amounts are accurate.
HHS Response

The Department agrees that having written procedures, or guidance, for the joint annual HCFAC report will work to ensure accurate reporting. To further this goal, HHS and DOJ developed the Department of Health and Human Services and Department of Justice Annual Report on the Health Care Fraud and Abuse Control Program: Report Completion Guide as part of the FY 2010 HCFAC Report process. DOJ shared this manual with GAO in the fall of 2010. However, in response to this GAO report, HHS and DOJ have been working collaboratively to improve the content of this manual, specifically to ensure the accuracy of the numbers included in the “Monetary Results” table of the annual HCFAC report. The updated manual will include a detailed matrix that describes each transfer or deposit figure included in the “Monetary Results” table of the HCFAC report, which agency is responsible for reporting that figure, the source of the data, and who is responsible for verifying the final numbers published in the Annual Report. This guidance will be distributed annually to all relevant HHS Operating Divisions as well as all relevant components within DOJ.

GAO Recommendation No. 5

We recommend that the HHS Inspector General (OIG) develop written procedures that incorporate monitoring controls for HHS OIG staff hours related to HCFAC activities captured in workload tracking systems, including the reconciliation to staff hours captured in the departmentwide payroll system.

Office of Inspector General (OIG) Response

HHS OIG concurs with this recommendation. OIG continues to assert that our workload tracking accurately captures total staff hours and the staff hours allocable to HCFAC-related activities.
Nevertheless, HHS OIG agrees to incorporate these implemented procedures into formal written policies that will include reconciliation of overall staff hours to the departmentwide payroll system.

In addition to the contact listed above, Carla J. Lewis (Assistant Director), Maria C. Belaval, Sharon O. Byrd, William L. Evans, Maria Hasan, Christopher N. Howard, Jason S. Kirwan, Mitchell D. Owings, and Nina M. Rostro made significant contributions to this report.
To help combat fraud and abuse in health care programs, including Medicare and Medicaid, Congress enacted the Health Care Fraud and Abuse Control (HCFAC) program as part of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA requires that the Departments of Health and Human Services (HHS) and Justice (DOJ) issue a joint annual report to Congress on amounts deposited to and appropriated from the Federal Hospital Insurance (HI) Trust Fund for the HCFAC program. In April 2005, GAO reported on the results of its review of HCFAC program activities for fiscal years 2002 and 2003 and made recommendations to HHS and DOJ. The objectives of this requested review were to assess the extent to which HHS and DOJ (1) took actions to address the recommendations made in the 2005 report and (2) designed effective controls over reporting HCFAC deposits and expenditures for fiscal years 2008 and 2009. GAO reviewed HHS and DOJ documentation; selected nongeneralizable samples; and interviewed agency officials. Although HHS and DOJ have taken action to address GAO's previous recommendations aimed at improving procedures for recording HCFAC expenditures and issuing the annual HCFAC report, GAO found that controls are not sufficient to ensure that the report is accurate and supported. HHS and DOJ took action to address three of the four recommendations in GAO's 2005 report related to recording staff hours in agency workload tracking systems, using the appropriate account class to record HCFAC expenditure data, and expediting the review process for issuing the annual HCFAC report. Neither agency agreed with the remaining recommendation to notify Congress on delays in issuing the HCFAC report within 1 month after missing the mandated January 1 deadline and thus, did not take action. However, in June 2010, HHS and DOJ implemented an expedited review process for completing the HCFAC report.
The fiscal year 2010 HCFAC report was issued on January 24, 2011, 23 days later than the mandated reporting date. According to DOJ officials responsible for preparing the HCFAC report, they intend to use this new expedited review process to meet the mandated deadline when preparing future year reports. Regarding the design of controls, while HHS and DOJ had designed policies and procedures for documentation that generally required the retention of documentation for 6 years, these did not provide sufficient controls to ensure adequate support of HCFAC deposits and expenditures, in accordance with internal control standards. (1) Components at both HHS and DOJ that manage HCFAC activities did not include in their respective policies and procedures controls that specified the person responsible for maintaining the records, the location of records, or a combination of both. (2) GAO found instances at HHS and DOJ where documentation could not be provided to support HCFAC expenditures, such as time and attendance reports. In addition, neither agency had sufficient monitoring controls, such as reconciliations, comparisons, and supervisory reviews, as outlined in internal control standards, to ensure accurate reporting of HCFAC deposits and expenditures. As a result, GAO found instances where data recorded in accounting and payroll systems were inconsistent with other sources such as the HI trust fund statements and agency workload tracking systems. GAO also identified presentation errors in the 2008 and 2009 annual HCFAC reports. For example, in reviewing the line item for restitution and compensatory damages, GAO found that $717 million (70 percent) of the $1.03 billion reported in the fiscal year 2009 HCFAC report was not transferred to the HI trust fund as stated in the report. These amounts, primarily related to Medicare Part B and Medicaid, were transferred to the Federal Supplementary Medical Insurance Trust Fund and the Medicaid appropriation account as required.
These inaccuracies overstated the amount of funds transferred to the HI trust fund. GAO makes 11 recommendations to HHS and DOJ to revise or develop written procedures that include documentation and monitoring controls for HCFAC activities and reporting. DOJ agreed with all four of its recommendations. Of the seven recommendations to HHS, it generally agreed with five, disagreed with one, and did not address the remaining recommendation.
National security challenges covering a broad array of areas, ranging from preparedness for an influenza pandemic to Iraqi governance and reconstruction, have necessitated using all elements of national power— including diplomatic, military, intelligence, development assistance, economic, and law enforcement support. These elements fall under the authority of numerous U.S. government agencies, requiring overarching strategies and plans to enhance agencies’ abilities to collaborate with each other, as well as with foreign, state, and local governments and nongovernmental partners. Without overarching strategies, agencies often operate independently to achieve their own objectives, increasing the risk of duplication or gaps in national security efforts that may result in wasting scarce resources and limiting program effectiveness. Strategies can enhance interagency collaboration by helping agencies develop mutually reinforcing plans and determine activities, resources, processes, and performance measures for implementing those strategies. Strategies can be focused on broad national security objectives, like the National Security Strategy issued by the President, or on a specific program or activity, like the U.S. strategy for Iraq. Strategies have been developed by the Homeland Security Council, such as the National Strategy for Homeland Security; jointly with multiple agencies, such as the National Strategy for Maritime Security, which was developed jointly by the Secretaries of Defense and Homeland Security; or by an agency that is leading an interagency effort, such as the National Intelligence Strategy, which was developed under the leadership of the Office of the Director of National Intelligence. 
Congress recognized the importance of overarching strategies to guide interagency efforts, as shown by the requirement in the fiscal year 2009 National Defense Authorization Act for the President to submit to the appropriate committees of Congress a report on a comprehensive interagency strategy for public diplomacy and strategic communication of the federal government, including benchmarks and a timetable for achieving such benchmarks, by December 31, 2009. Congress and the administration will need to examine the ability of the executive branch to develop and implement overarching strategies to enhance collaboration for national security efforts. Although some U.S. government agencies have developed or updated overarching strategies since September 11, 2001, the lack of information on roles and responsibilities and lack of coordination mechanisms in these strategies can hinder interagency collaboration. Our prior work, as well as that by national security experts, has found that strategic direction is required as the basis for collaboration toward national security goals. Overarching strategies can help agencies overcome differences in missions, cultures, and ways of doing business by providing strategic direction for activities and articulating a common outcome to collaboratively work toward. As a result, agencies can better align their activities, processes, and resources to collaborate effectively to accomplish a commonly defined outcome. Without having the strategic direction that overarching strategies can provide, agencies may develop their own individual efforts that may not be well-coordinated with that of interagency partners, thereby limiting progress in meeting national security goals. 
Defining organizational roles and responsibilities and mechanisms for coordination—one of the desirable characteristics for strategies that we have identified in our prior work—can help agencies clarify who will lead or participate in which activities, organize their joint activities and individual efforts, facilitate decision making, and address how conflicts would be resolved. The lack of overarching strategies that address roles and responsibilities and coordination mechanisms—among other desirable characteristics that we have identified in our prior work—can hinder interagency collaboration for national security programs at home and abroad. We have testified and reported that in some cases U.S. efforts have been hindered by multiple agencies pursuing individual efforts without overarching strategies detailing roles and responsibilities of organizations involved or coordination mechanisms to integrate their efforts. For example, we have found the following: Since 2005, multiple U.S. agencies—including the State Department, U.S. Agency for International Development (USAID), and Department of Defense (DOD)—had led separate efforts to improve the capacity of Iraq’s ministries to govern without overarching direction from a lead entity to integrate their efforts. As we have testified and reported, the lack of an overarching strategy contributed to U.S. efforts not meeting their goal of key Iraqi ministries having the capacity to effectively govern and assume increasing responsibility for operating, maintaining, and further investing in reconstruction projects. In July 2008 we reported that agencies involved in the Trans-Sahara Counterterrorism Partnership had not developed a comprehensive, integrated strategy for the program’s implementation. The State Department, USAID, and DOD had developed separate plans related to their respective program activities that reflect some interagency collaboration, for example, in assessing country needs for development assistance. 
However, these plans did not incorporate all of the desirable characteristics for strategies that we have previously identified. For example, we found that roles and responsibilities—particularly between the State Department and DOD—were unclear with regard to authority over DOD personnel temporarily assigned to conduct certain program activities in African countries, and DOD officials said that disagreements affected implementation of DOD’s activities in Niger. DOD suspended most of its program activities in Niger in 2007 after the ambassador limited the number of DOD personnel allowed to enter the country. State Department officials said these limits were set in part because of embassy concerns about the country’s fragile political environment as well as limited space and staff available to support DOD personnel deployed to partner countries. At the time of our May 2007 review, we found that the State Department office responsible for coordinating law enforcement agencies’ role in combating terrorism had not developed or implemented an overarching plan to use the combined capabilities of U.S. law enforcement agencies to assist foreign nations to identify, disrupt, and prosecute terrorists. Additionally, the national strategies related to this effort lacked clearly defined roles and responsibilities. In one country we visited for that review, the lack of clear roles and responsibilities led two law enforcement agencies, which were unknowingly working with different foreign law enforcement agencies, to move in on the same subject. According to foreign and U.S. law enforcement officials, such actions may have compromised other investigations. We also reported that because the national strategies related to this effort did not clarify specific roles, among other issues, law enforcement agencies were not being fully used abroad to protect U.S. citizens and interests from future terrorist attacks. 
In our work on the federal government’s pandemic influenza preparedness efforts, we noted that the Departments of Homeland Security and Health and Human Services share most federal leadership roles in implementing the pandemic influenza strategy and supporting plans; however, we reported that it was not clear how this would work in practice because their roles were unclear. The National Strategy for Pandemic Influenza and its supporting implementation plan described the Secretary of Health and Human Services as being responsible for leading the medical response in a pandemic, while the Secretary of Homeland Security would be responsible for overall domestic incident management and federal coordination. However, since a pandemic extends well beyond health and medical boundaries, to include sustaining critical infrastructure, private-sector activities, the movement of goods and services across the nation and the globe, and economic and security considerations, it is not clear when, in a pandemic, the Secretary of Health and Human Services would be in the lead and when the Secretary of Homeland Security would lead. This lack of clarity on roles and responsibilities could lead to confusion or disagreements among implementing agencies that could hinder interagency collaboration, and a federal response could be slowed as agencies resolve their roles and responsibilities following the onset of a significant outbreak. In March 2008, we reported that DOD and the intelligence community had not developed, agreed upon, or issued a national security space strategy. The United States depends on space assets to support national security activities, among other activities. Reports have long recognized the need for a strategy to guide the national security space community’s efforts in space and better integrate the activities of DOD and the intelligence community.
Moreover, Congress found in the past that DOD and the intelligence community may not be well-positioned to coordinate certain intelligence activities and programs to ensure unity of effort and avoid duplication of efforts. We reported that a draft strategy had been developed in 2004, but according to the National Security Space Office Director, the National Security Council requested that the strategy not be issued until the revised National Space Policy directive was released in October 2006. However, once the policy was issued, changes in leadership at the National Reconnaissance Office and Air Force, as well as differences in opinion and organizational differences between the defense and intelligence communities further delayed issuance of the strategy. Until a national security space strategy is issued, the defense and intelligence communities may continue to make independent decisions and use resources that are not necessarily based on national priorities, which could lead to gaps in some areas of space operations and redundancies in others. We testified in March 2009 that as the current administration clarifies its new strategy for Iraq and develops a new comprehensive strategy for Afghanistan, these strategies should incorporate the desirable characteristics we have previously identified. This includes, among other issues, the roles and responsibilities of U.S. government agencies, and mechanisms and approaches for coordinating the efforts of the wide variety of U.S. agencies and international organizations—such as DOD, the Departments of State, the Treasury, and Justice, USAID, the United Nations, and the World Bank—that have significant roles in Iraq and Afghanistan. Clearly defining and coordinating the roles, responsibilities, commitments, and activities of all organizations involved would allow the U.S. government to prioritize the spending of limited resources and avoid unnecessary duplication. 
In recent years we have issued reports recommending that U.S. government agencies, including DOD, the State Department, and others, develop or revise strategies to incorporate desirable characteristics for strategies for a range of programs and activities including humanitarian and development efforts in Somalia, the Trans-Sahara Counterterrorism Partnership, foreign assistance strategy, law enforcement agencies’ role in assisting foreign nations in combating terrorism, and meeting U.S. national security goals in Pakistan’s Federally Administered Tribal Areas. In commenting on drafts of those reports, agencies generally concurred with our recommendations. Officials from one organization—the National Counterterrorism Center—noted that at the time of our May 2007 report on law enforcement agencies’ role in assisting foreign nations in combating terrorism, it had already begun to implement our recommendations.

What steps are agencies taking to develop joint or mutually supportive strategies to guide interagency activities?

What obstacles or impediments exist to developing comprehensive strategies or plans that integrate multiple agencies’ efforts?

What specific national security challenges would be best served by overarching strategies? Who should be responsible for determining and overseeing these overarching strategies? Who should be responsible for developing the shared outcomes?

How will agencies ensure effective implementation of overarching strategies?

To what extent do strategies developed by federal agencies clearly identify priorities, milestones, and performance measures to gauge results?

What steps are federal agencies taking to ensure coordination of planning and implementation of strategies with state and local governments when appropriate?

U.S. government agencies, such as the Department of State, the U.S.
Agency for International Development (USAID), and the Department of Defense (DOD), among others, spend billions of dollars annually on various diplomatic, development, and defense missions in support of national security. At a time when our nation faces increased fiscal constraints, it is increasingly important that agencies use their resources efficiently and effectively. Achieving meaningful results in many national security–related interagency efforts requires coordinated efforts among various actors across federal agencies; foreign, state, and local governments; nongovernment organizations; and the private sector. Given the number of agencies involved in U.S. government national security efforts, it is particularly important that there be mechanisms to coordinate across agencies. However, differences in agencies’ structures, processes, and resources can hinder successful collaboration in national security, and adequate coordination mechanisms to facilitate collaboration during national security planning and execution are not always in place. Congress and the administration will need to consider the extent to which agencies’ existing structures, processes, and funding sources facilitate interagency collaboration and whether changes could enhance collaboration. Based on our prior work, organizational differences—including differences in organizational structures, planning processes, and funding sources—can hinder interagency collaboration, resulting in a patchwork of activities that can waste scarce funds and limit the overall effectiveness of federal efforts. Differences in organizational structures can hinder collaboration for national security efforts. Agencies involved in national security activities define and organize their regions differently. For example, DOD’s regional combatant commands and the State Department’s regional bureaus are aligned differently, as shown in figure 1. 
In addition to regional bureaus, the State Department is organized to interact bilaterally through U.S. embassies located within other countries. As a result of these differing structures, our prior work and that of national security experts has found that agencies must coordinate with a large number of organizations in their regional planning efforts, potentially creating gaps and overlaps in policy implementation and leading to challenges in coordinating efforts among agencies. For example, as the recent report by the Project on National Security Reform noted, U.S. government engagement with the African Union requires two of the State Department’s regional bureaus, one combatant command (however, before October 2008, such efforts would have required coordination with three combatant commands), two USAID bureaus, and the U.S. ambassador to Ethiopia. Similarly, in reporting on the State Department’s efforts to develop a framework for planning and coordinating U.S. reconstruction and stabilization operations, the State Department noted that differences between the organizational structure of civilian agencies and that of the military could make coordination more difficult, as we reported in November 2007. Agencies also have different planning processes that can hinder interagency collaboration efforts. Specifically, in a May 2007 report on interagency planning for stability operations, we noted that some civilian agencies, like the State Department, focus their planning efforts on current operations. In contrast, DOD is required to plan for a wide range of current and potential future operations. Such differences are reflected in their planning processes: we reported that the State Department does not allocate its planning resources in the same way as DOD and, as such, does not have a large pool of planners to engage in DOD’s planning process. 
We found near-universal agreement among the organizations included in that review—including DOD, the State Department, and USAID—that more interagency coordination was needed in planning. However, we have previously reported that civilian agencies generally did not receive military plans for comment as they were developed, which restricted agencies' ability to harmonize plans. Interagency collaboration during plan development is important to achieving a unified government approach in plans; however, State Department officials told us during our May 2007 review that DOD's hierarchical approach, which required Secretary of Defense approval to present aspects of plans to the National Security Council for interagency coordination, limited interagency participation in the combatant commands' plan development and had been a significant obstacle to achieving a unified governmentwide approach in those plans. DOD has taken some steps to involve other agencies in its strategic planning process through U.S. Africa Command. As we reported in February 2009, in developing its theater campaign plan, U.S. Africa Command was one of the first combatant commands to employ DOD's new planning approach, which called for collaboration among federal agencies to ensure activities are integrated and synchronized in pursuit of common goals. U.S. Africa Command officials met with representatives from 16 agencies at the beginning of the planning process to gain interagency input on its plan. Although the process is nascent, involving other U.S. government agencies at the beginning of planning may result in a better-informed plan for DOD's activities in Africa. Moreover, agencies have different funding sources for national security activities. Funding is budgeted for and appropriated by agency, rather than by functional area (such as national security or foreign aid).
The Congressional Research Service reported in December 2008 that because of this agency focus in budgeting and appropriations, there is no forum to debate which resources or combination of resources to apply to efforts, like national security, that involve multiple agencies and, therefore, the President’s budget request and congressional appropriations tend to reflect individual agency concerns. As we have previously testified, the agency-by-agency focus of the budget does not provide for the needed integrated perspective of government performance envisioned by the Government Performance and Results Act. Moreover, we reported in March 2008 that different funding arrangements for defense and national intelligence activities may complicate DOD’s efforts to incorporate intelligence, surveillance, and reconnaissance activities. While DOD develops the defense intelligence budget, some DOD organizations also receive funding through the national intelligence budget, which is developed by the Office of the Director of National Intelligence, to provide support for national intelligence efforts. According to a DOD official, disagreement about equitable funding from each budget led to the initial operating capability date being pushed back 1 year for a new space radar system. In an April 2008 Comptroller General forum on enhancing partnerships for countering transnational terrorism, some participants suggested that funding overall objectives—such as counterterrorism— rather than funding each agency would provide flexibility to allocate funding where it was needed and would have the most effect. Similarly, as part of the national security reform debate, some have recommended instituting budgeting and appropriations processes—with corresponding changes to oversight processes—based on functional areas to better ensure that the U.S. national security strategy aligns with resources available to implement it. 
Agencies receive different levels of appropriations that are used to fund all aspects of an agency's operations, including national security activities. As shown in figure 2, DOD receives significantly more funding than other key agencies involved in national security activities, such as the Departments of State and Homeland Security. As shown in figure 3, DOD also has a significantly larger workforce than other key agencies involved in national security activities. As of the end of fiscal year 2008, DOD reported having 1.4 million active duty military personnel and about 755,000 government employees, while the State Department and Department of Homeland Security reported having almost 31,000 government employees and almost 219,000 government employees and military personnel, respectively. Because of its relatively large size—in terms of appropriations and personnel—DOD has begun to perform more national security–related activities than in the past. For example, as the Congressional Research Service reported in January 2009, the proportion of U.S. bilateral official development assistance provided through DOD increased from 7 percent in calendar year 2001 to an estimated 20 percent in 2006, largely in response to stabilization and reconstruction activities in Iraq and Afghanistan. The Secretaries of Defense and State have stated in testimony that successful collaboration among civilian and military agencies requires confronting the disparity in resources, including providing greater capacity in the State Department and USAID to allow for effective civilian response and civilian-military partnership. In testimonies in April 2008 and May 2009, the former and current Secretaries of State, respectively, explained that the State Department was taking steps to become more capable and ready to handle reconstruction and development tasks in coordination with DOD.
Specifically, former Secretary of State Rice explained that the State Department had redeployed diplomats from European and Washington posts to countries of greater need; sought to increase the size of the diplomatic corps in the State Department and USAID; and was training diplomats for nontraditional roles, especially stabilization and reconstruction activities. Additionally, the current Secretary of State noted in testimonies before two congressional committees that the State Department is working with DOD and will be taking back the resources to do the work that the agency should be leading, but did not elaborate on which activities this included. Enclosure III of this report further discusses the human capital issues related to interagency collaboration for national security. Some agencies have established mechanisms to facilitate interagency collaboration—a critical step in achieving integrated approaches to national security—but challenges remain in collaboration efforts. We have found in our prior work on enhancing interagency collaboration that agencies can enhance and sustain their collaborative efforts by establishing compatible policies, procedures, and other means to operate across agency boundaries, among other practices. Some agencies have established and formalized coordination mechanisms to facilitate interagency collaboration. For example: At the time of our review, DOD’s U.S. Africa Command had undertaken efforts to integrate personnel from other U.S. government agencies into its command structure because the command is primarily focused on strengthening security cooperation with African nations and creating opportunities to bolster the capabilities of African partners, which are activities that traditionally require coordination with other agencies. DOD’s other combatant commands have also established similar coordination mechanisms. National security experts have noted that U.S. 
Southern Command has been relatively more successful than some other commands in its collaboration efforts and attributed this success, in part, to the command’s long history of interagency operations related to domestic disaster response and counterdrug missions. As we reported in March 2009, an intelligence component of the Drug Enforcement Administration rejoined the intelligence community in 2006 to provide a link to coordinate terrorism and narcotics intelligence with all intelligence community partners. According to a Department of Justice Office of the Inspector General report, intelligence community partners found the Drug Enforcement Administration’s intelligence valuable in their efforts to examine ongoing threats. DOD, State Department, and USAID officials have established processes to coordinate projects related to humanitarian relief and reconstruction funded through the Commander’s Emergency Response Program and Section 1206 program. We reported in June 2008 that Multinational Corps–Iraq guidance required DOD commanders to coordinate Commander’s Emergency Response Program projects with various elements, including local government agencies, civil affairs elements, and Provincial Reconstruction Teams. DOD, State Department, and USAID officials we interviewed for that review said that the presence of the Provincial Reconstruction Teams, as well as embedded teams, had improved coordination among programs funded by these agencies and the officials were generally satisfied with the coordination that was taking place. Similarly, Section 1206 of the National Defense Authorization Act of 2006 gave DOD the authority to spend a portion of its own appropriations to train and equip foreign militaries to undertake counterterrorism and stability operations. 
The State Department and DOD must jointly formulate all projects and coordinate their implementation, and at the time of our review the agencies had developed a coordinated process for jointly reviewing and selecting project proposals. We found that coordination in formulating proposals did not occur consistently between DOD's combatant commands and the State Department's embassy teams for those projects formulated in fiscal year 2006; however, officials reported better coordination in the formulation of fiscal year 2007 proposals. While some agencies have established mechanisms to enhance collaboration, challenges remain in facilitating interagency collaboration. We have found that some mechanisms are not formalized, may not be fully utilized, or have difficulty gaining stakeholder support, thus limiting their effectiveness in enhancing interagency collaboration. Some mechanisms may be informal. In the absence of formal coordination mechanisms, some agencies have established informal ones; however, informal mechanisms can leave agencies relying on the personalities of the officials involved to ensure effective collaboration. At DOD's U.S. Northern Command, for example, we found that successful collaboration on the command's homeland defense plan between the command and an interagency planning team was largely based on the dedicated personalities involved and the informal meetings and teleconferences they instituted. In that report we concluded that without institutionalizing the interagency planning structure, efforts to coordinate with agency partners may not continue when personnel move to their next assignments. Some mechanisms may not be fully utilized. While some agencies have put in place mechanisms to facilitate coordination on national security activities, these mechanisms are not always fully utilized.
We reported in October 2007 that the industry-specific coordinating councils that the Department of Homeland Security established to be the primary mechanism for coordinating government and private-sector efforts could be better utilized for collaboration on pandemic influenza preparedness. Specifically, we noted that these coordinating councils were primarily used to coordinate in a single area, sharing information across sectors and government, rather than to address a range of other challenges, such as unclear roles and responsibilities between federal and state governments in areas such as state border closures and vaccine distribution. In February 2009, Department of Homeland Security officials informed us that the department was working on initiatives to address potential coordination challenges in response to our recommendation. Some mechanisms have limited support from key stakeholders. While some agencies have implemented mechanisms to facilitate coordination, limited support from stakeholders can hinder collaboration efforts. Our prior work has shown that agencies’ concerns about maintaining jurisdiction over their missions and associated resources can be a significant barrier to interagency collaboration. For example, DOD initially faced resistance from key stakeholders in the creation of the U.S. Africa Command, in part due to concerns expressed by State Department officials that U.S. Africa Command would become the lead for all U.S. government activities in Africa, even though embassies lead decision making on U.S. government noncombat activities conducted in a country. 
In recent years we have issued reports recommending that the Secretaries of Defense, State, and Homeland Security and the Attorney General take a variety of actions to create collaborative organizations, including providing implementation guidance to facilitate interagency participation, developing clear guidance and procedures for interagency efforts, developing approaches to overcome differences in planning processes, creating coordinating mechanisms, and clarifying roles and responsibilities. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our April 2008 report on U.S. Northern Command's plans, we recommended that clear guidance be developed for interagency planning efforts; DOD stated that it had begun to incorporate such direction in its major planning documents and would continue to expand on this guidance in the future. Our work suggests several questions for further oversight:

- What processes, including internal agency processes, are hindering further interagency collaboration, and what changes are needed to address these challenges?
- What are the benefits of and barriers to instituting a function-based budgeting and appropriations process?
- What resources or authorities are needed to further support integrated or mutually supportive activities across agencies?
- What steps are being taken to create or utilize structures or mechanisms to develop integrated or mutually supportive plans and activities?
- What is the appropriate role for key agencies in various national security–related activities?
- What strategies might Congress and agencies use to address challenges presented by the various funding sources?

As the threats to national security have evolved over the past decades, so have the skills needed to prepare for and respond to those threats.
To effectively and efficiently address today's national security challenges, federal agencies need a qualified, well-trained workforce with the skills and experience to integrate the diverse capabilities and resources of the U.S. government. However, federal agencies do not always have the right people with the right skills in the right jobs at the right time to meet the challenges they face, including a workforce able to deploy quickly to address crises. Moreover, personnel often lack knowledge of the processes and cultures of the agencies with which they must collaborate. To help federal agencies develop a workforce that can enhance collaboration in national security, Congress and the administration may need to consider legislative and administrative changes to build personnel capacities, enhance personnel systems to promote interagency efforts, expand training opportunities, and improve strategic workforce planning, thereby enabling a more integrated approach to national security. Collaborative approaches to national security require a well-trained workforce with the skills and experience to integrate the government's diverse capabilities and resources, but some federal government agencies may lack the personnel capacity to fully participate in interagency activities. When we added strategic human capital management to our governmentwide high-risk list in 2001, we explained that "human capital shortfalls are eroding the ability of many agencies—and threatening the ability of others—to effectively, efficiently, and economically perform their missions." We also have reported that personnel shortages can threaten an organization's ability to perform missions efficiently and effectively. Moreover, some agencies also lack the capacity to deploy personnel rapidly when the nation's leaders direct a U.S. response to crises.
As a result, the initial response to a crisis could rely heavily on the deployment of military forces and require military forces to conduct missions beyond their core areas of expertise. Some federal government agencies have taken steps to improve their capacity to participate in interagency activities. For example, in response to a presidential directive and a State Department recommendation to provide a centralized, permanent civilian capacity for planning and coordinating the civilian response to stabilization and reconstruction operations, the State Department has begun establishing three civilian response entities to act as first responders to international crises. Despite these efforts, we reported in November 2007 that the State Department has experienced difficulties in establishing permanent positions and recruiting for one of these entities, the Active Response Corps. Similarly, we also reported that other agencies that have begun to develop a stabilization and reconstruction response capacity, such as the U.S. Agency for International Development (USAID) and the Department of the Treasury, have limited numbers of staff available for rapid responses to overseas crises. Moreover, some federal government agencies are experiencing personnel shortages that have impeded their ability to participate in interagency activities. For example, in February 2009 we reported that the Department of Defense’s (DOD) U.S. Africa Command was originally intended to have significant interagency representation, but that of the 52 interagency positions DOD approved for the command, as of October 2008 only 13 of these positions had been filled with experts from the State, Treasury, and Agriculture Departments; USAID; and other federal government agencies. 
DOD considered embedding personnel from other federal agencies essential because these personnel would bring knowledge of their home agencies into the command, which was expected to improve the planning and execution of the command's programs and activities and stimulate collaboration among U.S. government agencies. However, U.S. Africa Command has had limited interagency participation due in part to personnel shortages in agencies like the State Department, which initially could staff only 2 of the 15 positions requested by DOD because the department faced a 25 percent shortfall in mid-level personnel. In addition, in November 2007 we reported that the limited number of personnel that other federal government agencies could offer hindered efforts to include civilian agencies in DOD planning and exercises. Furthermore, some interagency coordination efforts have been impeded because agencies have been reluctant to detail staff to other organizations or deploy them overseas for interagency efforts due to concerns that the agency may be unable to perform its work without these employees. For example, we reported in October 2007 that in the face of resource constraints, officials in 37 state and local government information fusion centers—collaborative efforts intended to detect, prevent, investigate, and respond to criminal and terrorist activity—said they encountered challenges because federal, state, and local agencies were not always able to detail personnel to their fusion centers. Fusion centers rely on such details to staff the centers and enhance information sharing with other state and local agencies. An official at one fusion center said that, because of already limited resources in state and local agencies, it was challenging to convince these agencies to contribute personnel to the center because they viewed doing so as a loss of resources.
Moreover, we reported in November 2007 that the State Department's Office of the Coordinator for Reconstruction and Stabilization had difficulty getting the State Department's other units to release Standby Response Corps volunteers to deploy for interagency stabilization and reconstruction operations because the home units of these volunteers did not want to become short-staffed or lose high-performing staff to other operations. In the same report, we also found that other agencies reported a reluctance to deploy staff overseas or establish on-call units to support interagency stabilization and reconstruction operations because doing so would leave fewer workers available to complete the home offices' normal work requirements. In addition to the lack of personnel, many national security experts argue that federal government agencies do not have the necessary capabilities to support their national security roles and responsibilities. For example, in September 2009, we reported that 31 percent of the State Department's Foreign Service generalists and specialists in language-designated positions worldwide did not meet both the language speaking and reading proficiency requirements for their positions as of October 2008, up from 29 percent in 2005. We reported that the State Department's efforts to meet these language requirements include a combination of language training, special recruitment incentives for personnel with foreign language skills, and bonus pay for personnel with proficiency in certain languages, but the department faces several challenges to these efforts, particularly staffing shortages that limit the "personnel float" needed to allow staff to take language training.
Similarly, we reported in September 2008 that USAID officials at some overseas missions told us that they did not receive adequate and timely acquisition and assistance support at times, in part because the numbers of USAID staff were insufficient or because the USAID staff lacked necessary competencies. National security experts have expressed concerns that unless the full range of civilian and military expertise and capabilities are effective and available in sufficient capacity, decision makers will be unable to manage and resolve national security issues. In the absence of sufficient personnel, some agencies have relied on contractors to fill roles that traditionally had been performed by government employees. As we explained in October 2008, DOD, the State Department, and USAID have relied extensively on contractors to support troops and civilian personnel and to oversee and carry out reconstruction efforts in Iraq and Afghanistan. While the use of contractors to support U.S. military operations is not new, the number of contractors and the work they were performing in Iraq and Afghanistan represent an increased reliance on contractors to carry out agency missions. Moreover, as agencies have relied more heavily on contractors to provide professional, administrative, and management support services, we previously reported that some agencies had hired contractors for sensitive positions in reaction to a shortfall in the government workforce rather than as a planned strategy to help achieve an agency mission. For example, our prior work has shown that DOD relied heavily on contractor personnel to augment its in-house workforce. In our March 2008 report on defense contracting issues, we reported that in 15 of the 21 DOD offices we reviewed, contractor personnel outnumbered DOD personnel and constituted as much as 88 percent of the workforce. 
While use of contractors provides the government certain benefits, such as increased flexibility in fulfilling immediate needs, we and others have raised concerns about the federal government’s services contracting. These concerns include the risk of paying more than necessary for work, the risk of loss of government control over and accountability for policy and program decisions, the potential for improper use of personal services contracts, and the increased potential for conflicts of interest. Given the limited civilian capacity, DOD has tended to become the default responder to international and domestic events, although DOD does not always have all of the needed expertise and capabilities possessed by other federal government agencies. For example, we reported in May 2007 that DOD was playing an increased role in stability operations activities, an area that DOD directed be given priority on par with combat operations in November 2005. These activities required the department to employ an increasing number of personnel with specific skills and capabilities, such as those in civil affairs and psychological operations units. However, we found that DOD had encountered challenges in identifying stability operations capabilities and had not yet systematically identified and prioritized the full range of needed capabilities. While the services were each pursuing efforts to improve current capabilities, such as those associated with civil affairs and language skills, we stated that these initiatives may not reflect the comprehensive set of capabilities that would be needed to effectively accomplish stability operations in the future. Since then, DOD has taken steps to improve its capacity to develop and maintain capabilities and skills to perform tasks such as stabilization and reconstruction operations. For example, in June 2009, we noted the increased emphasis that DOD has placed on improving the foreign language and regional proficiency of U.S. forces. 
In February 2009, the Secretary of Defense acknowledged that the military and civilian elements of the United States' national security apparatus have grown increasingly out of balance, and he attributed this problem to a lack of civilian capacity. The 2008 National Defense Strategy notes that greater civilian participation is necessary both to make military operations successful and to relieve stress on the military. However, national security experts have noted that while rhetoric about the importance of nonmilitary capabilities has grown, funding and capabilities have remained small compared to the challenge. As a result, some national security experts have expressed concern that if DOD continues in this default responder role, it could lead to the militarization of foreign policy and may exacerbate the lack of civilian capacity. Similarly, we reported in February 2009 that State Department and USAID officials, as well as many nongovernmental organizations, believed that the creation of the U.S. Africa Command could blur the traditional boundaries among diplomacy, development, and defense, regardless of DOD's intention that this command support rather than lead U.S. efforts in Africa, thereby giving the perception of militarizing foreign policy and aid. Agencies' personnel systems do not always facilitate interagency collaboration: interagency assignments are often neither considered career-enhancing nor recognized in agency performance management systems, which could diminish agency employees' interest in serving in interagency efforts. For example, in May 2007 we reported that the Federal Bureau of Investigation (FBI) had difficulty filling permanent overseas positions because the FBI did not provide career rewards and incentives to agents or develop a culture that promoted the importance and value of overseas duty.
As a result, permanent FBI positions were either unfilled or staffed with nonpermanent staff on temporary, short-term rotations, which limited the FBI's ability to collaborate with foreign nations to identify, disrupt, and prosecute terrorists. At the time of that review, the FBI had just begun to implement career incentives to encourage staff to volunteer for overseas duty, so we were unable to assess their effect on staffing problems. Moreover, in June 2009 we reviewed compensation policies for six agencies that deployed civilian personnel to Iraq and Afghanistan, and reported that variations in policies for such areas as overtime rate, premium pay eligibility, and deployment status could result in monetary differences of tens of thousands of dollars per year. OPM acknowledged that laws and agency policy could result in federal government agencies paying different amounts of compensation to deployed civilians at equivalent pay grades who are working under the same conditions and facing the same risks. In addition, we previously identified reinforcing individual accountability for collaborative efforts through agency performance management systems as a key practice that can help enhance and sustain collaboration among federal agencies. However, our prior work has shown that assignments that involve collaborating with other agencies may not be rewarded. For example, in April 2009 we reported that officials from the Departments of Commerce, Energy, Health and Human Services, and the Treasury stated that providing support for State Department foreign assistance program processes creates an additional workload that is neither recognized by their agencies nor included as a factor in their performance ratings. Furthermore, agency personnel systems may not readily facilitate assigning personnel from one agency to another, which could hinder interagency collaboration.
For example, we testified in July 2008 that, according to DOD officials, personnel systems among federal agencies were incompatible, which did not readily facilitate the assignment of non-DOD personnel into the new U.S. Africa Command. Increased training opportunities and strategic workforce planning are two tools that could improve federal agencies' ability to participate fully in interagency collaboration activities. We have previously testified that agencies need effective training and development programs to address gaps in the skills and competencies they have identified in their workforces. Training and developing personnel to fill new and different roles will play a crucial part in the federal government's endeavors to meet its transformation challenges. Some agencies have ongoing efforts to educate senior leaders about the importance of interagency collaboration. For example, we reported in February 2009 that DOD's 2008 update to its civilian human capital strategic plan identifies the need for senior leaders to understand interagency roles and responsibilities as a necessary leadership capability. We explained that DOD's new Defense Senior Leader Development Program focuses on developing senior leaders to excel in the 21st century's joint, interagency, and multinational environment and supports the governmentwide effort to foster interagency cooperation and information sharing. Training can help personnel develop the skills and understanding of other agencies' capabilities needed to facilitate interagency collaboration. A lack of understanding of other agencies' cultures, processes, and core capabilities can hamper U.S. national security partners' ability to work together effectively. However, civilian professionals have had limited opportunities to participate in interagency training or education.
For example, we reported in November 2007 that the State Department did not have the capacity at that time to ensure that its Standby Response Corps volunteers were properly trained for participating in stabilization and reconstruction operations because the Foreign Service Institute did not have the capacity to train the 1,500 new volunteers the State Department planned to recruit in 2009. Efforts such as the National Security Professional Development Program, an initiative launched in May 2007, are designed to provide the training necessary to improve the ability of U.S. government personnel to address a range of interagency issues. When it is fully established and implemented, this program is intended to use intergovernmental training and professional education to provide national security professionals with a breadth and depth of knowledge and skills in areas common to international and homeland security. It is intended to educate national security professionals in capabilities such as collaborating with other agencies, and planning and managing interagency operations. A July 2008 Congressional Research Service report stated that many officials and observers have contended that legislation would be necessary to ensure the success of any interagency career development program because, without the assurance that a program would continue into the future, individuals might be less likely to risk the investment of their time, and agencies might be less likely to risk the investment of their resources. Some national security experts say that implementation of the program has lagged, but that the program could be reenergized with high-level attention. The Executive Director of the National Security Professional Development Integration Office testified in April 2009 that the current administration is in strong agreement with the overall intent for the program and was developing a way ahead to build on past successes while charting new directions where necessary. 
Agencies also can use strategic workforce planning as a tool to support their efforts to secure the personnel resources needed to collaborate in interagency missions. In our prior work, we have found that tools like strategic workforce planning and human capital strategies are integral to managing resources as they enable an agency to define staffing levels, identify critical skills needed to achieve its mission, and eliminate or mitigate gaps between current and future skills and competencies. In designating strategic human capital management as a governmentwide high-risk area in 2001, we explained that it is critically important that federal agencies put greater focus on workforce planning and take the necessary steps to build, sustain, and effectively deploy the skilled, knowledgeable, diverse, and performance-oriented workforce needed to meet the current and emerging needs of government and its citizens. Strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring agencies have the talent they need for future challenges, which may include interagency collaboration. Without integrating strategic human capital planning with broader organizational strategic planning, agencies may lose experienced staff and talent. For example, in July 2009 we reported that the State Department could not determine whether it met its objective of retaining experienced staff while restructuring its Arms Control and Nonproliferation Bureaus because there were no measurable goals for retention of experienced staff. As a result, some offices affected by the restructuring experienced significant losses in staff expertise. Additionally, in March 2007 we testified that one of the critical needs addressed by strategic workforce planning is developing long-term strategies for acquiring, developing, motivating, and retaining staff to achieve programmatic goals. 
We also stated that agencies need to strengthen their efforts and use of available flexibilities to acquire, develop, motivate, and retain talent to address gaps in talent due to changes in the knowledge, skills, and competencies in occupations needed to meet their missions. For example, in September 2008 we reported that USAID lacked the capacity to develop and implement a strategic acquisition and assistance workforce plan that could enable the agency to better match staff levels to changing workloads because it had not collected comprehensive information on the competencies—including knowledge, skills, abilities, and experience levels—of its overseas acquisition and assistance specialists. We explained that USAID could use this information to better identify its critical staffing needs and adjust its staffing patterns to meet those needs and address workload imbalances. Furthermore, in December 2005 we reported that the Office of the U.S. Trade Representative, a small trade agency that receives support from other larger agencies (e.g., the Departments of Commerce, State, and Agriculture) in doing its work, did not formally discuss or plan human capital resources at the interagency level, even though it must depend on the availability of these critical resources to achieve its mission. Such interagency planning also would facilitate human capital planning by the other agencies that work with the Office of the U.S. Trade Representative; those agencies stated that potential budget cuts could result in fewer resources being available to support the trade agency. Because the Office of the U.S. Trade Representative did not provide the other agencies with specific resource requirements during their planning, it shifted to those agencies the risk of having to later ensure the availability of staff in support of the trade agenda, potentially straining their ability to achieve other agency missions.
In recent years we have recommended that the Secretaries of State and Defense, the Administrator of USAID, and the U.S. Trade Representative take a variety of actions to address the human capital issues discussed above, such as staffing shortfalls, training, and strategic planning. Specifically, we have made recommendations to develop strategic human capital management systems and undertake strategic human capital planning; include measurable goals in strategic plans; identify the appropriate mix of contractor and government employees needed and develop plans to fill those needs; seek formal commitments from contributing agencies to provide personnel to meet interagency personnel requirements; develop alternative ways to obtain interagency perspectives in the event that interagency personnel cannot be provided due to resource limitations; develop and implement long-term workforce management plans; and implement a training program to ensure employees develop and maintain needed skills. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our April 2009 report on foreign aid reform, we recommended that the State Department develop a long-term workforce management plan to periodically assess its workforce capacity to manage foreign assistance. The State Department noted in its comments that it concurs with the idea of further improving employee skill sets and would work to encourage and implement further training.

Oversight Questions: What incentives are needed to encourage agencies to share personnel with other agencies? How can agencies overcome cultural differences to enhance collaboration to achieve greater unity of effort? How can agencies expand training opportunities for integrating civilian and military personnel?
What changes in agency personnel systems are needed to address human capital challenges that impede agencies’ ability to properly staff interagency collaboration efforts? What incentives are needed to encourage employees in national security agencies to seek interagency experience, training, and work opportunities? How can agencies effectively meet their primary missions and support interagency activities in light of the resource constraints they face? How can agencies increase staffing of interagency functions across the national security community? What are the benefits and drawbacks to enacting legislation to support the National Security Professional Development Program? What legislative changes might enable agencies to develop a workforce that can enhance collaboration in national security activities?

The government’s single greatest failure preceding the September 11, 2001, attacks was the inability of federal agencies to effectively share information about suspected terrorists and their activities, according to the Vice Chair of the 9/11 Commission. As such, sharing and integrating national security information among federal, state, local, and private- sector partners is critical to assessing and responding to current threats to our national security. At the same time, agencies must balance the need to share information with the need to protect it from widespread access. Since January 2005, we have designated information sharing for homeland security as high risk because the government has faced serious challenges in analyzing key information and disseminating it among federal, state, local, and private-sector partners in a timely, accurate, and useful way. Although federal, state, local, and private-sector partners have made progress in sharing information, challenges still remain in sharing, as well as accessing, managing, and integrating information.
Congress and the administration will need to ensure that agencies remain committed to sharing relevant national security information, increasing access to necessary information, and effectively managing and integrating information across multiple agencies. Our prior work has shown that agencies do not always share relevant information with their national security partners, including other federal government agencies, state and local governments, and the private sector. Information is a crucial tool in addressing national security issues and its timely dissemination is absolutely critical for maintaining national security. Information relevant to national security includes terrorism- related information, drug intelligence, and planning information for interagency operations. As a result of the lack of information sharing, federal, state, and local governments may not have all the information they need to analyze threats and vulnerabilities. More than 8 years after 9/11, federal, state, and local governments, and private-sector partners are making progress in sharing terrorism-related information. For example, we reported in October 2007 that most states and many local governments had established fusion centers— collaborative efforts to detect, prevent, investigate, and respond to criminal and terrorist activity—to address gaps in information sharing. In addition, in October 2008 we reported that the Department of Homeland Security was replacing its information-sharing system with a follow-on system. In our analysis of the follow-on system, however, we found that the Department of Homeland Security had not fully defined requirements or ways to better manage risks for the next version of its information- sharing system. 
Additionally, in January 2009 we reported that the Department of Homeland Security was implementing an information- sharing policy and governance structure to improve how it collects, analyzes, and shares homeland security information across the department and with state and local partners. Based on our prior work, we identified four key reasons that agencies may not always share all relevant information with their national security partners. Concerns about agencies’ ability to protect shared information or use that information properly. Since national security information is sensitive by its nature, agencies and private-sector partners are sometimes hesitant to share information because they are uncertain if that information can be protected by the recipient or will be used properly. For example, in March 2006, we reported that Department of Homeland Security officials expressed concerns about sharing terrorism-related information with state and local partners because such information had occasionally been posted on public Internet sites or otherwise compromised. Similarly, in April 2006, we reported that private-sector partners were reluctant to share critical-infrastructure information—such as information on banking and financial institutions, energy production, and telecommunications networks—due to concerns on how the information would be used and the ability of other agencies to keep that information secure. Cultural factors or political concerns. Agencies may not share information because doing so may be outside their organizational cultures or because of political concerns, such as exposing potential vulnerabilities within the agency. 
As we noted in enclosure II of this report, we stated in a May 2007 report on interagency planning for stability operations that State Department officials told us that the Department of Defense’s (DOD) hierarchical approach to sharing military plans, which required Secretary of Defense approval to present aspects of plans to the National Security Council for interagency coordination, limited interagency participation in the combatant commands’ plan development and had been a significant obstacle to achieving a unified governmentwide approach in those plans. Moreover, in our September 2009 report on DOD’s U.S. Northern Command’s (NORTHCOM) exercise program, we noted that inconsistencies with how NORTHCOM involved states in planning, conducting, and assessing exercises occurred in part because NORTHCOM officials lacked experience in dealing with the differing emergency management structures, capabilities, and needs of the states. Additionally, in our April 2008 report on NORTHCOM’s coordination with state governments, we noted that the legal and historical limits of the nation’s constitutional federal-state structure posed a unique challenge for NORTHCOM in mission preparation. That is, NORTHCOM may need to assist states with civil support, which means that NORTHCOM must consider the jurisdictions of 49 state governments and the District of Columbia when planning its missions. NORTHCOM found that some state and local governments were reluctant to share their emergency response plans with NORTHCOM for fear that DOD would “grade” their plans or publicize potential capability gaps, with an accompanying political cost. Lack of clear guidelines, policies, or agreements for coordinating with other agencies. Agencies have diverse requirements and practices for protecting their information, and thus may not share information without clearly defined guidelines, policies, or agreements for doing so. 
We reported in April 2008 that NORTHCOM generally was not familiar with state emergency response plans because there were no guidelines for gaining access to those plans. As a result, NORTHCOM did not know what state capabilities existed, increasing the risk that NORTHCOM may not be prepared with the resources needed to respond to homeland defense and civil support operations. We also reported in March 2009 on the lack of information sharing between the Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE). Since 9/11, DEA has supported U.S. counterterrorism efforts by prioritizing drug-trafficking cases linked to terrorism. DEA partners with federal, state, and local agencies—including ICE—to leverage counternarcotics resources. However, at the time of that review, ICE did not fully participate in two multiagency intelligence centers and did not share all of its drug-related intelligence with DEA. In one center, ICE did not participate because the agencies had not reached an agreement on the types of data ICE would provide and how sensitive confidential source information would be safeguarded. Without ICE’s drug-related intelligence, DEA could not effectively target major drug-trafficking organizations due to the potential for overlapping investigations and officer safety concerns. Security clearance issues. Agencies often have different ways of classifying information and different security clearance requirements and procedures that pose challenges to effective information sharing across agencies. In some cases, some national security partners do not have the clearances required to access national security information. Specifically, we reported in May 2007 that non-DOD personnel could not access some DOD planning documents or participate in planning sessions because they may not have had the proper security clearances, hindering interagency participation in the development of military plans.
Additionally, in October 2007 we reported that some state and local fusion center officials cited that the length of time needed to obtain clearances and the lack of reciprocity, whereby an agency did not accept a clearance granted by another agency, prevented employees from accessing necessary information to perform their duties. In other cases, access to classified information can be limited by one partner, which can hinder integrated national security efforts. For example, we reported that DOD established the National Security Space Office to integrate efforts between DOD and the National Reconnaissance Office, a defense intelligence agency jointly managed by the Secretary of Defense and the Director of National Intelligence. However, in 2005, the National Reconnaissance Office Director withdrew full access to a classified information-sharing network from the National Security Space Office, which inhibited efforts to further integrate defense and national space activities, including intelligence, surveillance, and reconnaissance activities. When agencies do share information, managing and integrating information from multiple sources presents challenges regarding redundancies in information sharing, unclear roles and responsibilities, and data comparability. As the Congressional Research Service reported in January 2008, one argument for fusing a broader range of data, including nontraditional data sources, is to help create a more comprehensive threat picture. The 9/11 Commission Report stated that because no one agency or organization holds all relevant information, information from all relevant sources needs to be integrated in order to “connect the dots.” Without integration, agencies may not receive all relevant information. Some progress had been made in managing and integrating information from multiple agencies by streamlining usage of the “sensitive but unclassified” designation. 
In March 2006, we reported that the large number of sensitive but unclassified designations used to protect mission-critical information and a lack of consistent policies for their use created difficulties in sharing information by potentially restricting material unnecessarily or disseminating information that should be restricted. We subsequently testified in July 2008 that the President had adopted “controlled unclassified information” as the single categorical designation for sensitive but unclassified information throughout the executive branch and outlined a framework for identifying, marking, safeguarding, and disseminating this information. As we testified, a more streamlined definition and consistent application of policies for designating “controlled unclassified information” may help reduce difficulties in sharing information; however, monitoring agencies’ compliance will help ensure that the policy is employed consistently across the federal government. Based on our previous work, we identified three challenges posed by managing and integrating information drawn from multiple sources. Redundancies when integrating information. Identical or similar types of information are collected by or submitted to multiple agencies, so integrating or sharing this information can lead to redundancies. For example, we reported in October 2007 that in intelligence fusion centers, multiple information systems created redundancies of information that made it difficult to discern what was relevant. As a result, end users were overwhelmed with duplicative information from multiple sources. Similarly, we reported in December 2008 that in Louisiana, reconstruction project information had to be repeatedly resubmitted separately to state and Federal Emergency Management Agency officials during post–Hurricane Katrina reconstruction efforts because the system used to track project information did not facilitate the exchange of documents.
Information was sometimes lost during this exchange, requiring state officials to resubmit the information, creating redundancies and duplication of effort. As a result, reconstruction efforts in Louisiana were delayed. Unclear roles and responsibilities. Agency personnel may be unclear about their roles and responsibilities in the information-sharing process, which may impede information-sharing efforts. For example, we reported in April 2005 that officials in Coast Guard field offices did not clearly understand their role in helping nonfederal employees through the security clearance process. Although Coast Guard headquarters officials requested that Coast Guard field officials submit the names of nonfederal officials needing a security clearance, some Coast Guard field officials did not clearly understand that they were responsible for contacting nonfederal officials about the clearance process and thought that Coast Guard headquarters was processing security clearances for nonfederal officials. As a result of this misunderstanding, nonfederal employees did not receive their security clearances in a timely manner and could not access important security-related information that could have aided them in identifying or deterring illegal activities. Data may not be comparable across agencies. Agencies’ respective missions drive the types of data they collect, and so data may not be comparable across agencies. For example, we reported in October 2008 that biometric data, such as fingerprints and iris images, collected in DOD field activities such as those in Iraq and Afghanistan, were not comparable with data collected by other units or with large federal databases that store biometric data, such as the Department of Homeland Security biometric database or the Federal Bureau of Investigation (FBI) fingerprint database. 
For example, if a unit collects only iris images, this data cannot be used to match fingerprints collected by another unit or agency, such as in the FBI fingerprint database. A lack of comparable data, especially for use in DOD field activities, prevents agencies from determining whether the individuals they encounter are friend, foe, or neutral, and may put forces at risk. Since 2005, we have recommended that the Secretaries of Defense, Homeland Security, and State establish or clarify guidelines, agreements, or procedures for sharing a wide range of national security information, such as planning information, terrorism-related information, and reconstruction project information. We have recommended that such guidelines, agreements, and procedures define and communicate how shared information will be protected; include provisions to involve and obtain information from nonfederal partners in the planning process; ensure that agencies fully participate in interagency information-sharing efforts; identify and disseminate practices to facilitate more effective communication among federal, state, and local agencies; clarify roles and responsibilities in the information-sharing process; and establish baseline standards for data collecting to ensure comparability across agencies. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our December 2008 report on the Federal Emergency Management Agency’s public assistance grant program, we recommended that the Federal Emergency Management Agency improve information sharing within the public assistance process by identifying and disseminating practices that facilitate more effective communication among federal, state, and local entities. 
In comments on a draft of the report, the Federal Emergency Management Agency generally concurred with the recommendation and noted that it was making a concerted effort to improve collaboration and information sharing within the public assistance process. Moreover, agencies have implemented some of our past recommendations. For example, in our April 2006 report on protecting and sharing critical infrastructure information, we recommended that the Department of Homeland Security define and communicate to the private sector what information is needed and how the information would be used. The Department of Homeland Security concurred with our recommendation and, in response, has made available, through its public Web site, answers to frequently asked questions that define the type of information collected and what it is used for, as well as how the information will be accessed, handled, and used by federal, state, and local government employees and their contractors.

Oversight Questions: What steps are needed to develop and implement interagency protocols for sharing information? What steps are being taken to promote access to relevant databases? How do agencies balance the need to keep information secure and the need to share information to maximize interagency efforts? How can agencies encourage effective information sharing? What are ways in which the security clearance process can be streamlined and security clearance reciprocity among agencies can be ensured?

In addition, the following staff contributed to the report: John H. Pendleton, Director; Marie Mak, Assistant Director; Hilary Benedict; Cathleen Berrick; Renee Brown; Leigh Caraher; Grace Cho; Joe Christoff; Elizabeth Curda; Judy McCloskey; Lorelei St. James; and Bernice Steinhardt.

Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009.
Influenza Pandemic: Continued Focus on the Nation’s Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009.
U.S. Public Diplomacy: Key Issues for Congressional Oversight. GAO-09-679SP. Washington, D.C.: May 27, 2009.
Military Operations: Actions Needed to Improve Oversight and Interagency Coordination for the Commander’s Emergency Response Program in Afghanistan. GAO-09-61. Washington, D.C.: May 18, 2009.
Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: April 17, 2009.
Iraq and Afghanistan: Security, Economic, and Governance Challenges to Rebuilding Efforts Should Be Addressed in U.S. Strategies. GAO-09-476T. Washington, D.C.: March 25, 2009.
Drug Control: Better Coordination with the Department of Homeland Security and an Updated Accountability Framework Can Further Enhance DEA’s Efforts to Meet Post-9/11 Responsibilities. GAO-09-63. Washington, D.C.: March 20, 2009.
Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009.
Combating Terrorism: Actions Needed to Enhance Implementation of Trans-Sahara Counterterrorism Partnership. GAO-08-860. Washington, D.C.: July 31, 2008.
Information Sharing: Definition of the Results to Be Achieved in Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-637T. Washington, D.C.: July 23, 2008.
Highlights of a GAO Forum: Enhancing U.S. Partnerships in Countering Transnational Terrorism. GAO-08-887SP. Washington, D.C.: July 2008.
Stabilization and Reconstruction: Actions Are Needed to Develop a Planning and Coordination Framework and Establish the Civilian Reserve Corps. GAO-08-39. Washington, D.C.: November 6, 2007.
Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007.
Military Operations: Actions Needed to Improve DOD’s Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007.
Combating Terrorism: Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists. GAO-07-697. Washington, D.C.: May 25, 2007.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
While national security activities, which range from planning for an influenza pandemic to Iraq reconstruction, require collaboration among multiple agencies, the mechanisms used for such activities may not provide the means for interagency collaboration needed to meet modern national security challenges. To assist the 111th Congress and the new administration in developing their oversight and management agendas, this report, which was performed under the Comptroller General's authority, addresses actions needed to enhance interagency collaboration for national security activities: (1) the development and implementation of overarching, integrated strategies; (2) the creation of collaborative organizations; (3) the development of a well-trained workforce; and (4) the sharing and integration of national security information across agencies. This report is based largely on a body of GAO work issued since 2005. Based on prior work, GAO has found that agencies need to take the following actions to enhance interagency collaboration for national security: Develop and implement overarching strategies. Although some U.S. government agencies have developed or updated overarching strategies on national security issues, GAO has reported that in some cases, such as U.S. government efforts to improve the capacity of Iraq's ministries to govern, U.S. efforts have been hindered by multiple agencies pursuing individual efforts without an overarching strategy. In particular, a strategy defining organizational roles and responsibilities and coordination mechanisms can help agencies clarify who will lead or participate in activities, organize their joint and individual efforts, and facilitate decision making. Create collaborative organizations. Organizational differences--including differences in agencies' structures, planning processes, and funding sources--can hinder interagency collaboration, potentially wasting scarce funds and limiting the effectiveness of federal efforts. 
For example, defense and national intelligence activities are funded through separate budgets. Disagreement about funding from each budget led to the initial operating capability date being pushed back 1 year for a new space radar system. Coordination mechanisms are not always formalized or not fully utilized, potentially limiting their effectiveness in enhancing interagency collaboration. Develop a well-trained workforce. Collaborative approaches to national security require a well-trained workforce with the skills and experience to integrate the government's diverse capabilities and resources, but some federal government agencies lack the personnel capacity to fully participate in interagency activities. Some federal agencies have taken steps to improve their capacity to participate in interagency activities, but personnel shortages have impeded agencies' ability to participate in these activities, such as efforts to integrate personnel from other federal government agencies into the Department of Defense's (DOD) new U.S. Africa Command. Increased training opportunities and strategic workforce planning efforts could facilitate federal agencies' ability to fully participate in interagency collaboration activities. Share and integrate national security information across agencies. Information is a crucial tool in national security and its timely dissemination is critical for maintaining national security. However, despite progress made in sharing terrorism-related information, agencies and private-sector partners do not always share relevant information with their national security partners due to a lack of clear guidelines for sharing information and security clearance issues. For example, GAO found that non-DOD personnel could not access some DOD planning documents or participate in planning sessions because they may not have had the proper security clearances. 
Additionally, incorporating information drawn from multiple sources poses challenges to managing and integrating that information.
The federal government’s intervention in the financial markets was carried out under a number of existing and recently enacted laws. This legal framework provided the financial resources for assistance, established the federal government’s authorities, and set the restrictions companies were required to comply with in exchange for the financial assistance. To help the public understand its involvement in the companies, the administration published in May 2009 a set of core principles that are to guide the government’s management of ownership interests in private firms. Most of the institutions in which the government had or has an ownership interest are regulated by one of several financial regulators, each of which has a role in overseeing the financial condition and operations of its regulated entities. The federal government’s efforts in late 2008 to stabilize the financial markets are not its first intervention in private markets during economic downturns. The government has previously undertaken large-scale financial assistance efforts, including assistance to private companies. For example, in the 1970s and early 1980s Congress created separate financial assistance programs totaling more than $12 billion to stabilize Conrail, Lockheed, and Chrysler, with most of the funds distributed in the form of loans or loan guarantees. Most recently, in response to the most severe financial crisis since the Great Depression, Congress provided Treasury additional authority to stabilize the financial system.
In particular: In July 2008, Congress passed the Housing and Economic Recovery Act of 2008 (HERA), which established FHFA—the agency responsible for overseeing the safety and soundness and the housing missions of the Enterprises and the other housing government-sponsored enterprises, namely, the Federal Home Loan Banks—and, among other things, provided expanded authority to place the Enterprises in conservatorship or receivership and provided Treasury with certain authorities to support the Enterprises financially. In accordance with HERA, on September 6, 2008, FHFA placed the Enterprises into conservatorship because of concern that their deteriorating financial condition ($5.4 trillion in outstanding obligations) would destabilize the financial system. The goals of the conservatorships are to preserve and conserve the assets and property of the Enterprises and enhance their ability to fulfill their missions. FHFA has the authority to manage the Enterprises and holds the powers of their boards of directors, officers, and shareholders. Treasury agreed to provide substantial financial support so that the Enterprises could continue as going concerns to support mortgage financing; subsequently, the Federal Reserve Board committed to a variety of activities, including purchasing substantial amounts of their debt and securities, to support housing finance, housing markets, and the financial markets more generally. In October 2008, Congress passed EESA, which authorized the creation of TARP to, among other things, buy up to $700 billion in troubled assets, such as mortgage-backed securities and any other financial instrument that the Secretary of the Treasury, in consultation with the Chairman of the Federal Reserve Board, determined it needed to purchase to help stabilize the financial system.
EESA created OFS within Treasury to administer TARP, which comprises a number of programs designed to address various aspects of the unfolding financial crisis. Early in the program, Treasury determined that providing capital infusions would be the fastest and most effective way to address the crisis. In return for these capital infusions, Treasury received equity in the hundreds of companies that participated in the program, and recipients were subject to certain requirements and restrictions, such as dividend requirements and limits on executive compensation. The American Recovery and Reinvestment Act of 2009 (Recovery Act) amended and expanded EESA’s executive compensation provisions and directed Treasury to require appropriate standards for executive compensation and corporate governance of TARP recipients. On June 10, 2009, Treasury adopted an interim final rule to implement the law, limiting compensation, providing guidance on the executive compensation and corporate governance provisions of EESA, and setting forth certain additional standards pursuant to its authority under EESA.
The requirements for executive compensation generally include: (1) limits on compensation that exclude incentives for senior executive officers to take unnecessary and excessive risks that threaten the value of TARP recipients; (2) provision for the recovery of any bonus, retention award, or incentive compensation paid to certain executives based on materially inaccurate statements of earnings, revenues, gains, or other criteria; (3) prohibition on “golden parachute” payments to certain executives; (4) prohibition on the payment or accrual of bonuses, retention awards, or incentive compensation to certain executives; and (5) prohibition on employee compensation plans that would encourage manipulation of the earnings reported by TARP recipients to enhance employees’ compensation. The regulation required the establishment of the Office of the Special Master for TARP Executive Compensation (Special Master) to review the compensation payments and structures of TARP recipients of “exceptional financial assistance,” which includes all of the companies in our study except the government-sponsored Enterprises. The Senior Preferred Stock Agreements between Treasury and the Enterprises, negotiated prior to EESA and the Recovery Act, included a requirement that FHFA consult with Treasury on matters relating to executive compensation. A number of programs under TARP—designed to help stabilize institutions and financial markets—have resulted in Treasury having an ownership interest in such institutions. The Capital Purchase Program (CPP) is the largest TARP program and at its peak had more than 700 participants, including Bank of America and Citigroup. Created in October 2008, it aimed to stabilize the financial system by providing capital to viable banks through the purchase of preferred shares and subordinated debentures.
These transactions generally provide that the banks pay fixed dividends on the preferred shares, that the debentures accrue interest, and that the banks issue a warrant to purchase common stock, preferred shares, or additional senior debt instruments. The Targeted Investment Program (TIP), established in December 2008, was designed to prevent a loss of confidence in financial institutions that could (1) result in significant market disruptions, (2) threaten the financial strength of similarly situated financial institutions, (3) impair broader financial markets, and (4) undermine the overall economy. Treasury determined the forms, terms, and conditions of any investments made under this program and considered institutions for approval on a case-by-case basis. Treasury required participating institutions to provide warrants or alternative consideration, as necessary, to minimize the long-term costs and maximize the benefits to the taxpayers, in accordance with EESA. Only two institutions participated in TIP, Bank of America and Citigroup, and both repurchased their preferred shares and trust preferred shares, respectively, from Treasury in December 2009. Treasury has since terminated the program. The Asset Guarantee Program (AGP) was created in November 2008 to provide a federal government guarantee for assets held by financial institutions that had been deemed critical to the functioning of the U.S. financial system. The goal of AGP was to encourage investors to keep funds in the institutions. According to Treasury, placing guarantee assurances against distressed or illiquid assets was viewed as another way to help stabilize the financial system. In implementing AGP, Treasury collected a premium on the risk assumed by the government, paid in preferred shares that were later exchanged for trust preferred shares. Citigroup terminated its participation on December 23, 2009, and Treasury has since terminated AGP.
While the asset guarantee was in place, Citigroup claimed no losses and no federal funds were paid out. The AIG Investment Program—originally called the Systemically Significant Failing Institutions Program (SSFI)—was created in November 2008 to help avoid disruptions to financial markets from an institutional failure that Treasury determined would have broad ramifications for other institutions and market activities. AIG has been the only participant in this program and was provided the assistance because of its systemic importance to the financial system. The assistance provided under this program is reflected in the securities purchase agreements, which required Treasury to purchase preferred shares from AIG, entitled Treasury to dividends declared by AIG on these preferred shares, and provided warrants to purchase common stock. The Automotive Industry Financing Program (AIFP) was created in December 2008 to prevent a significant disruption to the U.S. automotive industry. Treasury determined that such a disruption would pose a systemic risk to financial market stability and have a negative effect on the U.S. economy. The program was authorized to provide funding to support automakers during restructuring, to ensure that auto suppliers to Chrysler and GM received compensation for their services and products, and to support automotive finance companies. AIFP provided sizeable loans to Chrysler and GM (including a loan to GM that was convertible into shares of GMAC that were purchased with the proceeds). Treasury loaned up to $1.5 billion to Chrysler Financial, which was fully repaid on July 14, 2009. Ultimately, the government obtained an equity stake through the restructurings and loan conversion. The Capital Assistance Program (CAP), established in February 2009, was designed to help ensure that qualified financial institutions had sufficient capital to withstand severe economic challenges.
These institutions were required to meet eligibility requirements substantially similar to those used for CPP. A key component of CAP was the Supervisory Capital Assessment Program (SCAP), under which federal bank regulators, led by the Federal Reserve, conducted capital assessments, or “stress tests,” of large financial institutions. Participation in SCAP was mandatory for the 19 largest U.S. bank holding companies (those with risk-weighted assets of $100 billion or more as of December 31, 2008). The tests were designed to determine whether these companies had enough capital to absorb losses and continue lending even if economic and market conditions were worse than expected between December 2008 and December 2010. Institutions deemed not to have sufficient capital were given 6 months to raise private capital. In conjunction with the tests, Treasury announced that it would provide capital through CAP to banks that needed additional capital but were unable to raise it through private sources. GMAC was the only institution determined to need additional capital assistance from Treasury, which it received through AIFP on December 30, 2009. Treasury announced the closure of CAP on November 9, 2009. In addition to loans and guarantees, Treasury purchased or received various types of equity investments, ranging from common stock to subordinated debentures and warrants. Recognizing the challenges associated with the federal government having an ownership interest in the private market, the administration developed several guiding principles for managing its TARP investments. According to the principles issued in March 2009, the government will:

Act as a reluctant shareholder. The government has no desire to own equity stakes in companies any longer than necessary and will seek to dispose of its ownership interests as soon as practical.
The goal is to promote strong and viable companies that can quickly be profitable and contribute to economic growth and jobs without government involvement.

Reserve the right to set up-front conditions. The government has the right to set up-front conditions to protect taxpayers, promote financial stability, and encourage growth. These conditions may include restructurings as well as changes to ensure a strong board of directors that selects management with a sound long-term vision to restore the company to profitability and to end the need for government support as quickly as is practically feasible.

Not interfere in the day-to-day management decisions of a company in which it is an investor. The government will not interfere with or exert control over day-to-day company operations. No government employees will serve on the boards or be employed by these companies.

Exercise limited voting rights. As a common shareholder, the government will vote only on core governance issues, including the selection of a company’s board of directors and major corporate events or transactions. While protecting taxpayer resources, the government has said that it intends to be extremely disciplined as to how it uses even these limited rights.

Federal financial regulators—the Federal Reserve, FHFA, FDIC, OCC, and the Office of Thrift Supervision—play a key role in regulating and monitoring financial institutions, including most of the institutions that received exceptional amounts of financial assistance. Because Bank of America, Citigroup, the Enterprises, and GMAC are all regulated financial institutions, not only were they monitored by Treasury as an investor, but they continued to be regulated and overseen by their primary federal regulator. Specifically, the Federal Reserve oversees bank holding companies—including Bank of America, Citigroup, and GMAC—to help ensure their financial solvency.
As regulated institutions, Bank of America, Citigroup, and GMAC were subject to ongoing oversight and monitoring before they received any government financial assistance and will continue to be regulated and supervised by their regulators after the assistance has been repaid. FHFA regulates and supervises the Enterprises and established their conservatorships in 2008. The Federal Reserve’s program for supervising large, complex banking organizations is based on a “continuous supervision” model that assigns to each institution a dedicated team of examiners headed by a central point of contact. The Federal Reserve regularly rates the bank holding company’s operations, including its governance structure. Throughout the crisis, the number of staff dedicated to the largest institutions has increased, as have the Federal Reserve’s oversight of and involvement in supervising the financial condition and operations of these institutions. In addition to its bank holding company regulatory and supervisory responsibilities, the Federal Reserve conducts the nation’s monetary policy by influencing monetary and credit conditions in the economy in pursuit of maximum employment, stable prices, and moderate long-term interest rates. Also, under unusual and exigent circumstances, the Federal Reserve has emergency authority to assist a financial firm that is not a depository institution. The Federal Reserve used this authority to help address the recent financial crisis, which also resulted in the government acquiring an ownership interest in AIG. Subsidiary banks of Bank of America, Citigroup, and GMAC are supervised by other federal regulators, including OCC and FDIC. For example, OCC supervises Citibank—Citigroup’s national bank. In addition, FDIC oversees the banks’ condition and operations to gauge their threat to the deposit insurance fund. It is also the primary federal supervisor of GMAC’s bank.
These bank supervisors generally use the same framework to examine banks for safety and soundness and compliance with applicable laws and regulations. As described above, they examine most aspects of a bank’s financial condition, including its management. Finally, FHFA was created in 2008 to oversee the housing enterprises, Fannie Mae and Freddie Mac. It replaced the Office of Federal Housing Enterprise Oversight and the Federal Housing Finance Board, and the Department of Housing and Urban Development’s mission authority was transferred to FHFA. The Enterprises are chartered by Congress as for-profit, shareholder-owned corporations and are currently under federal conservatorship. Using a risk-based supervisory approach, FHFA examines the Enterprises, including their corporate governance and financial condition. The federal government’s equity interest was acquired in a variety of ways and resulted from assistance aimed at stabilizing markets or market segments. Moreover, the government’s equity interest in the companies varies from company to company—ranging from preferred shares to common shares. In some cases, the government acquired an equity interest when it cancelled outstanding loans in exchange for common shares of the debtor. As of June 1, 2010, the government held an equity ownership interest in the form of preferred or common shares in the five major corporations—AIG, Chrysler, Citigroup, GM, and GMAC—and the Enterprises. As shown in figure 1, the government holds the largest share of common stock in GM, but it also holds significant common stock in GMAC and smaller amounts, in terms of percentage, of Citigroup and Chrysler. It holds significant amounts of preferred shares, convertible preferred shares, or warrants for common shares in AIG and the Enterprises as a result of the assistance provided.
Treasury provided funds to Bank of America and the Enterprises in exchange for preferred stock with no voting rights except in limited circumstances, giving the federal government an equity interest in these companies. Specifically, the government’s $45 billion investment in Bank of America—which participated in CPP and TIP—gave Treasury ownership of nonvoting preferred shares in the company: $25 billion in CPP funds and $20 billion in TIP funds. The transactions were consummated pursuant to a securities purchase agreement, and the terms of the preferred shares acquired by Treasury included the right to payment of fixed dividends and no voting rights except in limited circumstances. On December 9, 2009, Bank of America repurchased all of the preferred shares previously issued to Treasury, ending the company’s participation in TARP. The company, as required, also paid over $2.7 billion in dividends to Treasury. On March 3, 2010, Treasury auctioned its Bank of America warrants for $1.54 billion. On September 6, 2008, when FHFA placed the Enterprises into conservatorships, Treasury provided financial assistance in exchange for an equity interest. Under the transaction agreements, the Enterprises immediately issued to Treasury an aggregate of $1 billion of senior preferred stock and warrants to purchase common stock. The warrants allow Treasury to buy up to 79.9 percent of each entity’s common stock, can be exercised at any time, and are intended to help the government recover some of its investment if the Enterprises become financially viable. Under the terms of the preferred shares, Treasury is to receive dividends on the Enterprises’ senior preferred shares at 10 percent per year and, beginning March 31, 2010, quarterly commitment fees from the Enterprises, which have not yet been implemented.
Further, the preferred share terms include restrictions on the Enterprises’ authority to pay dividends on junior classes of equity, issue new stock, or dispose of assets. At the end of the first quarter of 2010, Treasury had purchased approximately $61.3 billion in Freddie Mac preferred stock and $83.6 billion in Fannie Mae preferred stock to cover losses. Because the Enterprises’ financial condition has continued to deteriorate, the amount of government assistance to them is likely to increase. The government’s most substantive role is as conservator of the Enterprises, which is discussed later. Treasury has provided funds and other financial assistance to Citigroup, GMAC, GM, and Chrysler in exchange for common shares with voting rights, giving the federal government an equity stake in these companies. For Citigroup and GMAC, the common stock strengthened their capital structures, because the markets view common equity more favorably than preferred shares. Initially, Treasury invested $25 billion in Citigroup under CPP and an additional $20 billion under TIP. Treasury also entered into a loss-sharing arrangement with Citigroup on approximately $301 billion of assets under AGP, under which Treasury assumed $5 billion of exposure after Citigroup absorbed the first $39.5 billion of losses. In exchange for this assistance, Treasury received cumulative nonvoting preferred shares and warrants to purchase common shares. FDIC also received nonvoting preferred stock for its role in AGP. Citigroup subsequently requested that Treasury exchange a portion of the preferred shares held by Treasury for common shares to facilitate an exchange of privately held preferred shares for common shares. Taken together, the Treasury and private exchanges improved the quality of Citigroup’s capital base and thereby strengthened its financial position.
From July 2009 to September 2009, Treasury exchanged its preferred shares in Citigroup for a combination of shares of common stock and trust preferred shares, giving the government a 33.6 percent ownership interest in Citigroup. Treasury now has voting rights by virtue of its common stock ownership. On December 23, 2009, Citigroup repurchased $20 billion of trust preferred shares issued to Treasury, and the Federal Reserve, FDIC, and Treasury terminated the AGP agreement. FDIC and Treasury collectively kept approximately $5.3 billion in trust preferred shares, including the warrants associated with this assistance, as payment for the asset protection provided under AGP. As of May 26, 2010, Treasury still owned almost 6.2 billion shares, or 21.4 percent, of Citigroup’s common shares, as well as warrants. Treasury’s AIFP assistance to GMAC, a bank holding company, resulted in the government owning more than half of GMAC by the end of 2009. After GMAC received approval from the Federal Reserve to become a bank holding company in December 2008, Treasury initially purchased $5 billion of GMAC’s preferred shares and received warrants to purchase an additional $250 million in preferred shares, which it exercised immediately. At the same time, Treasury agreed to lend up to $1 billion of TARP funds to GM (one of GMAC’s owners) to enable GM to purchase additional equity in GMAC. On January 16, 2009, GM borrowed $884 million under that commitment to purchase an additional interest in GMAC. Treasury terminated the loan on May 29, 2009, by exercising its option to exchange amounts due under the loan for an equity interest in GMAC. The Federal Reserve required GMAC to raise additional capital by November 2009 in connection with SCAP.
On May 21, 2009, Treasury purchased $7.5 billion of mandatory convertible preferred shares from GMAC and received warrants that it exercised at closing for an additional $375 million in such shares, which enabled GMAC to partially meet the SCAP requirements. On May 29, 2009, Treasury exercised its option to exchange its right to payment of the $884 million loan it had made to GM for 35.4 percent of the common membership interests in GMAC. Treasury officials told us that exercising the option prevented the loan from becoming part of the GM bankruptcy process and, therefore, was a measure intended to protect Treasury’s investment. According to the Federal Reserve, exercising the option strengthened GMAC’s capital structure. In November 2009, the Federal Reserve announced that GMAC had not satisfied the SCAP requirements because it was unable to raise additional capital in the private market and was expected to meet its SCAP requirement by accessing AIFP. On December 30, 2009, under AIFP, Treasury purchased an additional $1.25 billion of mandatory convertible preferred shares (receiving warrants that it exercised at closing for an additional $62.5 million in such shares) and $2.54 billion in GMAC trust preferred securities (receiving warrants that it exercised at closing for an additional $127 million in such securities). Also in December 2009, Treasury converted $3 billion of existing mandatory convertible preferred shares into common stock, increasing its equity stake from 35 percent to 56.3 percent of GMAC common stock. As of March 31, 2010, Treasury owned $11.4 billion of GMAC mandatory convertible preferred shares and almost $2.7 billion of its trust preferred securities.
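The mechanics of a conversion like GMAC’s can be sketched in a few lines of arithmetic. The share counts below are hypothetical placeholders, not Treasury’s actual GMAC figures; the sketch only illustrates why converting preferred stock into newly issued common shares raises an ownership percentage by less than the raw number of new shares would suggest, since the total share count grows as well.

```python
# Hypothetical sketch of conversion mechanics; these are illustrative
# share counts, not Treasury's actual GMAC figures.

def stake_after_conversion(held_common: float, total_common: float,
                           new_common_from_conversion: float) -> float:
    """Ownership percentage after preferred stock converts into newly
    issued common shares. The conversion adds shares to the holder's
    position and also enlarges the total share count."""
    held = held_common + new_common_from_conversion
    total = total_common + new_common_from_conversion
    return 100.0 * held / total

# A holder with 35 of 100 common shares converts preferred stock into
# 49 newly issued common shares (illustrative numbers only):
print(round(stake_after_conversion(35, 100, 49), 1))  # 56.4
```

With these placeholder inputs the stake moves from 35 percent to the mid-50s, mirroring the direction, though not the exact inputs, of the 35 percent to 56.3 percent change reported for GMAC.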
Treasury’s equity stake in GM and Chrysler was an outgrowth of the $62 billion it loaned to the companies under AIFP before the companies filed for bankruptcy in June and April 2009, respectively. Through the bankruptcy process, these loans were restructured into a combination of debt and equity ownership in the new companies. As a result, Treasury owns 60.8 percent of the common equity and holds $2.1 billion in preferred stock in “new GM.” Also, Treasury owns 9.9 percent of common equity in the “new” Chrysler. As a common shareholder, Treasury has voting rights in both companies. The Federal Reserve and Treasury provided funds to AIG under a series of transactions that ultimately resulted in the federal government owning preferred stock and a warrant to purchase common stock. While the Federal Reserve is not AIG’s regulator or supervisor, FRBNY assisted AIG by using its emergency authority under Section 13(3) of the Federal Reserve Act to support the government’s efforts to stabilize systemically significant financial institutions. In the fall of 2008, the Federal Reserve approved assistance to AIG by authorizing FRBNY to create a facility to lend AIG up to $85 billion to address its liquidity needs. As part of this agreement, AIG agreed to issue convertible preferred stock to a trust to be created on behalf of the U.S. Treasury (the AIG Credit Facility Trust). This was achieved through the establishment of an independent trust to manage the U.S. Treasury’s beneficial interest in Series C preferred shares that, as of April 2010, were convertible into approximately 79.9 percent of the common stock of AIG that would be outstanding after the conversion of the Series C preferred shares in full. 
While the Series C preferred shares initially represented 79.9 percent of the voting rights, after Treasury’s November 2009 TARP investment the Series C voting rights were reduced to 77.9 percent to account for the warrant to purchase 2 percent of the common shares that Treasury received in connection with that investment. A June 2009 20-to-1 reverse stock split adjusted the exercise price and number of shares associated with the Treasury warrant, leaving the warrants held by Treasury exercisable for approximately 0.1 percent of the common equity. Part of the outstanding debt was restructured when, as noted above, Treasury agreed to purchase $40 billion of cumulative perpetual preferred stock (Series D) and received a warrant under TARP. The proceeds were used to reduce the debt owed to FRBNY by $40 billion. To address rating agencies’ concerns about AIG’s debt-to-equity ratio, FRBNY and Treasury further restructured AIG’s assistance in April 2009. Treasury exchanged its cumulative perpetual preferred stock (Series D) for perpetual preferred stock (Series E), which is noncumulative and thus more closely resembles common equity than does the Series D preferred stock. Treasury has also provided a contingent $29.8 billion Equity Capital Facility to AIG, under which AIG issued to Treasury 300,000 shares of fixed-rate, noncumulative perpetual preferred stock (Series F). As AIG draws on the contingent capital facility, the liquidation preference of those shares automatically increases by the amount drawn. AIG also issued to Treasury a warrant to purchase up to 3,000 shares of AIG common stock. As of March 2010, the government has a beneficial interest in the Series C preferred shares held by the AIG trust, which are convertible into approximately 79.8 percent of the common shares, and the trustees have voting rights with respect to the Series C preferred shares.
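The two adjustments described above are simple arithmetic, illustrated below. Only the 79.9 percent, 2 percent, and 20-to-1 figures come from the text; the warrant share count and exercise price in the second example are hypothetical placeholders, not AIG’s actual warrant terms.

```python
# Illustrates the arithmetic described above; share counts and prices
# are hypothetical placeholders, not AIG's actual warrant terms.

def voting_rights_after_warrant(series_c_pct: float, warrant_pct: float) -> float:
    """Series C voting percentage after carving out the common stock
    covered by a separately issued warrant."""
    return series_c_pct - warrant_pct

def reverse_split_adjustment(shares: float, exercise_price: float, ratio: int):
    """Adjust a warrant for an N-to-1 reverse stock split: the share
    count shrinks by the ratio and the exercise price grows by it, so
    the warrant's aggregate exercise cost is unchanged."""
    return shares / ratio, exercise_price * ratio

# 79.9 percent reduced by the 2 percent covered by Treasury's warrant:
print(round(voting_rights_after_warrant(79.9, 2.0), 1))  # 77.9

# A 20-to-1 reverse split applied to a hypothetical warrant:
shares, price = reverse_split_adjustment(1_000_000, 2.50, 20)
assert shares * price == 1_000_000 * 2.50  # aggregate cost preserved
```

The split adjustment changes how the warrant is denominated but not its economic value, which is why it shows up in the text only as a change in the percentage of common equity the warrant can be exercised for.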
The government decided early on that in managing its ownership interests in private companies receiving exceptional TARP assistance, it would set certain conditions to protect taxpayers, promote financial stability, and encourage growth. As noted in a recent SIGTARP report, these conditions include requiring limits on or changes to the companies’ governance structures (boards of directors, senior management, executive compensation plans, lobbying and expense policies, dividend distributions, and internal controls) and the submission of compliance reports. Treasury also decided early on that it would not interfere with the daily business of the companies that received exceptional assistance—that is, it would not be running these companies. However, the level of its involvement in the companies has varied depending on the role it has assumed—investor, creditor, or conservator—as a result of the assistance it has provided. Both Treasury and the federal regulators directed that strong boards of directors and qualified senior management be in place to guide the companies’ operations. Treasury designated new directors and requested that some senior executives step down from their positions at some of the companies. Using its authority as conservator, FHFA appointed new members to the boards and senior management of the Enterprises. The federal regulators requested reviews of the qualifications of senior management at two of the companies. A significant number of new directors have been elected to the governing boards of all the companies that received federal assistance. Of the 92 directors currently serving on these boards, 73 were elected since November 2008 (table 2). The board of Chrysler, for instance, is made up entirely of new members, and more than half of the current board members of the other companies were designated after the government provided assistance.
Many of these new directors were nominated to their respective boards because it was determined that a change in leadership was required as a result of the financial crisis, while others were designated by the government and other significant shareholders as a result of their common share ownership. In addition, federal regulators asked the boards of directors at two of the companies to assess their oversight and evaluate management depth. The assessments were submitted to the regulators, and the boards of directors subsequently made changes to their composition. The terms of Treasury’s agreements with AIG and Bank of America require the expansion of a company’s board of directors if the company fails to pay dividends to Treasury for several quarters; Treasury would then have the right to designate the directors to be elected to fill the newly created vacancies on the board. While Bank of America made the required dividend payments prior to exiting TARP, AIG did not pay its required dividends. As a result, Treasury designated two new directors for election to AIG’s board on April 1, 2010; they were subsequently re-elected at the May 12, 2010, annual shareholders meeting. The trust agreement between FRBNY and the AIG trustees also provides the trustees with authority to vote the shares held in trust to elect or remove the company’s directors. In cooperation with AIG’s board, the AIG trustees were actively involved in recruiting six new directors who have experience in corporate restructuring, retail branding, or financial services, and the trustees believe that these new members will help see AIG through its financial challenges. The board, in turn, has elected two additional members to replace departing board members. The trustees stated that they kept FRBNY and Treasury officials apprised of the recruitment efforts.
Treasury’s common equity investment in Citigroup, GM, Chrysler, and GMAC also gives it voting rights on the election or removal of the directors of these governing boards, among other matters. In addition, the agreements with GM, Chrysler, and GMAC specifically authorize Treasury to designate directors to these companies’ boards. As authorized in a July 10, 2009, shareholder agreement with GM, Treasury, as the majority shareholder, designated 10 directors who were elected to GM’s board, 5 of whom were former directors of “old GM.” Based on the smaller number of common shares they owned in the company, two other GM shareholders—Canada GEN Investment Corporation (owned by the Canadian government) and a Voluntary Employee Beneficiary Association composed of GM’s union retirees—each designated one director. As authorized in a June 10, 2009, operating agreement with Chrysler, Treasury designated three of nine directors, who, in turn, collectively elected an additional member to the board. Chrysler’s other shareholders designated the other five board members, for a total of nine directors: Chrysler’s Voluntary Employee Benefit Association appointed one director, Fiat appointed three, and the Canadian government appointed one. Under the operating agreement, the number of directors that Fiat has the right to designate increases as its ownership in Chrysler increases, with a concomitant decrease in the number of directors designated by Treasury. As authorized in a May 21, 2009, governance agreement with GMAC, Treasury appointed two new directors to the board because it held 35 percent of the company’s common stock. With the conversion of $3 billion in mandatory convertible preferred shares of GMAC on December 30, 2009, Treasury’s common ownership interest increased to 56.3 percent, authorizing it to appoint two more directors. On May 26, 2010, Treasury appointed a new director to GMAC (Ally Financial Inc., formerly GMAC Financial Services).
The fourth director appointment is pending. As conservator of the Enterprises, FHFA has appointed new members to the boards of directors. The Director of FHFA has statutory authority under HERA to appoint members of the board of directors for the Enterprises based on certain criteria. FHFA’s former director, at the onset of conservatorships, decided to keep three preconservatorship board members at each Enterprise in order to provide continuity and chose the remaining directors for each board. Initially, on September 16, 2008, FHFA’s former director appointed Philip A. Laskawy and John A. Koskinen to serve as new nonexecutive chairmen of the boards of directors of the Enterprises. On November 24, 2008, FHFA reconstituted the boards of directors for the Enterprises and directed their functions and authorities. FHFA’s delegation of authority to the directors became effective on December 18-19, 2008, when new board members were appointed by FHFA. The directors exercise authority and serve on behalf of the conservator, FHFA. The conservator retains the authority to withdraw its delegations to the board and to management at any time. In addition to changes in the boards of directors, the companies receiving exceptional assistance have also made a few changes to their senior management (table 3). Some of these decisions were made by the companies’ boards of directors without consultation with Treasury or federal regulators. Specifically, Bank of America, Citigroup, and GMAC executives stated that the decisions to replace their chief executive officer (CEO) or chief financial officer (CFO) were made by the companies’ boards of directors without influence from Treasury or federal regulators. However, federal regulators had directed the banks to assess their senior management’s qualifications. 
After receiving government assistance, Bank of America’s shareholders approved an amendment to the corporation’s bylaws prohibiting any person from concurrently serving as both the company’s chairman of the board and CEO. As a result, the shareholders elected Walter Massey to replace Kenneth Lewis as chairman of the board in April 2009. Citigroup’s board of directors also appointed a new CFO in March 2009 and again in July 2009. The AIG trustees stated that they and the Treasury officials monitoring AIG’s investments were kept apprised of the selection of Robert Benmosche as the new CEO in August 2009, replacing Edward Liddy, who had been put in place as AIG’s CEO on September 18, 2008, at the request of the government to help rehabilitate the company and repay taxpayer funds. Meeting minutes provided by the AIG trustees show that the trustees and FRBNY and Treasury officials discussed the CEO search process as it was occurring. The trustees and Treasury officials also met with Benmosche before he was elected as AIG’s new CEO. According to the trustees, they encouraged the AIG board to select the most qualified CEO, but the final decision to elect Benmosche rested with AIG’s board of directors. GM’s selection of new senior managers during the restructuring process was directly influenced by Treasury. For example, in March 2009, Treasury’s Auto Team requested that Rick Wagoner, GM CEO at the time, be replaced by Frederick “Fritz” Henderson, then the GM president. According to a senior Treasury official, the Auto Team had determined that the senior leadership in place at that time was resistant to change. But rather than appointing an individual from outside GM to serve as CEO, the team asked Fritz Henderson to serve as CEO to provide some continuity in the management team. Henderson resigned on December 1, 2009, but the same Treasury official said that the Auto Team did not request his removal. The GM board of directors named Ed Whitacre to replace Henderson.
After the partnership between Chrysler and Fiat was completed, Sergio Marchionne (CEO of Fiat) was elected as Chrysler’s new CEO on June 10, 2009. Subsequent to his election, all changes to Chrysler’s senior management were made by new company leadership without Treasury’s involvement. As the conservator, the FHFA director has the authority to appoint senior-level executives at both Enterprises. On September 7, 2008, FHFA’s former director appointed Herbert M. Allison, Jr. as President and CEO of Fannie Mae and David M. Moffett as President and CEO of Freddie Mac. Michael Williams was later promoted from Chief Operating Officer to CEO of Fannie Mae, replacing Herbert M. Allison, Jr., who became Treasury’s Assistant Secretary for Financial Stability. On March 11, 2009, FHFA appointed John A. Koskinen as Freddie Mac’s interim CEO, and on July 21, 2009, Charles Haldeman was appointed CEO of Freddie Mac. As a condition of receiving assistance under TARP, recipients must adhere to the executive compensation and other requirements established under EESA and under Treasury regulations (see table 4). In addition, Treasury’s agreements with these companies included provisions requiring the companies to adopt or maintain policies regarding expenses and lobbying, report to Treasury on internal controls, certify their compliance with agreement terms, restrict the amount of executive compensation deductible for tax purposes, and limit dividend payments, among others. In prior reports, GAO and SIGTARP reviewed Treasury’s efforts in ascertaining the companies’ compliance with the key requirements in financial assistance programs, such as CPP. GAO had recommended to Treasury that it develop a process to ensure that companies participating in CPP comply with all the CPP requirements, including those associated with limitations on dividends and stock repurchase restrictions.
Over time, Treasury addressed these issues and established a structure to better ensure compliance with the agreements. Companies must adhere to the executive compensation and corporate governance rules as a condition for receiving TARP assistance. Treasury created the Office of the Special Master to, among other things, review compensation payments and structures for certain senior executive officers and most highly compensated employees at each company receiving exceptional TARP assistance. The Special Master is charged with determining whether these payments and structures under the plans are inconsistent with the purposes of the EESA executive compensation provisions and TARP or otherwise contrary to the public interest. On October 22, 2009, the Special Master issued his first determinations with respect to compensation structures and payments for the “top 25” employees of companies receiving exceptional TARP assistance. In reviewing the payment proposals the companies submitted for 2009, the Special Master noted that the companies in some cases (1) requested excessive cash salaries, (2) proposed issuance of stock that was immediately redeemable, (3) did not sufficiently tie compensation to performance-based benchmarks, (4) did not sufficiently restrict or limit financial “perks” or curb excessive severance and executive retirement benefits, and (5) did not make sufficient effort to fold guaranteed compensation contracts into performance-based compensation. As a result, he rejected most of these initial proposals and approved a modified set of compensation structures and payments. For the 2009 top 25 compensation structures and payments, table 5 shows that the Special Master required that AIG, Bank of America, and Citigroup reduce cash compensation for their top executives by more than 90 percent from the previous year.
Although Bank of America repurchased preferred shares on December 9, 2009, it agreed to remain subject to the Special Master’s determination for its top 25 employees for 2009. Similarly, Citigroup repurchased its TIP trust preferred shares on December 23, 2009, but also agreed to abide by all determinations that had been issued for 2009, including the Special Master’s requirement that Citigroup reduce its cash compensation by $244.9 million, or 96.4 percent, from 2008 levels. While Citigroup had the largest percentage cash reduction, GMAC had the largest overall reduction in total direct compensation (both cash and stock): GMAC was required to reduce its total direct compensation by $413.3 million, or more than 85 percent of 2008 levels. Table 5 also shows that the Special Master approved a compensation structure for the most highly compensated executive at AIG that provides up to $10.5 million in total direct compensation on an annual basis. On December 11, 2009, the Special Master released his second round of determinations on executive compensation packages for companies that received exceptional TARP assistance. These determinations covered compensation structures for the “next 75” most highly compensated employees, including executive officers who were not subject to the October 22, 2009, decisions. Unlike the determination for the top 25 employees, which addressed the specific amounts paid to individuals, the Special Master was required only to approve the compensation structure for this second group of employees. The determination covered four companies: AIG, Citigroup, GMAC, and GM. The Special Master also rejected most of the submitted proposals and required that they be modified to include the following features:

- Cash salaries generally no greater than $500,000, except in exceptional cases specifically certified by the company’s independent compensation committee.
- Limits on cash compensation, in most cases to 45 percent of total compensation, with all other pay in company stock in order to align executives’ interests with long-term value creation and financial stability.
- In most cases, at least 50 percent of each executive’s pay held or deferred for at least 3 years, aligning the pay each executive actually receives with the long-term value of the company.
- Payment of incentives only if the executive achieves objective performance targets, set by the company and reviewed by the Special Master, that align the executives’ interests with those of shareholders and taxpayers.
- Limits on total incentives for all covered executives to an aggregate fixed pool that is based on a specified percentage of eligible earnings or other metrics determined by the compensation committee and reviewed by the Special Master.
- A “clawback” provision covering incentive payments to covered executives that will take effect if the achievements on which the payments are based do not hold up in the long term or if an executive engages in misconduct.

On March 23, 2010, the Special Master released his determinations of compensation structures and payments (for 2010) for the top 25 employees at the five remaining firms that received exceptional TARP assistance from taxpayers: AIG, Chrysler, Chrysler Financial, GM, and GMAC. Examples of his determinations include a 63 percent decrease in cash compensation from 2009 levels for AIG executives, a 45 percent decrease for GMAC executives, and a 7.5 percent decrease for GM executives. Chrysler’s 2010 cash salary rates for its executives remained at the same level as 2009. Similar to the determination for 2009, the Special Master approved an annual compensation structure for AIG’s highest compensated executive that provides up to $10.5 million in total direct compensation. Overall, the 2010 determinations included the following significant changes:

- On average, a 33 percent decrease in overall cash payments from 2009 levels for affected executives.
- On average, a 15 percent decrease in total compensation from 2009 levels for affected executives.
- Cash salaries frozen at $500,000 or less, unless good cause is shown. Eighteen percent of executives subject to the March 2010 determinations (21 employees) were approved for cash salary rates greater than $500,000.

HERA provides the Director of FHFA, in a conservatorship, the authority to establish executive compensation parameters for both Enterprises. On December 24, 2009, the FHFA director approved the Fannie Mae and Freddie Mac 2010 compensation packages. The compensation package for each chief executive officer was established at $6 million, with each package consisting of a base pay amount of $900,000, deferred pay of $3.1 million, and long-term incentive pay of $2 million. Twelve other Fannie Mae executives and 14 other Freddie Mac executives are covered by the same system but will receive lesser amounts. The deferred pay will be paid quarterly in 2011 to executives still at the Enterprises, and half will vary based on corporate performance. The long-term incentive pay will vary according to individual and corporate performance. Pursuant to the preferred stock purchase agreements, FHFA consulted with the Special Master for TARP Executive Compensation with regard to the 2010 compensation packages. Compensation of the executives at the Enterprises is presented in the form of cash payments. According to the Special Master and the FHFA Acting Director, compensation in the form of stock was viewed as ineffective because of the questionable value of the shares and the potential incentives stock compensation might generate to take excessive risk in hopes of making the stock valuable.
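As a rough cross-check of the compensation figures above, the following sketch is purely illustrative: the dollar amounts come from the report, but the arithmetic itself is ours.

```python
# Back-of-envelope checks on compensation figures cited above.
# All dollar amounts are in millions; the figures come from the report,
# while the arithmetic is only an illustrative cross-check.

# Citigroup: a $244.9 million cash-compensation cut described as a 96.4
# percent reduction implies a 2008 cash base of roughly $254 million.
citi_cut = 244.9
citi_pct = 0.964
citi_2008_cash = citi_cut / citi_pct
print(f"Implied 2008 Citigroup cash base: ${citi_2008_cash:.1f}M")

# FHFA 2010 CEO packages: base pay + deferred pay + long-term incentive
# should sum to the stated $6 million total.
base, deferred, incentive = 0.9, 3.1, 2.0
total = base + deferred + incentive
assert abs(total - 6.0) < 1e-9
print(f"FHFA CEO package total: ${total:.1f}M")
```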
In addition to executive compensation, Treasury also placed requirements on other business activities, including expenses and luxury expenditures, lobbying, dividends and stock repurchases, and internal controls and compliance. For example, companies receiving exceptional assistance are required to implement and maintain an expense policy that covers the use of corporate aircraft, lease or acquisition of real estate, expenses related to office or facility renovations or relocations, expenses related to entertainment and holiday parties, hosting and sponsorship of conferences and events, travel accommodations and expenditures, and third-party consultations, among others. They are also required to implement and maintain a lobbying policy that covers lobbying of U.S. government officials, governmental ethics, and political activity. Furthermore, until Treasury no longer owns company debt or equity securities (e.g., common, preferred, and trust preferred stock), the companies may not declare or pay any dividends; make any distribution on the company’s common stock; or redeem, purchase, or acquire any of the company’s equity securities. They are also prohibited from redeeming or repurchasing any preferred or trust preferred stock from any holder unless the company offers to repurchase a ratable portion of the preferred shares then held by Treasury on the same terms and conditions, with limited exceptions. Lastly, the companies agreed to establish appropriate internal controls with respect to compliance with each of the requirements in the agreements. They are required to report to Treasury on a quarterly basis regarding the implementation of those controls and their compliance with the requirements (including any instances of noncompliance). They are also required to provide signed certifications from a senior officer attesting that, to the best of his or her knowledge, such reports are accurate.
Treasury states that it does not interfere with or exert control over certain activities of companies that received exceptional assistance. Nevertheless, SIGTARP and GAO found that the level of government involvement in the companies varied among the recipients, depending on whether Treasury and other federal entities are investors, creditors, or conservators. For example, Treasury’s involvement in Bank of America, Citigroup, and GMAC has been limited because, in exchange for its investments, Treasury—as an investor—initially received preferred shares that did not have voting rights except in certain limited circumstances, such as amendments to the company charter, certain mergers, and the election of directors to the companies’ boards in the event that dividends are not paid for several quarters. As of April 30, 2010, Treasury still held an ownership interest in Citigroup because of the June 9, 2009, agreement that exchanged Treasury’s preferred shares for common shares. Treasury’s initial investment in GMAC also came in the form of preferred shares with limited voting rights. As an up-front condition to its May 2009 investments in Chrysler and GMAC, Treasury played a central role in establishing the agreement reached between GMAC and Chrysler in April 2009 that made retail and wholesale financing available to Chrysler’s dealer network. Specifically, Treasury provided GMAC with $7.5 billion on May 21, 2009, of which $4 billion was to be used to support Chrysler’s dealers and consumers. According to Treasury officials, this agreement was part of the initial restructuring of the companies that was done under the auspices of the bankruptcy court, a situation that is quite different from the Bank of America and Citigroup investments.
Senior executive officers at Bank of America, Citigroup, and GMAC agreed that Treasury was not involved in the daily operations of their companies, but they noted that the federal regulators—the Federal Reserve, FDIC, and OCC—had increased and intensified their bank examinations. The executives explained that the closer scrutiny was the result of the financial crisis, and was not directly tied to TARP assistance. GMAC’s senior officers further explained that the Federal Reserve’s involvement with their company had been due, in part, to its obtaining bank holding company status upon conversion of Ally Bank (formerly known as GMAC Bank) from an industrial loan company to a commercial bank. As a result of the conversion, GMAC has had to work closely with the Federal Reserve to establish policies, procedures, and risk management practices to meet regulatory requirements of a bank holding company. As both an investor in and creditor of AIG, GM, and Chrysler, the government has been more involved in some aspects of the companies’ operations than it has been with other companies. Treasury, FRBNY, and the AIG trustees closely interact with senior management to discuss restructuring efforts, liquidity, capital structure, asset sales, staffing concerns, management quality, and overall strategic plans for the company. Members of Treasury’s AIG team meet regularly with AIG management, attend board committee meetings, and provide input on decisions that affect the direction of the company. Similarly, FRBNY (as creditor) also attends board meetings as an observer, and FRBNY and the AIG trustees (as overseers of the AIG Trust) receive various AIG financial reports, review the quality of senior management, and provide their opinions on company strategy and major business decisions. 
Treasury officials continue to monitor GM’s and Chrysler’s strength through monthly and quarterly financial, managerial, and operations-related reports, and regular meetings with senior management, but stated that they do not micromanage the companies. However, the government’s stated “hands-off” approach toward managing its equity interest applied only after GM and Chrysler exited bankruptcy. In the period before and during the bankruptcies, Treasury played a significant role in the companies’ overall restructuring and certain overarching business decisions. For example, Treasury issued viability determinations in which it stated that GM needed to decrease its number of brands and nameplates, and Chrysler needed to improve the quality of its vehicles. Treasury’s credit agreements with the automakers established additional requirements for the companies. For example, the companies are required to maintain their domestic production at certain levels, abstain from acquiring or leasing private passenger aircraft, and provide quarterly reports on internal controls. Treasury officials pointed out that another reason for the differences is that AIG, GM, and Chrysler are not subject to the extensive federal regulation that Bank of America, Citigroup, and GMAC, as bank holding companies, face. Moreover, officials believe that the path to exiting the investments in the case of AIG, GM, Chrysler, and GMAC is more complex than in the case of Bank of America and Citigroup. Under HERA, FHFA has broad authority over the Enterprises’ operations while they are in conservatorship.
The law authorizes FHFA to

- appoint members of the board of directors for both Enterprises based on certain criteria;
- prescribe appropriate regulations regarding the conduct of the conservatorships;
- immediately succeed to all powers, privileges, and assets of the regulated Enterprises;
- provide for the exercise of any functions of any stockholder, officer, or director of the entity; and
- take any actions that may be necessary to put the entity into a solvent and operationally sound state and conserve and preserve the assets of the entity.

According to FHFA officials, the agency has generally delegated significant day-to-day responsibility for running the Enterprises to the management teams that the agency has put in place for two reasons: First, FHFA has limited staff resources. Second, the Enterprises are better positioned, with the expertise and infrastructure necessary to carry out daily business activities, such as the routine purchases of mortgages from lenders and securitization of such loans. At the same time, FHFA maintains its full-time examination and supervisory programs for the Enterprises. However, FHFA, as the Enterprises’ conservator and regulator, has instituted a number of requirements, policies, and practices that involve it in the Enterprises’ operations. For example:

- Lobbying activities at both Enterprises have been dismantled and prohibited, and FHFA directly reviews all the Enterprises’ responses to members of Congress.
- Officials from FHFA’s Office of Conservatorship Operations attend the board meetings and senior executive meetings at both of the Enterprises.
- FHFA reviews and approves performance measures for both of the Enterprises. Each Enterprise has developed scorecards with criteria that focus on safety and soundness issues while at the same time aligning with loan modification goals.
- FHFA reviews SEC filings for both of the Enterprises to confirm that it has no objections.
The Division of Enterprise Regulation within FHFA was established by a statutory mandate within HERA to examine all functions of the Enterprises, with the exception of accounting examinations, which are handled by the Office of the Chief Accountant. FHFA and Treasury work closely with the Enterprises to implement a variety of programs that respond to the dramatic downturn in housing finance markets. FHFA monitors the Enterprises’ implementation of Treasury’s Home Affordable Modification Program (HAMP). The Enterprises are acting as Treasury’s agents in implementing the program and ensuring that loan servicers comply with program requirements, with Fannie Mae as the program’s administrator and Freddie Mac as Treasury’s compliance agent for the program. FHFA has also provided advice and resources to Treasury in designing the Making Home Affordable Program. FHFA and Treasury stay in contact with the Enterprises on a daily basis about HAMP. FHFA executives meet with executives of both of the Enterprises on a weekly basis, and Treasury executives meet with the Enterprises’ leadership monthly. As a shareholder with respect to TARP recipients, the government has taken a variety of steps to monitor its investments in each company receiving exceptional assistance, while at the same time considering potential exit strategies. First, Treasury developed a set of guiding principles that outline its approach for monitoring investments in the companies. Second, OFS has hired asset managers to help monitor its investments in certain institutions, namely Citigroup and Bank of America. Third, Treasury’s Auto Team (or other Treasury investment professionals) manages investments in GM, Chrysler, and GMAC made under AIFP. Fourth, the Federal Reserve and FRBNY collaborate with Treasury in monitoring the Federal Reserve’s outstanding loan to and the government’s equity investments in AIG.
Finally, because Treasury’s ownership in the Enterprises is not part of TARP, staff outside of OFS are responsible for monitoring these investments. Given the varied forms of ownership interest and the complexity of many of the investments, Treasury will likely have to develop a unique exit strategy for each company. The divestment process, however, is heavily dependent on company management successfully implementing strategies discussed with their regulators and Treasury. Further, external factors, such as investor demand for the securities of the companies receiving exceptional assistance and broader market conditions, must be considered when implementing exit strategies. Because most of the shares are expected either to be sold in a public offering or to be redeemed or repaid using funds raised in the public markets, the financial markets must be receptive to government efforts. A public offering of shares, such as those considered for AIG subsidiaries American International Assurance Company, Ltd. and American Life Insurance Company, emphasizes the importance of market demand. Congressional action will be needed to determine the long-term structures and exit strategies for the Enterprises. Treasury has stated that it is a reluctant shareholder in the private companies it has assisted and that it wants to divest itself of its interests as soon as is practicable. In managing these assets, Treasury has developed the following guiding principles:

- Protect taxpayer investment and maximize overall investment returns within competing constraints.
- Promote the stability of financial markets and the economy by preventing disruptions.
- Bolster markets’ confidence to increase private capital investment.
- Dispose of the investments as soon as practicable and in a manner that minimizes the impact on financial markets and the economy.

Treasury relied on its staff and asset managers to monitor its investments in Bank of America and Citigroup.
Treasury officials said that the asset managers value the investments, including the preferred securities and warrants. This valuation process includes tracking the companies’ financial condition on a daily basis using credit spreads, bond prices, and other financial market data that are publicly available. Treasury also uses a number of performance indicators, including liquidity, capital levels, profit and loss, and operating metrics, to monitor the companies’ financial condition. The asset managers report regularly to Treasury and provide scores that track the overall credit quality of each company using publicly available information. For the bank holding companies, Treasury monitors the values of its investments, whereas the Federal Reserve and other regulators monitor the financial condition of these institutions as part of their role as supervisory authorities. While federal regulators routinely monitor the financial condition of the financial institutions they supervise, this oversight is separate from the monitoring Treasury engages in as an equity investor. This supervisory monitoring is related to the regulatory authority of these agencies and not to investments made under TARP. For example, bank regulators have had daily contact with Bank of America, Citigroup, and GMAC as they oversee the banks’ activities, help ensure their safety and soundness, and monitor their financial condition. This daily interaction involves discussions about the institutions’ financial condition and operations. Moreover, Federal Reserve and OCC officials said that they do not share supervisory information with Treasury to avoid a potential conflict of interest. Rather than requiring the development of an exit strategy by Treasury, Bank of America and Citigroup, with the approval of their federal banking regulators, repurchased preferred shares and trust preferred shares from Treasury in December 2009.
The holding companies and their regulators share the duty of identifying the appropriate time to repay the assistance provided through Treasury’s purchase of preferred equity. The regulators leveraged their onsite examiners to provide information on the overall health of the banks and their efforts to raise capital. In September 2009, Bank of America and Citigroup initiated the process by informing the Federal Reserve that they wanted to redeem their TARP funds. Federal Reserve officials told us that, in conjunction with FDIC and OCC, they reviewed Bank of America’s and Citigroup’s capital positions and approved the requests using primarily two criteria. First, the institutions had to meet the TARP redemption requirements outlined under SCAP. Second, they had to raise at least 50 percent of the redemption amount from private capital markets. In December 2009, Bank of America and Citigroup redeemed the preferred shares and the trust preferred shares, respectively, that Treasury held. In contrast to the process of unwinding trust preferred shares, in developing a divestment strategy for the common stock held in Citigroup, Treasury and its asset manager will evaluate market conditions and time the sale in an attempt to maximize taxpayers’ return. On December 17, 2009, Treasury announced a plan to sell its Citigroup common stock over a 6- to 12-month time frame. Treasury plans to use independent investment firms to assist in an orderly sale of these shares. A recent example of the difficulties that could be encountered occurred when Treasury announced plans to sell its Citigroup common shares in December 2009 following share sales by Bank of America and Wells Fargo. Market participants said that, at that time, the supply of bank shares in the market exceeded demand and thus lowered prices. Selling the Citigroup shares in that market environment would have recouped less money for the taxpayers, so Treasury postponed the proposed sales.
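The second of the two redemption criteria described above lends itself to a simple eligibility check. The sketch below is illustrative only: the function name and dollar figures are hypothetical, and only the 50 percent threshold comes from the text.

```python
# Sketch of the Federal Reserve's second TARP-redemption criterion as
# described above: at least half of the redemption amount must be raised
# from private capital markets. The function name and example figures are
# hypothetical; only the 50 percent threshold comes from the text.

def meets_private_capital_test(redemption_amount: float,
                               private_capital_raised: float) -> bool:
    """Return True if private capital covers at least 50% of the redemption."""
    return private_capital_raised >= 0.5 * redemption_amount

# Illustrative figures (in billions), not actual Bank of America or
# Citigroup data.
print(meets_private_capital_test(45.0, 25.0))  # True: 25 >= 22.5
print(meets_private_capital_test(45.0, 20.0))  # False: 20 < 22.5
```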
In March 2010, Treasury announced that it had hired Morgan Stanley as its sales agent to sell its shares under a prearranged written trading plan. In April 2010, Treasury further announced that Citigroup had filed the necessary documents with SEC covering Treasury’s planned sale. According to Treasury’s press release, it began selling common shares in the market in an orderly fashion under a prearranged written trading plan with Morgan Stanley. Initially, Treasury provided Morgan Stanley with discretionary authority to sell up to 1.5 billion shares under certain parameters outlined in the trading plan. However, Treasury said that it expects to provide Morgan Stanley with authority to sell additional shares beyond this initial amount. According to Treasury officials, Morgan Stanley is providing ongoing advice and ideas to Treasury regarding the disposition in order to assist Treasury in meeting its objectives. To manage its debt and equity investment in the automotive companies that received assistance and determine when and how to exit, Treasury monitors industry and broader economic data, as well as company-specific financial metrics. The information is important both for Treasury’s management of its equity in the companies and the repayment of the companies’ term loans, because it enables Treasury to determine how receptive the market will be to an equity sale—which affects the price at which Treasury can sell—and how likely it is that the companies will have sufficient liquidity to repay the loans. While the companies in the other categories discussed in this section also rely on the economic well-being of the country, consumer purchases of new cars are highly correlated with the health of the overall economy, making these broader measures especially relevant when discussing the automotive industry.
In addition to monitoring industry and broader economic data, Treasury reviews financial, managerial, and operational information that the companies are required to provide under their credit and equity agreements with Treasury. Treasury also monitors, as needed, information beyond that delineated in these agreements, such as updates on current events like the sale of the Saab brand. The companies provide this information, along with the items specified in the agreements, to Treasury in monthly reporting packages. Treasury officials said that they reviewed and analyzed the reports they received to identify issues, such as actual market share that lagged behind projected market share, excess inventory, or other signs that business might be declining. While Treasury has maintained that it will not direct the companies to take specific actions, it does notify the companies’ management and the Secretary of the Treasury if it sees any cause for concern in the financial reports, such as actual market share lagging behind projected market share. In addition to reviewing financial information, Treasury officials meet quarterly in person with the companies’ top management to discuss the companies’ progress against their own projections and Treasury’s projections. Important findings that result from the review of financial reports or management meetings are conveyed to key staff in OFS and other Treasury offices with responsibilities for managing TARP investments. This level of access was the result of the various legal and other agreements with the companies. Treasury will determine when and how to divest itself of its equity stake in GM, Chrysler, and GMAC. Treasury officials said that they would consider indicators such as profitability and prospects, cash flow, market share, and market conditions to determine the optimal time and method of sale.
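The kind of variance review described above can be sketched as a simple check. Everything in this sketch is an assumption on our part: the metric names, sample values, and thresholds do not represent Treasury's actual reporting package or review process.

```python
# Illustrative sketch of the variance review described above: compare a
# monthly report's actual figures against projections and flag concerns.
# Metric names, sample values, and thresholds are assumptions; they do
# not represent Treasury's actual reporting package.

sample_report = {
    "projected_market_share": 0.19,
    "actual_market_share": 0.17,
    "inventory_days_supply": 95,   # hypothetical days of dealer inventory
}

def flag_concerns(report, max_inventory_days=80):
    """Return a list of concerns of the kind the text describes Treasury raising."""
    concerns = []
    if report["actual_market_share"] < report["projected_market_share"]:
        concerns.append("market share lagging projection")
    if report["inventory_days_supply"] > max_inventory_days:
        concerns.append("excess inventory")
    return concerns

print(flag_concerns(sample_report))
# ['market share lagging projection', 'excess inventory']
```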
However, these efforts are complicated by the fact that Treasury shares ownership of GM and Chrysler with the Canadian government and other third parties. Treasury has yet to announce a formal exit plan but has publicly stated that a public offering of its shares in GM is likely, and, in June 2010, provided guidance on its role in the exploration of a possible initial public offering of the common stock of GM. Treasury is still considering both a public offering and a private sale of the common stock it owns in Chrysler. The companies’ term loans—the other component of Treasury’s investment—were scheduled to be repaid by July 2015 for GM and by June 2017 for Chrysler. In April 2010, GM repaid the remaining balance on the $6.7 billion loan from Treasury. GM made this payment using funds that remained from the $30.1 billion Treasury had provided in June 2009 to assist with its restructuring. Our November 2009 report on the auto industry noted that the value of GM and Chrysler would have to grow tremendously for Treasury to approach breaking even on its investment, requiring that Treasury temper any desire to exit as quickly as possible with the need to maintain its equity stake long enough for the companies to demonstrate sufficient financial progress. This report also included three recommendations related to Treasury’s approach to managing its assets and divesting itself of its equity stake in Chrysler and GM. First, we recommended that Treasury ensure that it has the expertise needed to adequately monitor and divest the government’s investment in Chrysler and GM, and obtain needed expertise where gaps are identified. Following this recommendation, Treasury hired two additional staff to work on the Auto Team, which is composed of analysts dedicated solely to monitoring Treasury’s investments in the companies. Treasury also hired Lazard LLC in May 2010 to act as an advisor on the disposition of Treasury’s investment in GM. 
Second, we recommended that Treasury report to Congress on its plans to assess and monitor the companies’ performance to help ensure that they are on track to repay their loans and to return to profitability. In response to this recommendation, Treasury stated that it already provides updates to TARP oversight bodies, including the Congressional Oversight Panel and SIGTARP, concerning the status of its investments and its role in monitoring the financial condition of Chrysler and GM, and that it will provide additional reports as circumstances warrant. Third, we recommended that Treasury develop criteria for evaluating the optimal method and timing for divesting the government’s ownership stake in Chrysler and GM. In response to this recommendation, Treasury stated that members of the Auto Team are experienced in selling stakes in private and public companies and are committed to maximizing taxpayer returns on Treasury’s investment. Treasury also stated that private majority shareholders typically do not reveal their long-term exit strategies in order to prevent other market participants from taking advantage of such information. However, we note that because Treasury’s stakes in the companies represent billions of taxpayer dollars, Treasury should balance the need for transparency about its approach with the need to protect certain proprietary information, the release of which could put the companies at a competitive disadvantage or negatively affect Treasury’s ability to recover the taxpayers’ investment. Moreover, Treasury could provide criteria for an exit strategy without revealing the precise strategy. Although GMAC is a bank holding company, it received assistance under AIFP. While the investment in GMAC was previously managed by Treasury’s Auto Team, it is currently managed by other Treasury officials. This team uses many of the same indicators that are used for bank holding companies. 
For instance, to monitor GMAC’s condition, Treasury’s team reviews liquidity and capital levels at the company and observes management’s strategic decision making. Because GMAC is not publicly traded and faces challenges in its transition to a more traditional bank holding company model, Treasury is more actively involved in managing and valuing its investment in the company. As of January 27, 2010, Treasury had not decided how it would divest its GMAC preferred shares or recommended a time frame for the divestment. The Federal Reserve and FDIC will be involved in the approval process that would allow GMAC to exit TARP by repurchasing its preferred shares. Treasury could recover its investment in GMAC preferred shares through the same process used to exit its preferred equity investments in Citigroup and Bank of America, but other options exist. For example, Treasury could sell its preferred shares to a third party, convert its preferred shares into common equity and sell those shares, or hold the preferred shares to maturity. Throughout 2009, the company continued to experience significant losses as it attempted to follow through on its strategies as a relatively new, independent company. As we have seen, Treasury purchased $3.8 billion in preferred shares ($2.54 billion of trust preferred shares and $1.25 billion of mandatory convertible preferred shares) from GMAC on December 30, 2009, because the company could not raise capital in the private markets to meet its SCAP requirements. According to Treasury officials, for its common stock in GMAC, Treasury is continuing to explore many options to exit its investment, including an initial public offering or other alternatives. Divesting itself of GMAC’s common stock will be more difficult because the shares are not currently publicly traded. 
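The components of the December 2009 GMAC purchase can be cross-checked with simple arithmetic. A minimal sketch, using only the dollar amounts stated above; the two preferred share components sum to roughly the $3.8 billion total reported:

```python
# Components of Treasury's December 30, 2009, preferred share purchase from
# GMAC, using the amounts stated above (in billions of dollars).
trust_preferred = 2.54
mandatory_convertible = 1.25

total = trust_preferred + mandatory_convertible
print(f"Total purchase: ${total:.2f} billion")  # rounds to the $3.8 billion reported
```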
Treasury could divest its GMAC common stock through multiple methods, including by making a public offering of its shares as company officials have suggested, selling the stock to a buyer or buyers through a private sale, or selling the stock back to the company as the company builds up capital. The Federal Reserve, FRBNY, and Treasury share responsibility for managing the government’s loan to and investment in AIG, but the trustees and Treasury must develop exit strategies for divesting their interest in AIG. The Federal Reserve and FRBNY have different roles than they do in overseeing the bank holding companies, because their relationship with AIG is not a supervisory one but a relationship between creditor and borrower. The Federal Reserve and FRBNY have acted to ensure that AIG maintains adequate capital levels after it suffered a severe loss of capital in 2008 that compromised its ability to sell certain businesses and maintain its primary insurance subsidiaries as viable businesses. A strengthened balance sheet, access to new capital, profitability, and lower risk levels are important in tracking AIG’s progress in returning to financial health. In order to monitor this progress, the Federal Reserve, FRBNY, and Treasury use various indicators, including liquidity, capital levels, profit and loss, and credit ratings. Although each of these entities monitors AIG independently, they share information on such indicators as cash position, liquidity, regulatory reports, and other reports as necessary. AIG is also responsible for providing periodic internal reports as specified in the FRBNY credit agreement and the Treasury securities purchase agreements. According to the AIG trustees, in monitoring AIG, they rely on information gathered by FRBNY, Treasury, and AIG, and their respective outside consultants, to avoid, to the extent possible, redoing work that has already been done at unnecessary cost. 
The AIG trustees are responsible for voting the trust stock, working with AIG and its board of directors to ensure corporate governance procedures are satisfactory, and developing a divestiture plan for the sale or other disposition of the trust stock. As we have seen, government assistance to AIG was provided by or is held by FRBNY, the AIG Trust, and Treasury, which are independently responsible for developing and implementing a divestment plan and must coordinate their actions. Over time, more of the government’s credit exposure has been converted to equity that potentially poses greater risk to the federal government. For example, Treasury purchased $40 billion of preferred shares and the proceeds were used to pay down the balance of the FRBNY Revolving Credit Facility. More recently, in December 2009, FRBNY accepted preferred equity interest in two AIG-created special purpose vehicles that own American International Assurance Company, Ltd and American Life Insurance Company—AIG’s leading foreign life insurance companies. In exchange, FRBNY reduced the amount AIG owed on the Revolving Credit Facility by $25 billion. Repayment of AIG’s remaining $27 billion debt will depend, in part, on the markets’ willingness to finance the company with new funds following its return to financial health. According to officials at Treasury and the Federal Reserve, AIG must repay the FRBNY credit facility before the AIG Trust can, as a practical matter, divest its equity shares. As a result, the AIG trustees said that they would begin developing an exit strategy once AIG had repaid its debt to FRBNY, which is due no later than September 13, 2013. According to the AIG trustees and Treasury officials, while Treasury and the AIG Trust are responsible for developing independent exit strategies, they plan to coordinate their efforts. The Treasury team that manages the AIG investment has been running scenarios of possible exit strategies but has not decided which strategy to employ. 
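The pay-down arithmetic above can be made explicit. A minimal sketch using only the figures stated in this section; the implied pre-transaction facility balance is derived here, not a number reported in the text:

```python
# Revolving Credit Facility arithmetic, using the figures stated above:
# FRBNY reduced the amount AIG owed by $25 billion in the December 2009
# special purpose vehicle transaction, leaving $27 billion of remaining debt.
BILLION = 1_000_000_000

spv_reduction = 25 * BILLION
remaining_debt = 27 * BILLION

# Back-solved balance just before the transaction (derived here, not a
# figure reported in the text).
implied_prior_balance = remaining_debt + spv_reduction
print(f"Implied pre-transaction balance: ${implied_prior_balance // BILLION} billion")
```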
The AIG Trust is considering a number of options for divesting the Series C Preferred Stock, one of which is to convert the Series C Preferred Stock to common stock and divest that common stock through a public offering or a private sale. Treasury has multiple options available for divesting its preferred shares, including having AIG redeem Treasury’s shares, converting the shares to common stock that would subsequently be sold in a public offering, or selling the shares to an institutional buyer or buyers in a private sale. According to Treasury officials, Treasury is devoting significant resources to planning the eventual exit strategy from its AIG investments. When AIG will be able to fully repay the government for its assistance is currently unknown because the federal government’s exposure to AIG is increasingly tied to the future health of AIG, its restructuring efforts, and its ongoing performance as more debt is exchanged for equity. Therefore, as we noted in our April 2010 report on AIG, the government’s ability to fully recoup the federal assistance will be determined by the long-term health of AIG, the company’s success in selling businesses as it restructures, and other market factors such as the performance of the insurance sectors and the credit derivatives markets that are beyond the control of AIG or the government. In March 2010, the Congressional Budget Office estimated that the financial assistance to AIG may cost Treasury as much as $36 billion, compared with the $30 billion that Treasury estimated in September 2009. While AIG is making progress in reducing the amount of debt that it owes, this is primarily due to the restructuring of the composition of government assistance from debt to equity. FHFA, in its roles as conservator, safety and soundness supervisor, and housing mission regulator for the Enterprises, has adopted several approaches to monitoring their financial performance and operations. 
FHFA officials said that they have monitored the Enterprises’ financial performance in meeting the standards established in the scorecards and will continue to do so. Further, FHFA monitors, analyzes, and reports on the Enterprises’ historical and projected performance on a monthly basis. FHFA provides information based on public and nonpublic management reports, with the fair value of net assets defined in accordance with generally accepted accounting principles. In addition, FHFA officials said that the agency’s safety and soundness examiners are located at the Enterprises on a full-time basis and also monitor their financial performance, operations, and compliance with laws and regulations by conducting examinations, holding periodic meetings with officials, and reviewing financial data, among other things. FHFA is significantly involved as conservator with the Enterprises when it comes to reporting financial information and requesting funding from Treasury. FHFA puts together a quarterly request package that is reviewed through several levels of management and ultimately signed off on by the Acting Director of FHFA before it is sent to the Under Secretary for Domestic Finance at Treasury for approval as the official request for funding. Although the structure of the assistance to the Enterprises has remained constant, the amount of assistance has steadily increased. Treasury increased the initial funding commitment cap from $100 billion to $200 billion per Enterprise in February 2009, and the decision was made in December 2009 to lift the caps to include losses from 2010 through 2012. Treasury stated it raised the caps when it did because its authority to purchase preferred shares under HERA expired on December 31, 2009. 
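The effect of the February 2009 cap increase on Treasury's maximum combined exposure is simple to tabulate. A minimal sketch, using only the per-Enterprise cap figures stated above:

```python
# Treasury's funding commitment caps for the Enterprises, as described above
# (amounts in billions of dollars). The December 2009 decision later removed
# the caps for losses from 2010 through 2012.
initial_cap = 100   # per Enterprise, under the original agreements
raised_cap = 200    # per Enterprise, after the February 2009 increase
enterprises = 2     # Fannie Mae and Freddie Mac

initial_total = initial_cap * enterprises
raised_total = raised_cap * enterprises
print(f"Combined cap rose from ${initial_total} billion to ${raised_total} billion")
```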
While Treasury did not believe the Enterprises would require the full $200 billion authorized per Enterprise prior to December 31, 2009, it lifted the caps to reassure the markets that the government would stand behind the Enterprises going forward. At the end of first quarter 2010, Treasury had purchased approximately $61.3 billion in Freddie Mac preferred stock and $83.6 billion in Fannie Mae preferred stock under the agreements. While FHFA and Treasury are monitoring the Enterprises’ financial performance and mission achievement through a variety of means, exit strategies for the Enterprises differ from those for the other companies that have also received substantial government assistance. Given the ongoing and significant financial deterioration of the Enterprises—the Congressional Budget Office projected that the operations of the Enterprises would have a total budgetary cost of $389 billion over the next 10 years—FHFA and other federal officials have said that the Enterprises will probably not be able to return to their previous organizational structure as publicly owned private corporations with government sponsorship. Many observers have stated that Congress will have to reevaluate the roles, structures, and performance of the Enterprises and consider options to facilitate mortgage financing while mitigating safety and soundness and systemic risk concerns. In a September 2009 report, we identified and analyzed several options for Congress to consider in revising the Enterprises’ long-term structures. These options generally fall along a continuum, with some overlap in key areas. Establishing the Enterprises as government corporations or agencies. Under this option, the Enterprises would focus on purchasing qualifying mortgages and issuing mortgage-backed securities but eliminate their mortgage portfolios. FHA, which insures mortgages for low-income and first-time borrowers, could assume additional responsibilities for promoting homeownership for targeted groups. 
Reconstituting the Enterprises as for-profit corporations with government sponsorship but placing additional restrictions on them. While restoring the Enterprises to their previous status, this option would add controls to minimize risk. For example, it would eliminate or reduce mortgage portfolios, establish executive compensation limits, or convert the Enterprises from shareholder-owned corporations to associations owned by lenders. Privatizing or terminating the Enterprises. This option would abolish the Enterprises in their current form and disperse mortgage lending and risk management throughout the private sector. Some proposals involve the establishment of a federal mortgage insurer to help protect mortgage lenders against catastrophic mortgage losses. While there is no consensus on what the next steps should be, whatever actions Congress takes will have profound impacts on the structure of the U.S. housing finance system. The Enterprises’ still-dominant position in housing finance is an important consideration for any decision to establish a new structure. Finally, some of the companies receiving exceptional assistance have taken a number of steps to repay the financial assistance owed to the government and to repurchase their preferred shares, in light of the significant restrictions put in place to encourage companies to begin repaying and exiting the programs as soon as practicable. At the same time, the government continues to take steps to establish exit strategies for the remaining companies, and in some cases the federal government’s financial exposure to these companies may exist for years before the assistance is fully repaid. In other cases, the federal government may not recover all of the assistance provided. For example, where the government has an equity interest, its ability to recover what has been invested depends on a variety of external factors that are beyond the control of the institution and the government. 
Moreover, as of June 1, 2010, the Enterprises have continued to borrow from Treasury. Ongoing monitoring of the institutions and the government’s role continues to be important, and additional insights may emerge as aspects of the crisis continue to evolve, including mortgage foreclosures and how best to continue to stabilize housing markets. Assistance that the federal government provided in response to the recent financial crisis highlights the challenges associated with government intervention in private markets. Building on lessons learned from the financial crises of the 1970s and 1980s, we identified guiding principles at that time that serve as a framework for evaluating large-scale federal assistance efforts, including the government’s actions during the most recent crisis, and provide guidelines for assisting failing companies. These principles include (1) identifying and defining the problem, (2) determining national interests and setting clear goals and objectives that reflect them, and (3) protecting the government’s interests. The government generally adhered to these principles during this recent crisis. But because of its sheer size and scope, the crisis presented unique challenges and underscored a number of lessons to consider when the government provides broad-based assistance. First, widespread financial problems, such as those that occurred in this crisis, require comprehensive, global actions that must be closely coordinated. For example, Treasury’s decision to provide capital investments in financial institutions was driven in part by similar actions in other countries. Second, the government’s strategy for managing its investments must include plans to mitigate perceived or potential conflicts that arise from the government’s newly acquired role as shareholder or creditor and its existing role as regulator, supervisor, or policymaker. 
Acquiring an ownership interest in private companies can help protect taxpayers by enabling the government to earn returns when it sells its shares and the institutions repurchase their shares or redeem their warrants. But this scenario can also create the potential for conflict if, for example, public policy goals are at odds with the financial interests of the firm receiving assistance. Further, the federal government’s intervention in private markets requires that those efforts be transparent and effectively communicated so that citizens understand policy goals, public expenditures, and expected results. The government’s actions in the recent crisis have highlighted the challenges associated with achieving both. The government also needs to establish an adequate oversight structure to help ensure accountability. Finally, the government must take steps to mitigate the moral hazard that can arise when it provides support to certain entities that it deems too big or too systemically significant to fail. Such assistance may encourage risk-taking behavior in other market participants by fostering the belief that the federal government will always be there to bail them out. Building on lessons learned from the financial crises of the 1970s and 1980s, we identified guiding principles to help serve as a framework for evaluating large-scale federal assistance efforts and provided guidelines for assisting failing companies. Identifying and defining the problem, including separating issues that require immediate response from longer-term structural issues. Determining national interests and setting clear goals and objectives that reflect them. Protecting the government’s, and thus the taxpayer’s, interests by working to ensure not only that financial markets continue to function effectively, but also that any investments made provide the highest possible return. 
This includes requiring concessions from all parties, placing controls over management, obtaining collateral when feasible, and being compensated for risk. During the recent financial crisis, the government faced a number of challenges in adhering to these three principles—which we identified during earlier government interventions in the private markets—when it provided financial assistance to troubled companies. First, the scope and rapid evolution of this crisis complicated the process of identifying and defining the problems that needed to be addressed. Unlike past crises that involved a single institution or industry, the recent crisis involved problems across global financial markets, multiple industries, and large, complex companies and financial institutions. For example, problems in mortgage markets quickly spread to other financial markets and ultimately to the broader economy. As the problems spread and new ones emerged, the program goals Treasury initially identified often seemed vague, overly broad, and conflicting. Further, because the crisis affected many institutions and industries, Treasury’s initial responses to each affected institution often appeared ad hoc and uneven, leading to questions about its strategic focus and the transparency of its efforts. During a financial crisis, identifying and defining problems involves separating out those issues that require an immediate response from structural challenges that will take longer to resolve. The problems in the most recent crisis evolved as it unfolded, requiring that the government’s approach change in tandem. Treasury created several new programs under TARP to address immediate issues, working to stabilize bank capital in order to spur lending and restart capital markets and seeking ways to help homeowners facing foreclosure. 
While banks have increased their capital levels and these companies have begun repaying the government assistance, constructing relevant solutions to address the foreclosure crisis has proved to be a long-term challenge. The recently enacted financial services reform legislation requires that systemically important financial companies be subject to enhanced standards, including risk-based capital requirements, liquidity requirements, and leverage limits that are stricter than the standards applicable to companies that do not pose similar risk to financial stability. Also, the law creates a procedure for the orderly liquidation of financial companies if the Secretary of the Treasury makes certain determinations, including a determination that the failure of the company and its resolution under otherwise applicable law would have a serious adverse effect on financial stability. Second, determining national interests and setting clear goals and objectives that reflect them requires choosing whether a legislative solution or other government intervention best serves the national interest. During the recent crisis the federal government determined that stabilizing financial markets, housing markets, and individual market segments required intervening to support institutions it deemed to be systemically significant. It also limited its intervention, stating that it would act only as a reluctant shareholder and not interfere in the day-to-day management decisions of any company, would exercise only limited voting rights, and would ensure that the assistance provided would not continue indefinitely. Further, Treasury emphasized the importance of having strong boards of directors to guide these companies, as discussed earlier. While the U.S. government developed goals or principles for holding large equity interests in private companies, its goals for managing its investment have at times appeared to conflict with each other. 
Specifically, Treasury announced that it intended to protect the taxpayer investment and maximize overall investment returns, and that it also intended to dispose of the investments as soon as it was practicable to do so. However, protecting the taxpayer investment may be at odds with divesting as soon as possible. For example, holding on to certain investments may bring taxpayers a higher return than rapid divestment. Recognizing the tension among these goals, Treasury has tried to balance these competing interests, but ultimately it will have to decide which among them is most important by evaluating the trade-offs. Finally, protecting the government’s and taxpayers’ interest is an essential objective when creating large-scale financial assistance programs that put government funds and taxpayer dollars at risk of loss. Generally consistent with this principle, the government took four primary actions that were designed to minimize this risk. First, a priority was gaining concessions from others with a stake in the outcome—for example, from management, labor, and creditors—in order to ensure cooperation in securing a successful outcome. As we have pointed out previously, as a condition of receiving federal financial assistance, TARP recipients (AIG, Bank of America, Citigroup, GMAC, Chrysler, and GM) had to agree to limits on executive compensation and dividend payments, among other things. Moreover, GM and Chrysler had to use their “best efforts” to reduce their employees’ compensation to levels similar to those at other major automakers that build vehicles in the United States, which resulted in concessions from the United Auto Workers on wages and work rules. Second, exerting control over management became necessary in some cases—including approving financial and operating plans and new major contracts—so that any restructuring plans would have realistic objectives and hold management accountable for achieving results and protecting taxpayer interests. 
For example, under AIFP, Chrysler and GM were required to develop restructuring plans that outlined their path to financial viability. The government initially rejected both companies’ plans as not being aggressive enough but approved revised plans that included restructuring the companies through bankruptcy. The Federal Reserve has also reviewed AIG’s divestiture plan and routinely monitors its progress and financial condition. Finally, as conservator, FHFA maintains substantial control over the business activities of the Enterprises. Third, the government sought to ensure that it was in a first-lien position with AIG, GM, and Chrysler, which received direct government loans, in order to recoup the maximum amounts of taxpayer funds. Treasury was not able to fully achieve this goal in the Chrysler initial loans because the company had already pledged most of its collateral, leaving little to secure the federal government’s loans. Treasury was, however, able to obtain a priority lien position with respect to its loan to Chrysler post-restructuring. FRBNY was able to obtain collateral against its loans to AIG. Fourth, the government sought compensation for risk through fees and equity participation, routinely requiring dividends on the preferred shares it purchased, charging fees and interest on the loans, and acquiring preferred shares and warrants that provided equity. For example, the government required Bank of America and Citigroup to provide warrants to purchase either common stock or additional senior debt instruments, such as preferred shares, under their financial agreements. 
As a condition for providing an $85 billion revolving loan commitment, for example, FRBNY initially required that AIG pay an initial gross commitment fee of 2 percent (approximately $1.7 billion) and interest on the outstanding balance, plus a fee on the unused commitment, and, in exchange, issue preferred shares (convertible to approximately 79.8 percent of issued and outstanding shares of common stock) into a trust for the benefit of the U.S. Treasury. Treasury’s contractual agreements with the Enterprises detail the terms of the preferred shares and require them to pay commitment fees, but Treasury has not imposed these fees due to the Enterprises’ financial condition. The size and scope of the recent crisis were unprecedented and created challenges that highlighted principles beyond those based upon the lessons learned from the 1970s and 1980s. These include ensuring that actions are strategic and coordinated both nationally and internationally, addressing conflicts that arise from the government’s often competing roles and the likelihood of external influences, ensuring transparency of actions and communicating effectively with the Congress and the public, ensuring that a system of accountability exists for actions taken, and taking measures to reduce moral hazard. Financial crises that are international in scope require comprehensive, global actions and government interventions that must be closely coordinated by the parties providing assistance—including agencies of the U.S. government as well as foreign governments—to help ensure that limited resources are used effectively. In prior work, we reported that overseeing large financial conglomerates has proven challenging, particularly in regulating their consolidated risk management practices and identifying and mitigating the systemic risks they pose. Although the activities of these large firms often cross traditional sector boundaries, financial regulators under the current U.S. 
regulatory system have not always had full authority or sufficient tools and capabilities to adequately oversee the risks that these financial institutions posed to themselves and other institutions. We have laid out several elements that should be included in a strengthened regulatory framework, including using international coordination to address the interconnectedness of institutions operating across borders and helping ensure regulatory consistency to reduce negative competitive effects. Initial actions during the crisis were taken and coordinated by the Federal Reserve, Treasury, and FDIC, and some were made in conjunction with similar actions by foreign governments. For example, the United States and several foreign governments took a variety of actions, including providing liquidity and capital infusions and temporarily banning the short selling of financial institution stock. The initial government actions to support the Enterprises, taken on September 6, 2008, were prompted by their deteriorating financial condition; with worldwide debt and other financial obligations totaling $5.4 trillion, their default on those obligations would have significantly disrupted the U.S. financial system and the global system. Shortly afterwards, as several other large financial firms came under heavy pressure from creditors, counterparties, and customers, and prior to the creation of TARP, the Federal Reserve invoked its authority under Section 13(3) to create several facilities to support the financial system and institutions that the government otherwise would not have been able to assist. The global nature of these companies added to the challenges for the federal government and international community as it resolved these issues. Concerted federal government attempts to find a buyer for Lehman Brothers or to develop an industry solution failed to address its financing needs. 
According to Federal Reserve officials, the company’s available collateral was insufficient to obtain a Federal Reserve secured loan of sufficient size to meet its funding needs. In the case of AIG, which contacted FRBNY on September 12, 2008, the U.S. government took action because of the company’s relationships with other global financial institutions and coordinated with regulators in a number of countries. According to AIG’s 2008 10-K, AIG had operations in more than 130 countries and conducted a substantial portion of its general insurance business and a majority of its life insurance business outside the United States. Because of its global reach, the company was subject to a broad range of regulatory and supervisory jurisdictions, making assisting the company with its divestment plans extremely difficult. In light of AIG’s liquidity problems, AIG and its regulated subsidiaries were subject to intense review, with multiple foreign regulators taking supervisory actions against AIG. On September 16, 2008, the Federal Reserve and Treasury determined that the company’s financial and business assets were adequate to secure an $85 billion line of credit, enough to avert its imminent failure. In October 2008, in an unprecedented display of coordination, six central banks—the Federal Reserve, European Central Bank, Bank of England, Swiss National Bank, Bank of Canada, and the central bank of Sweden—acted together to cut short-term interest rates. In a coordinated response, the Group of Seven finance ministers and central bank governors announced comprehensive plans to stabilize their banking systems, making a critical promise not to let systemically important institutions fail by offering debt guarantees and capital infusions and by increasing deposit insurance coverage. 
Within 2 weeks of the enactment of TARP, and consistent with similar actions by several foreign governments and central banks, Treasury—through the newly established Office of Financial Stability—announced that it would make available $250 billion to purchase senior preferred shares in a broad array of qualifying institutions, providing additional capital that would help enable U.S. institutions to continue lending. Treasury provided $125 billion in capital purchases to nine of the largest public financial institutions, including Bank of America and Citigroup, which the federal banking regulators and Treasury considered systemically significant to the operation of the financial system. Together, these nine financial institutions held about 55 percent of U.S. banking assets and had significant global operations—including retail and wholesale banking, investment banking, and custodial and processing services—requiring coordinated action with a number of foreign governments. The government’s ownership of common shares in private companies can create various conflicts and competing goals that must be managed. First, having an ownership interest in a private company gives the government voting rights that can influence the firm’s business activities. However, Treasury has limited its voting rights to matters that directly pertain to its responsibility under EESA to manage its investments in a manner that protects the taxpayer. For example, Treasury used its voting rights to elect directors to Citigroup’s board, approve the issuance of common shares, and approve a reverse stock split. Likewise, Treasury has designated directors to serve on the boards of directors of Chrysler, GM, and GMAC. Second, when the government is both investor and regulator for the same company, federal agencies may find themselves in conflicting roles. 
For instance, as noted in our April 2010 report on Chrysler and GM pensions, until Treasury either sells or liquidates the equity it acquired in each company, the government’s role as shareholder creates potential tensions with its roles as pension regulator and insurer. These tensions can be illustrated by the conflicting pressures that would likely arise in two critical and interrelated scenarios: (1) deciding when to sell the government’s shares of stock and (2) responding to a decline in pension funding. If either or both companies return to profitability, the government’s multiple roles are less likely to result in any perceived conflicts. However, if either company had to be liquidated, the government would face these perceived conflicts, because Treasury would have to make decisions relating to the value of its investments and the Pension Benefit Guaranty Corporation would need to make decisions related to the companies’ pensions. Additionally, on December 11, 2009, the Internal Revenue Service, a bureau within Treasury, issued a notice stating that, under certain circumstances, selling stock that Treasury received under any TARP program would not trigger an ownership change. As a result, when Treasury sells such shares there is no change in ownership for tax purposes, and the companies are not required to make the changes that would otherwise limit net operating losses after a change in ownership. Some in Congress have argued that this action created an additional subsidy for the financial institutions that received federal assistance and that, by reducing potential tax revenue, it conflicts with Treasury’s duty to take actions that are in the best interest of taxpayers. The assistance to the Enterprises illustrates the potential challenges that can arise when the government uses its assistance to further its public policy goals—in this case, managing support for the home mortgage markets alongside efforts to preserve and conserve assets. 
Specifically, Treasury is pursuing public policy goals to address mortgage foreclosures through the Enterprises, but these actions could also negatively affect the Enterprises’ financial condition. For example, the Enterprises are participating in the administration’s foreclosure prevention programs by modifying the terms of mortgages insured or owned by the Enterprises to prevent avoidable foreclosures by lowering borrowers’ monthly mortgage payments. Treasury and FHFA have argued that such programs, by improving borrowers’ financial condition, will also benefit the Enterprises, which have large holdings of delinquent mortgages. However, the Enterprises have stated in their financial disclosures that these programs may result in significant costs over time, such as incentive payments made to servicers and borrowers over the life of a modification and losses associated with borrowers redefaulting on modified mortgages. Whether loan modifications will benefit both borrowers and the Enterprises or further jeopardize the Enterprises’ financial condition is unknown and may depend in part on how the programs are implemented and overseen by FHFA and Treasury over time. Overseeing the programs aimed at reducing costs to taxpayers remains a challenge. Being both a creditor and a shareholder in private companies creates another conflict for the government. As a major creditor, the government is more likely to be involved in an entity’s operations than if it were acting only as a shareholder, and operational decisions that it imposes could affect returns on taxpayer investments. For example, the government is currently both a creditor and shareholder in Chrysler and was both a creditor and shareholder in GM until GM repaid its $6.7 billion loan on April 20, 2010. Treasury made initial loans to the companies to help them avert bankruptcy and then provided financing that was converted to equity to help them through the bankruptcy and restructuring process. 
As a creditor, the government obtained rights to impose requirements on the companies’ business, including requiring them to produce a certain portion of their total production in the United States. These requirements, established by Treasury as creditor, could negatively affect the companies’ stock price, which in turn could negatively affect the return on investment earned by Treasury as a shareholder. To manage its different investments, the government has used different strategies—direct management and a trust arrangement—which have different implications for the government and the private companies and may affect how easily conflicts of interest can be addressed. Directly managing the investments offers two significant advantages. First, it affords the government the greatest amount of control over the investment. Second, having direct control over investments better enables the government to manage them as a portfolio, as Treasury has done under CPP. However, such a structure also has disadvantages. For example, as we have seen, having the government both regulate a company and hold an ownership interest in it can create a real or perceived conflict of interest. A direct investment also requires that the government have staff with the requisite skills to manage it. For instance, as long as Treasury maintains direct control of its equity investments in Citigroup, Chrysler, and GM, among others, it must have staff, or hire contractors, with the necessary expertise in these specific types of companies. In previous work, we raised concerns about Treasury’s ability to retain the expertise needed to assess the financial condition of the auto companies and to develop strategies for divesting the government’s interests, given the substantial decline in Treasury’s staff resources and the lack of dedicated staff overseeing its investments in the automakers. In contrast, the government has used a trust arrangement to manage its investment in AIG. 
Such an arrangement puts the government’s interest in the hands of an independent third party and helps to avoid potential conflicts that could stem from the government having both regulatory responsibilities for and ownership interests in a company. A trust also helps mitigate perceptions that actions taken with respect to TARP recipients are politically motivated or based on any “inside information” received from the regulators. While Treasury has interpreted TARP as prohibiting the placement of TARP assets in a trust structure, FRBNY created a trust to manage the government’s ownership interest in AIG before TARP was established. Finally, the varied and sometimes conflicting roles of the government as an owner, creditor, regulator, and policymaker also potentially subject private companies to greater government scrutiny and pressure than they might otherwise have experienced. In particular, the government’s investments in these companies increase the level of government and public oversight and scrutiny the companies receive, as policymakers, elected officials, and regulators work to ensure that taxpayer interests are protected. The companies may also be subject to pressure from government officials to reconsider or alter business decisions that affect the companies’ bottom lines. For example, Chrysler and GM faced pressure to reinstate many of the auto dealerships that had been slated for closure. Government involvement could come from many different sources and in many different forms, including legislative actions and direct communications. To gauge the nature and scope of external influences, we interviewed officials from the six companies that received exceptional financial assistance and reviewed legislation that would place requirements or restrictions on these companies. We also reviewed letters sent to Chrysler and GM officials from legislative and executive branch officials and selected state government officials. 
We found that the issues receiving the most congressional scrutiny were executive compensation, transparency and accountability, mortgage modifications, and closures of automobile dealerships.

Executive compensation. We identified 24 bills that members of Congress introduced in calendar years 2008 and 2009 involving restrictions on executive compensation or additional taxation of executive compensation at companies receiving TARP assistance. Also, AIG officials stated that the majority of congressional contacts they received related to executive compensation and bonuses.

Transparency and accountability. We identified 16 bills introduced in calendar years 2008 and 2009 that would require the companies to take steps that would result in increased transparency or accountability, such as reporting on how TARP funds were used. For example, the TARP Transparency Reporting Act would require TARP recipients to report to Treasury on their use of TARP funds.

Mortgage modifications. Officials from the companies whose business includes mortgage financing told us that one of the most common subjects of congressional correspondence was requests for modifications to specific constituents’ mortgages.

Automobile dealerships. About 60 percent of the bills we identified that specifically targeted the auto industry sought to curtail or prevent the closure of automobile dealerships. One of these bills, which established an arbitration process for dealerships that want to appeal a closure decision, became public law. Furthermore, according to the letters from members of Congress that Chrysler and GM provided to us, dealership closures were the most common subject. The letters usually asked either for an explanation of how the closure decisions had been made or for reconsideration of the closure of a particular dealership. (See appendix III for more information on the nature and scope of communication with the auto industry.) 
Company officials we interviewed told us that the level of government involvement—from requests for appearances at congressional hearings to letters from elected officials—had increased since their companies had requested and received financial assistance from the government. Company officials told us that this involvement was to be expected and did not cause them to make decisions that conflicted with their companies’ best interests. However, these officials also stated that addressing the government’s involvement, such as responding to letters or requests for information, required increased company resources. Federal government intervention in private markets requires not only that these efforts be transparent but also that they include a strategy to help ensure open and effective communication with stakeholders, including Congress and taxpayers. The government’s actions in the recent crisis have highlighted the challenges associated with achieving both of these objectives. Throughout the crisis, Congress and the public often stated that the government’s actions appeared vague, overly broad, and conflicted. For example, Treasury’s initial response to the crisis focused on providing assistance to individual institutions and appeared ad hoc and uneven, leading to questions about its strategic focus and the transparency of its efforts. Specifically, questions about the government’s decision to assist Bear Stearns and AIG, but not Lehman Brothers, continued months after the decisions were made. 
Moreover, while TARP was created to provide a comprehensive approach to addressing the unfolding crisis, Treasury’s decision, weeks after the passage of EESA, to change the focus of the program from purchasing mortgage-backed securities and whole loans to injecting capital into financial institutions caught many in Congress, the markets, and the public by surprise and adversely affected these parties’ understanding of the program’s goals and priorities, which may have undermined the program’s initial effectiveness. In general, transparency means more than simply reporting available information to interested parties; it involves such things as providing clearly articulated guidelines, decision points, and feedback mechanisms to help ensure an adequate understanding of the matters at hand. For the recent actions, transparency would include providing information on how the companies were to be monitored and on the results of those activities. However, when considering any federal intervention, part of the decision-making process includes identifying what information can and should be made public and balancing the public’s “need to know” against concerns about disclosing proprietary information in a competitive market. For example, while disclosing detailed information about Treasury’s plans to sell shares of company stock may not be appropriate, the government should communicate its purpose in intervening in the private market and its approach for evaluating the success of any federal action. Specifically, making information available to the public on the purpose of a federal intervention and the decision to intervene could help ensure that the public understands the implications of not intervening and the expected results of the government’s actions. 
While EESA required Treasury to report information about TARP activities, Treasury’s failure to adequately communicate the rationale for its actions and decisions early on caused confusion about the motivations behind them, and this confusion long plagued the program. Treasury’s lack of an effective communication strategy was, in part, a result of the unfolding nature of the crisis, but even so, Treasury did not effectively communicate how the unfolding crisis was shaping its response. For example, the multifaceted nature of the crisis resulted in numerous TARP programs to address specific problems in the markets; however, Treasury did not establish or adequately explain some of the programs until after assistance had already been announced. Specifically, Treasury announced assistance to Citigroup, Bank of America, and AIG before TIP and SSFI—now called the AIG Assistance Program—were established and announced in January 2009 and November 2008, respectively. Since the inception of TARP, we have recommended that Treasury take a number of actions aimed at developing a coherent communication strategy for TARP, including building understanding of and support for the various components of the program. While the actions we suggested were intended to address challenges associated with TARP (such as hiring a communications officer, integrating communications into TARP operations, scheduling regular and ongoing contact with congressional committees and members, holding town hall meetings with the public across the country, establishing a council of advisors, and leveraging available technology), most of these suggestions would be applicable when considering a communication strategy for any federal intervention. 
An effective communication strategy is especially important during rapidly changing market events and could help the public understand the policy goals that the government is trying to achieve and its rationale for spending public funds. When considering government assistance to private companies, providing accountability for taxpayer funds is imperative. The absence of a system of accountability increases the risk that the interests of the government and taxpayers may not be adequately protected and that a program’s objectives may not be achieved efficiently and effectively. We first highlighted the importance of accountability in implementing TARP in December 2008, a point that the Congressional Oversight Panel and SIGTARP have since reiterated. Specifically, we noted the importance of establishing oversight structures, including monitoring and other internal controls that can help prevent and detect fraud. Federal action in the midst of a crisis will undoubtedly require that assistance be provided at the same time that programs are being established. In December 2008, we reported that a robust oversight system with internal controls specifically designed to deal with the unique and complex aspects of TARP would be key to helping OFS management achieve the desired results. For example, OFS faced the challenge of developing a comprehensive system of internal controls at the same time that it was reacting quickly to changing financial market events and establishing the program. One area that took time to develop was a plan to help ensure that participating institutions adhered to program requirements and to monitor companies’ compliance with certain requirements, such as executive compensation and dividend restrictions. Therefore, when making any decision to intervene in private markets, Congress and the government must take steps to provide an appropriate oversight structure. 
While the federal government’s assistance may have helped to contain a more severe crisis by mitigating potential adverse systemic effects, it also created moral hazard—that is, it may encourage market participants to expect similar emergency actions in the future, thus weakening private or market-based incentives to properly manage risks and creating the perception that some firms are too big to fail. We recently reported that while assisting systemically significant failing institutions may have helped to contain the crisis by stabilizing these institutions and limiting potentially systemic problems, it also may have exacerbated moral hazard. According to regulators and market observers, such assistance may weaken the incentives for large uninsured depositors, creditors, and investors to discipline large, complex firms that are deemed too big to fail. In March 2009, Federal Reserve Chairman Bernanke told the Council on Foreign Relations that the market perception that a particular institution is too big to fail has many undesirable effects. He explained that such perceptions reduce market discipline, encourage excessive risk-taking by the firm, and provide artificial incentives for firms to grow. He also noted that these beliefs do not create a level playing field, because smaller firms may not be regarded as having implicit government support. Similarly, others have noted how such perceptions may encourage risk-taking; for example, some large financial institutions may be given access to the credit markets on favorable terms without consideration of their risk profiles. Before a financial crisis occurs, the financial regulatory framework can serve an important role in restricting the extent to which institutions engage in the excessive risk-taking that results from weakened market discipline. 
For instance, regulators can take preemptive steps to mitigate moral hazard by taking the regulatory actions necessary to help ensure that companies have adequate systems in place to monitor and manage risk-taking. Any regulatory actions that the government takes to help ensure strong risk management systems at companies of all sizes would help to lessen the need for government intervention. In general, mitigating moral hazard requires ensuring that any government assistance includes terms that make it a last resort, undesirable except in the most dire circumstances, and that specify when the assistance will end. During the recent crisis, the government included provisions that attached such costs to the provision of assistance, including limiting executive compensation, requiring dividends, and acquiring an ownership interest. Further, while uncertainty about the duration of the crisis makes it difficult to specify timetables for phasing out assistance and investments, it is important to provide a credible “exit strategy” to prevent further disruption in the financial markets when withdrawing government guarantees. While Treasury has articulated its exit strategy for some of the companies we reviewed, the government’s plans for divesting itself of its investments in AIG and the Enterprises are less clear. Critics have expressed concern that the government’s involvement in the private sector creates moral hazard and, by perpetuating the belief that some institutions are too big or too interconnected to fail, can encourage risk-taking. While the debate about whether the government should intervene in private markets to avert a systemic crisis continues, only time will reveal whether the government will again be faced with that prospect. As with other past crises, experience from the most recent crisis offers additional insights to guide government action, should it ever be warranted. 
Specifically, the government could protect the taxpayers’ interest in any crisis not only by continuing to follow the principles that we discussed earlier (i.e., identifying and defining the problem, determining a national interest and setting clear goals, and protecting the government’s and taxpayers’ interests) but also by adhering to five additional principles based on the federal government’s experience with the current crisis:

- Develop a strategic and coordinated approach when comprehensive and global governmental action is required.
- Ensure that the government has a strategy for managing any investments resulting from its intervention, in order to help mitigate perceived or potential conflicts and manage external influence.
- Ensure that actions are transparent and effectively communicated so that the public understands what actions are being taken and for what purpose.
- Establish an adequate oversight structure to ensure accountability.
- Take steps to mitigate moral hazard, not only by ensuring that regulatory and market-based structures limit risk-taking before a crisis occurs, but also by using stringent requirements to create strong disincentives to seek federal assistance.

We provided a draft of this report to FHFA, the Federal Reserve, OFS, OCC, and FDIC for their review and comment. In addition, we provided excerpts of the draft to the companies receiving exceptional assistance—AIG, the AIG Trust, Bank of America, Chrysler, Citigroup, and GMAC—to help ensure the accuracy of our report. Treasury and FHFA provided us with written comments, which are reprinted in appendices IV and V, respectively. Treasury agreed with the report’s overall findings. In its letter, Treasury acknowledged that the additional guiding principles for providing large-scale federal assistance should be considered in any future broad-based government assistance and agreed to weigh these new principles going forward. 
FHFA, in its letter, acknowledged that, as we pointed out in our report, the financial assistance provided to the Enterprises illustrates the potential challenges that can arise when the government uses its assistance to further its public policy goals, particularly the Enterprises’ participation in the administration’s loan modification efforts, such as HAMP. However, the letter noted that the loan modification efforts are central to the goals of the conservatorships and EESA. The letter further explained that efforts like HAMP may help to mitigate the Enterprises’ credit losses because a loan modification is often a lower-cost resolution to a delinquent mortgage than foreclosure. The Federal Reserve, FHFA, and Treasury provided us with technical comments that we incorporated as appropriate. In addition, AIG, the AIG Trust, Bank of America, Chrysler, Citigroup, and GMAC also provided technical comments that we incorporated as appropriate. We are sending copies of this report to interested congressional committees and members. In addition, we are sending copies to FHFA, the Federal Reserve, Treasury, OCC, FDIC, financial industry participants, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. Contact points for GAO’s Office of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made major contributions to this report are listed in appendix VI. 
The objectives of our report were to (1) describe how and why the government obtained an ownership interest in the companies, (2) evaluate the extent of government involvement in companies receiving exceptional assistance, (3) describe the government’s monitoring of the companies’ financial viability and exit strategies, and (4) discuss the implications of the government’s ongoing involvement in the companies. The report focused on companies receiving exceptional assistance from the federal government, including American International Group (AIG), Bank of America Corporation (Bank of America), Chrysler Group LLC (Chrysler), Citigroup, Inc. (Citigroup), General Motors Company (GM), and GMAC, Inc. (GMAC), as well as the government’s involvement in Fannie Mae and Freddie Mac (the Enterprises). To address the first objective, we reviewed the monthly transactions reports produced by the Department of the Treasury’s (Treasury) Office of Financial Stability (OFS), which list the structure of the federal assistance Treasury provided to the companies considered to be receiving exceptional assistance (AIG, Bank of America, Chrysler, Citigroup, and GM), as well as documentation from the Federal Housing Finance Agency (FHFA) to determine the financing structure for the Enterprises. In addition, we reviewed the Board of Governors of the Federal Reserve System’s (Federal Reserve) “Factors Affecting Reserve Balances” (H.4.1) releases to determine the assistance the Federal Reserve Bank of New York (FRBNY) provided to AIG. We reviewed the contractual agreements between the government and the companies that governed the assistance. In addition, we reviewed selected Securities and Exchange Commission (SEC) filings, Treasury’s Section 105(a) reports, and other GAO reports on the Troubled Asset Relief Program (TARP). 
To address the second objective, we reviewed the Emergency Economic Stabilization Act of 2008 (EESA) and the Housing and Economic Recovery Act of 2008 (HERA) to understand the legal framework for any potential government involvement in the companies receiving exceptional assistance, including the establishment of the conservatorships and the contractual agreements established between the government and the companies. We reviewed the credit agreements, securities purchase agreements, asset purchase agreements, and master agreements. To understand the trust structure established for AIG, we reviewed the AIG Credit Facility Trust agreement between FRBNY and the AIG trustees. We conducted interviews with officials and staff from the Federal Reserve Board, FHFA, FRBNY, the Federal Reserve Bank of Chicago (FRB-Chicago), the Federal Reserve Bank of Richmond (FRB-Richmond), OFS, the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and SEC. In addition, we interviewed senior management—primarily the chief executive officers and chief financial officers—of most of the companies in our study, including the Enterprises, and interviewed the AIG trustees to understand their role in the governance of AIG. To address the third objective, evaluating the government’s monitoring of the companies’ financial viability and exit strategies, we interviewed officials from FDIC, the Federal Reserve, FHFA, FRBNY, FRB-Chicago, FRB-Richmond, OCC, and OFS. We also interviewed the asset managers who are responsible for monitoring and valuing the equity shares held by Treasury under the Capital Purchase Program, the Targeted Investment Program, and the Asset Guarantee Program. We reviewed Treasury documents, such as asset manager reports, TARP transaction reports, and press releases; Treasury testimonies; and press releases from the companies. 
We also reviewed the contractual agreements between the government and the companies, including credit agreements, securities purchase agreements, asset purchase agreements, and master agreements, in order to understand the companies’ responsibilities for reporting financial information and the government’s responsibility for monitoring and divesting its interests. Finally, we reviewed a Congressional Oversight Panel report on Treasury’s approach to exiting TARP and unwinding its impact on the financial markets. To address the fourth objective, relating to the implications of the government’s ongoing involvement in the companies, we reviewed prior GAO work on principles for providing large-scale government assistance and assessed the degree to which the government’s activities under TARP adhered to those principles. To identify actions the government is taking that have the potential to influence the companies’ business decisions, we reviewed legislation that would affect TARP recipients and determined what, if any, action the legislation would require the companies to take. To identify the nature and scope of the contacts TARP recipients received from executive branch agencies, members of Congress, and state government officials, we interviewed government relations staff at AIG, Bank of America, Chrysler, Citigroup, GM, and GMAC. These interviews also provided us with information on the extent of government involvement and influence in the companies’ business operations. For Chrysler and GM, we obtained the 277 letters from members of Congress that the companies received during calendar year 2009 and kept on file. We reviewed each of the letters to determine its topic and whether it sought to influence the companies’ business decisions. 
We also obtained more than 2,300 e-mails that certain senior executives of Chrysler and GM received from congressional and state government officials during calendar year 2009, including 1,221 from Chrysler and 1,098 from GM. Due to the large number of these e-mails, we reviewed a random probability sample of 251 of the 2,319 e-mails the companies provided in order to create estimates about the population of all the e-mails. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as having a margin of error at the 95 percent confidence level of plus or minus 8 percentage points or less. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Finally, we obtained 264 e-mails that certain senior executives at the companies received from White House and Treasury officials in calendar year 2009. After removing e-mails that were out of scope and duplicates, we were left with 109 e-mails, including 89 sent to Chrysler and 20 sent to GM. We reviewed these e-mails to determine their purpose and topic and whether they sought to influence the companies' business decisions. We provided a draft of this report to FHFA, the Federal Reserve, OFS, OCC, and FDIC for their review and comment. In addition, we provided excerpts of the draft of this report to the companies receiving exceptional assistance—AIG, AIG Trust, Bank of America, Chrysler, Citigroup, and GMAC—to help ensure the accuracy of our report. 
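The stated margin of error can be sanity-checked with a standard worst-case calculation for a simple random sample drawn without replacement. This is only a sketch: the report does not spell out GAO's exact estimation method, and the 1.96 z-value and the use of a finite population correction are assumptions here.

```python
import math

def worst_case_moe(n, N, z=1.96):
    """Worst-case (p = 0.5) margin of error at roughly 95% confidence for a
    simple random sample of n drawn without replacement from a population
    of N, applying the finite population correction."""
    standard_error = math.sqrt(0.25 / n) * math.sqrt((N - n) / (N - 1))
    return z * standard_error

# Sample of 251 e-mails from the population of 2,319
moe = worst_case_moe(251, 2319)
print(f"{moe * 100:.1f} percentage points")  # prints "5.8 percentage points"
```

Under these assumptions the worst-case margin of error is about 5.8 percentage points, consistent with the report's stated bound of plus or minus 8 percentage points or less.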
We conducted this performance audit from August 2009 to August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions based on our audit objectives. Since the fall of 2008, a number of large financial institutions and companies have received more than $447 billion in financial assistance, leaving the government with a significant ownership interest in a number of companies. The government provided assistance or funds to American International Group (AIG); Bank of America Corporation (Bank of America); Chrysler; Citigroup, Inc. (Citigroup); Fannie Mae and Freddie Mac (Enterprises); General Motors (GM); and GMAC, Inc. (GMAC). As of March 31, 2010, the government owned substantial amounts of preferred or common shares in seven companies—AIG, Chrysler, Citigroup, GM, GMAC, and the Enterprises. The total amounts of assistance disbursed to each company are shown in figure 2. The federal government assisted these companies by infusing capital through the purchase of preferred shares, direct loans, guarantees, stock exchanges, or lines of credit that led to the government owning preferred and common shares. Figure 3 shows the variation in the amount of government ownership interest in the companies and the outstanding balance that is owed to the government. The financial institutions and the companies have begun to pay down some of the assistance. GM has repaid the entirety of the debt owed to Treasury under its post-bankruptcy credit agreement, and Chrysler has repaid a portion of its loan from Treasury. As previously noted, whether the government will recover all of its investment in or assistance to Chrysler and GM is unknown. 
For companies where the government has an ownership stake, the amount of recovery depends on a number of external factors, including the financial health of the companies and the market value of their stock, as well as the companies' ability to repay loans or repurchase preferred shares. Similarly, Treasury still holds common shares in Citigroup. The Enterprises have not repaid any portion of the assistance Treasury has provided and as of June 2010 continued to borrow from Treasury. To provide some additional protection for the taxpayer, Treasury required the companies to commit to certain financial terms and actions. For example, in exchange for the capital infusions in the form of preferred shares, Treasury required AIG, Bank of America, Citigroup, the Enterprises, GM, and GMAC to pay dividends. The dividend rate varied across the seven companies, ranging from less than 5 percent to 10 percent for AIG and the Enterprises. As shown in table 6, as of March 31, 2010, Treasury had collected a total of more than $16.2 billion in dividends from Bank of America, Citigroup, the Enterprises, GM, and GMAC. AIG was required to pay dividends at an annual rate of 10 percent on Series D cumulative preferred shares before they were exchanged for Series E noncumulative preferred shares, but it had not paid any dividends to Treasury as of March 31, 2010. Unpaid Series D dividends were capitalized, thereby increasing the liquidation preference of the Series E shares for which they were exchanged. The government (or, in the case of AIG, FRBNY) requires that AIG and Chrysler pay interest on the loans provided. Moreover, Treasury currently holds warrants obtained in connection with the preferred shares that it holds for AIG, Citigroup, and the Enterprises. Because GMAC is a privately held company, Treasury exercised its warrants immediately. On March 3, 2010, Treasury received more than $1.5 billion from its auction of Bank of America's warrants. 
To further examine the extent of government involvement in companies receiving Troubled Asset Relief Program (TARP) assistance, we reviewed legislative proposals and government communications with General Motors Company (GM) and Chrysler Group LLC (Chrysler). We examined the following: (1) proposed legislation that would place requirements or restrictions on the companies due to their status as TARP recipients, (2) letters from members of Congress to the companies, and (3) e-mails from congressional offices, state government, White House, and Department of the Treasury (Treasury) officials sent to certain company officials whom we designated. Chrysler and GM officials told us that the level of government involvement—from requests for appearances at congressional hearings to letters from elected officials—had increased since their companies had requested and received financial assistance from the government. They emphasized that the congressional letters and e-mails did not cause them to make decisions that were in conflict with their best interests. However, these officials stated that addressing the government’s involvement, such as responding to letters, audits, or other requests for information, required increased company resources. We identified 38 bills introduced from October 2008, when the Emergency Economic Stabilization Act of 2008 (EESA) was enacted, through January 2010 that would impose requirements or restrictions on GM and Chrysler as TARP recipients. Action on the majority of these bills has been limited since their introduction in Congress, with two having become law. Although the bills cover a range of topics, those among the most commonly addressed by the legislation were dealership closures and executive compensation and bonuses. We identified eight bills that addressed, among other issues, the closure of auto dealerships, a topic specifically directed at automakers accepting TARP funds. 
Closing dealerships was a way for the companies to reduce their operating costs in an attempt to return to profitability, but since these closures would occur in communities across the country, they prompted considerable congressional interest. The bills generally aimed to curtail or prevent the closure of auto dealerships, as well as plants and suppliers. One of the bills that became public law requires Chrysler and GM to provide to the dealers specific criteria for the closures and gives dealers the right to pursue binding arbitration concerning their closures. The Automobile Dealer Economic Rights Restoration Act of 2009, as introduced in the House and Senate, would require the automakers to restore a dealership based on the dealer’s request. As of July 30, 2010, this bill has not been enacted. We identified 17 bills affecting executive compensation and bonuses for TARP recipients in both the auto and financial industries. Most of these bills would require restrictions on or repeals of executive compensation and bonuses for TARP recipients. For example, the American Recovery and Reinvestment Act, which became law in February 2009, calls for, among other things, limits on compensation to the highest paid executives and employees at firms receiving TARP funding. Other less commonly addressed topics and an example of a bill related to each category are shown in table 7. As of July 30, 2010, these bills have not been enacted. Between May and December 2009, Chrysler and GM received 277 letters from members of Congress, including 65 sent to Chrysler and 212 to GM. Company officials told us that the volume of congressional letters they received sharply increased in the spring of 2009, after the companies received TARP assistance and when many operational changes that were part of their restructuring—such as plant and dealership closures—were being made. In total, 188 individual members of Congress sent letters to the companies over this time period. 
In terms of the content of the letters, many dealt with specific constituent concerns, with the closing of auto dealerships being the most common topic. Of the letters sent to Chrysler and GM, 68 percent pertained to dealership closures, and the majority of these requested information on specific dealerships in the member’s district or state or provided information for the companies’ consideration when determining whether or not to close specific dealerships. For example, one letter stated that closing a particular dealership would result in customers having to drive up to 120 miles round trip to service their existing vehicle or purchase a new one. Other topics most commonly discussed in the letters included the renegotiation of union contracts with companies that haul cars from manufacturing plants to dealerships (17 percent) and the closure of manufacturing plants (5 percent). None of the letters pertained to executive compensation. Across all letters, 56 percent either explicitly requested a change to the companies’ operations or stated a desired change. Just as dealerships were the focus of most of the letters, dealerships were the focus of the majority of requests for changes as well, with 62 percent suggesting that the companies reconsider the decision to close a particular dealership. The remainder of letters that requested changes pertained to car-hauling contracts (16 percent), plant closures (5 percent), or other business decisions and operations such as the sale of brands (21 percent). We also reviewed e-mails that the companies’ chief executive officers and most senior state and federal government relations officers had received from federal and state officials during calendar year 2009. 
Our review included e-mails sent by White House officials, the Treasury Department's chief advisors to the Presidential Task Force on the Auto Industry, members of Congress or their staff, and officials from the five states with the highest proportion of manufacturing in the auto sector. For the purpose of analysis, we grouped the e-mails into two categories: those from federal executive branch officials—Treasury and White House—because these individuals had a defined role in the assistance to the companies, and those from federal legislative and state officials. For each group, we recorded information on the purpose and topic of each e-mail. According to the documentation the companies provided to us, the designated officials at Chrysler received 89 e-mails from White House and Treasury officials. The designated officials at GM received 20 e-mails. About 60 percent of the e-mails were from Treasury officials and about 40 percent were from White House officials. Sixty-six percent of the e-mails were sent for the purpose of either arranging a call or a meeting between company and government officials (35 percent) or requesting information or input from the companies (31 percent). About 26 percent of the e-mails were sent to provide information to the companies. The topic of more than 33 percent of the e-mails was unclear, and more than 60 percent of the e-mails with an unclear topic were sent for the purpose of arranging a call or meeting. Of the e-mails with identifiable topics, the highest number pertained to bankruptcy or restructuring (29 percent of all e-mails) followed by manufacturing plants (12 percent), and dealerships (7 percent). Most of the e-mails that pertained to bankruptcy or restructuring were sent for the purpose of either providing information to or requesting information from the companies (34 percent each). For example, one e-mail requested that Chrysler review and provide comments on a set of talking points on Chrysler's restructuring. 
Two of the e-mails—less than 2 percent—requested a change to the companies' operations or stated a desired change, such as an e-mail concerning GM's negotiations in a proposed sale of a company asset. Chrysler identified 1,221 e-mails it had received from congressional offices of both parties, mostly from staff, and state officials; GM identified 1,098. Due to the number of e-mails, we reviewed a random probability sample of them in order to develop estimates about the entire group of e-mails. Based on this review, we estimate that 86 percent of these e-mails came from congressional offices and the remaining 14 percent from government officials in the five states included in our analysis. The records in the sample showed that most of the congressional e-mails were sent from staff rather than from members of Congress. The purpose of the vast majority of congressional and state e-mails varied from requesting information to arranging a call or meeting to simply thanking the recipient. Most common were e-mails sent to provide information to the recipient (38 percent), followed by e-mails sent to request information (31 percent), and e-mails to arrange a call or meeting between government and company officials (22 percent). We estimate that 13 percent of the e-mails were sent for other reasons, such as to thank the recipient, or for reasons that could not be determined based on the content of the e-mail. Roughly 1 percent of the congressional and state e-mails either explicitly requested or stated a desired change to the companies' operations. The topics of the e-mails varied, with 27 percent focusing on dealerships and 11 percent on manufacturing plants. Thirty-six percent—the largest group—did not reference a specific topic. For example, many of the e-mails sent for the purpose of arranging a call or meeting did not indicate the reason for the requested call or meeting. 
In addition to the contacts named above, Heather Halliwell, Debra Johnson, Wes Phillips, and Raymond Sendejas (lead Assistant Directors); Carl Barden; Emily Chalmers; Philip Curtin; Rachel DeMarcus; Nancy Eibeck; Sarah Farkas; Cheryl Harris; Grace Haskins; Damian Kudelka; Ying Long; Matthew McDonald; Sarah M. McGrath; Michael Mikota; Susan Michal-Smith; SaraAnn Moessbauer; Marc Molino; Omyra Ramsingh; Christopher Ross; Andrew Stavisky; and Cynthia Taylor have made significant contributions to this report. Troubled Asset Relief Program: Continued Attention Needed to Ensure the Transparency and Accountability of Ongoing Programs. GAO-10-933T. Washington, D.C.: July 21, 2010. Troubled Asset Relief Program: Treasury's Framework for Deciding to Extend TARP Was Sufficient, but Could be Strengthened for Future Decisions. GAO-10-531. Washington, D.C.: June 30, 2010. Troubled Asset Relief Program: Further Actions Needed to Fully and Equitably Implement Foreclosure Mitigation Program. GAO-10-634. Washington, D.C.: June 24, 2010. Debt Management: Treasury Was Able to Fund Economic Stabilization and Recovery Expenditures in a Short Period of Time, but Debt Management Challenges Remain. GAO-10-498. Washington, D.C.: May 18, 2010. Financial Markets Regulation: Financial Crisis Highlights Need to Improve Oversight of Leverage at Financial Institutions and across System. GAO-10-555T. Washington, D.C.: May 6, 2010. Troubled Asset Relief Program: Update of Government Assistance Provided to AIG. GAO-10-475. Washington, D.C.: April 27, 2010. Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future. GAO-10-492. Washington, D.C.: April 6, 2010. Troubled Asset Relief Program: Home Affordable Modification Program Continues to Face Implementation Challenges. GAO-10-556T. Washington, D.C.: March 25, 2010. 
Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility. GAO-10-25. Washington, D.C.: February 5, 2010. Troubled Asset Relief Program: The U.S. Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T. Washington, D.C.: December 16, 2009. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Year 2009 Financial Statements. GAO-10-301. Washington, D.C.: December 9, 2009. Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009. Troubled Asset Relief Program: One Year Later, Actions Are Needed to Address Remaining Transparency and Accountability Challenges. GAO-10-16. Washington, D.C.: October 8, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through September 25, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of September 18, 2009. GAO-10-24SP. Washington, D.C.: October 8, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009. Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009. Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009. Troubled Asset Relief Program: Status of Participants' Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009. 
Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through May 29, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of June 1, 2009. GAO-09-707SP. Washington, D.C.: June 17, 2009. Auto Industry: Summary of Government Efforts and Automakers' Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009. Small Business Administration's Implementation of Administrative Provisions in the American Recovery and Reinvestment Act. GAO-09-507R. Washington, D.C.: April 16, 2009. Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-539T. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009. Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009. 
Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T. Washington, D.C.: December 5, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008. Guidelines for Rescuing Large Failing Firms and Municipalities. GAO/GGD-84-34. Washington, D.C.: March 29, 1984.
The recent financial crisis resulted in a wide-ranging federal response that included providing extraordinary assistance to several major corporations. As a result of actions under the Troubled Asset Relief Program (TARP) and others, the government was a shareholder in the American International Group Inc. (AIG); Bank of America; Citigroup, Inc. (Citigroup); Chrysler Group LLC (Chrysler); General Motors Company (GM); Ally Financial/GMAC, Inc. (GMAC); and Fannie Mae and Freddie Mac (Enterprises). The government ownership interest in these companies resulted from financial assistance that was aimed at stabilizing the financial markets, housing finance, or specific market segments. This report (1) describes the government's ownership interest and evaluates the extent of government involvement in these companies, (2) discusses the government's management and monitoring of its investments and exit strategies, and (3) identifies lessons learned from the federal actions. This work was done in part with the Special Inspector General for the Troubled Asset Relief Program (SIGTARP) and involved reviewing relevant documentation related to these companies and the federal assistance provided. GAO interviewed officials at Treasury, Federal Reserve, Federal Housing Finance Agency (FHFA), and the banking regulators, as well as the senior executives and relevant officials at the companies that received exceptional assistance. The extent of government equity interest in companies receiving exceptional assistance varied and ranged from owning preferred shares with no voting rights except in limited circumstances (Bank of America until it repurchased its shares in 2009) to owning common shares with voting rights (Chrysler, Citigroup, GM, and GMAC) to acting as a conservator (the Enterprises). In each case, the government required changes to the companies' corporate governance structures and executive compensation. 
For example, of the 92 directors currently serving on boards of these companies, 73 were elected since November 2008. Many of these new directors were nominated by their respective boards, while others were designated by the government and other significant shareholders as a result of their common share ownership. The level of involvement in the companies varied depending on whether the government served as an investor, creditor, or conservator. For example, as an investor in Bank of America, Citigroup, and GMAC, the Department of the Treasury (Treasury) had minimal or no involvement in their activities. As both an investor in and a creditor of AIG, Chrysler, and GM, the government has required--as a condition of the assistance--some combination of the restructuring of the companies, the submission of periodic financial reports, and greater interaction with company personnel. FHFA--using its broad authority as a conservator--has instituted a number of requirements and practices that involve it in the Enterprises' operations.
Because the SSN is unique for every individual, both the public and private sectors increasingly use it as a universal identifier. It is particularly useful to government for data matching or identity verification to ensure that individuals are eligible for program benefits or services. Though the use of SSNs by government entities is often mandated or authorized by law, there is no one law that regulates the overall use of SSNs by all levels and branches of government, and state laws pertaining to SSN display vary in terms of the restrictions they impose on both use and disclosure. Due to the pervasive use of SSNs, individuals are routinely asked to disclose their SSNs along with other personal, identifying information, for numerous purposes. In some instances where individuals provide their SSNs to government entities, documents containing the SSN are routinely made available to the public for inspection. Generally, the overall use and disclosure of SSNs by the federal government is restricted under the Privacy Act, which, broadly speaking, seeks to balance the government’s need to maintain information about individuals with the rights of individuals to be protected against unwarranted invasions of their privacy. Section 7 of Public Law 93-579 requires that any federal, state, or local government agency, when requesting an SSN from an individual, tell individuals whether disclosing their SSN is mandatory or voluntary, cite the statutory or other authority under which the request is being made, and state what uses it will make of the individual’s SSN. Based on a survey for prior work, we reported that while nearly all government entities we surveyed collect and use SSNs for a variety of reasons, many of these entities reported they do not provide individuals the information required under this statute when requesting their SSNs. Further, it is unclear who has responsibility for overseeing these requirements placed on state and local governments. 
The growth in the use of SSNs is important to individual SSN holders because these numbers, along with names and birth dates, are among the three personal identifiers most often sought by identity thieves, and recent statistics indicate that the incidence of identity theft is growing. The widespread disclosure of SSNs in public records has raised concern because it can put individuals at increased risk for identity theft. The passage of identity theft legislation by the federal government and state governments indicates that this type of crime is widely recognized as a serious problem. Some of these laws include limits or restrictions on the display of SSNs, and government agencies have taken other measures to restrict the exposure of SSNs as well. For instance, most states have modified their policies on placing SSNs on state drivers’ licenses, some prohibit the use of SSNs as a student identification number, and others have removed SSNs from checks and benefit statements. These actions suggest that the risk of SSN exposure is also widely recognized. As governments move increasingly to electronic record keeping, access to documents that include Social Security numbers has become easier. To the extent that such records are available through the Internet, ease of access can increase exponentially. The growth of electronic record keeping has, in fact, made it easier for some agencies to provide or even sell their data in bulk. In our previous research, a few government agencies and courts reported to us that they were also considering the prospect of expanding the volume and type of public records that would be available on their Web sites. Currently, congressional lawmakers are considering legislation to curtail such exposures, looking at both public and private sector uses. Not all records held by government or public agents are “public” in terms of their availability to any inquiring person. 
Governments hold many records that are not generally available and other records that are protected from public access. For example, adoption records are generally sealed. Personnel records are often not readily available to the public, although newspapers may publish the salaries of high-ranking elected officials. Furthermore, governments may release portions of records but protect sensitive information within them, which includes Social Security numbers. Treatment of similar types of records varies among federal, state, and local governments. Furthermore, there is no common definition of public records. For this report, however, we refer to public records as those records generally made available to the public in their entirety for inspection by a federal, state, or local government agency. Such documents are typically accessed in a public reading room, clerk's office, or on the Internet. In our previous review of government use of SSNs, we found that they are prevalent in records held by government agencies, some of which were available to the public. We also learned that courts at all three levels of government maintain public records containing SSNs, such as divorce decrees, child support orders, and bankruptcy cases. At that time, some officials who maintained these records told us their primary responsibility was to preserve the integrity of these records rather than protect the privacy of the individual SSN holder. However, we also found that some agencies were trying to better safeguard the SSN by using innovative approaches, such as by modifying their processes or their forms. Some agencies and courts also reported limiting the practice of placing public records containing SSNs on Web sites. The most far-reaching efforts we identified took place in states that had established statewide policies and procedures. 
However, this research focused, in general, on how and why governments use SSNs, and did not examine in detail the extent to which SSNs are exposed to the general public and potentially available for misuse. We found that SSNs are widely exposed to public view in state and local records, but less so in federally held records. Specifically, 41 states and the District of Columbia reported displaying SSNs in public records, and we estimate that over three-quarters of counties do so as well. The number and type of records in which SSNs are displayed varies greatly for both states and counties, but they are most often found in court records and local property records. For federal executive branch agencies, records are covered by the Privacy Act of 1974, which generally prohibits disclosure of SSNs and other personal information. However, we found that SSNs are available in some federal court records. According to our survey, most states maintain at least one type of public record that reveals individual SSNs. Agencies in 41 states as well as the District of Columbia reported holding at least one type of public record that shows the SSN. This may understate their exposure, however, given that we received responses from only 62 percent of the state agencies we surveyed. Nevertheless, we received responses from agencies in every state. The number of these reported records ranged from 1 to 9 for most states, but was much higher for a few states. (See table 1.) (Appendix III provides a complete list of state and state function response rates). Among the types of records reported by the 338 state agencies responding, we found no one public record in which the SSN was always displayed. The most frequently cited, however, were those held by state courts. These ranged from criminal proceedings and litigation and civil case files to traffic records and records of judgments. (See fig. 1.) 
Appendix IV shows how state agencies responded to our question on whether SSNs are displayed in public records. We estimate that individuals' SSNs are displayed in some public records in 80 to 94 percent of U.S. counties. While not everyone would be identified in such records, we estimate that this exposure could potentially affect any one of the estimated 91 to 97 percent of the U.S. population that lives in these counties. According to our survey, the county records in which SSNs are most often revealed are those held by recorders and court clerks. (For a description of the types of records held by each program area, see app. V.) Specifically, we estimate that 58 to 68 percent of records held by courts and 50 to 68 percent held by recording offices contain publicly accessible SSNs, while only 5 to 9 percent of records held by all other functions included in our survey contain SSNs. Figure 2 shows our estimates of the percent of records that display SSNs to the public held by different types of local government offices. Courts and recording offices hold a substantial variety of records. For example, courts hold records of criminal proceedings, civil case files, and traffic records, among others. Recorders also maintain a variety of records, including many concerning property ownership. We estimate that 80 to 94 percent of U.S. counties have publicly available court or property ownership records with SSNs. Specifically, SSNs are available to the public in records of criminal proceedings in 44 to 64 percent of counties and in divorce records in 12 to 26 percent of counties—two kinds of records that are often held by courts. SSNs are also displayed in some other types of records, including military discharge records and Uniform Commercial Code (UCC) filings. Figure 3 shows the percent of counties in which specific types of records display SSNs to the public. 
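The county-level ranges above (for example, 80 to 94 percent) are interval estimates from a probability sample rather than exact counts. As a simplified illustration of how such a range arises from a sample (this is not the report's actual estimation method, which reflects the survey's sample design and weighting, and the counts below are hypothetical), a normal-approximation confidence interval for a proportion can be computed as follows:

```python
import math

def proportion_ci(successes, sample_size, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion.

    Illustrative only: actual survey estimates also account for
    stratification and nonresponse weighting.
    """
    p = successes / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: 87 of 100 sampled counties display SSNs in some record.
low, high = proportion_ci(87, 100)
print(f"{low:.0%} to {high:.0%}")  # → 80% to 94%
```

With a hypothetical 87 of 100 sampled counties, the interval works out to roughly 80 to 94 percent, which illustrates why survey results of this kind are reported as ranges rather than single figures.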
While we know of no particular type of record relevant to all individuals in a given county, there are some records that may pertain to a large number of residents. These would be records that document activities or transactions many people engage in—for instance, home mortgage records or voter registrations. To illustrate the potential for an individual’s SSN to be identified in a public record, we have estimated the percent of the population that lives in counties in which SSNs are available in certain types of records. For instance, we estimate that 27 to 41 percent of the population lives in counties in which SSNs are available to the public in traffic records. Further, from 49 to 63 percent of the population lives in counties in which SSNs are available in mortgage and real property transfer records, and from 16 to 28 percent lives in counties in which SSNs are available in divorce records. While the federal government compiles a wide range of information on individuals that often includes SSNs, the Privacy Act of 1974 may prohibit their disclosure, along with other personal information, without the consent of the individual. Specifically, SSNs in record systems—specific groups of records—held by federal executive agencies are generally not available to the public. SSNs that are not held in a system of records, however, may not be governed by any other law or regulation and, therefore, may be exposed on a limited basis. However, there was no way to ascertain where they might be or the degree of any public access. Nevertheless, officials we interviewed in 5 selected agencies reported that they were not aware of any records maintained by their agencies—in a system or otherwise—that display SSNs to the public. Federal court records are generally available to the public, but the courts have supervisory power to withhold certain information from public examination. Statutes also require records disclosure in some instances. 
Social Security numbers are sometimes found in these records. According to officials of the Administrative Office of the U.S. Courts, SSNs are available in certain types of records because the SSNs are required by law. Occasionally, but not routinely, SSNs are available in other files throughout the federal court system when attorneys of record include this information in documents filed with the court. In the last 2 years, however, the Judicial Conference of the United States, which establishes policy for the federal judiciary, revised federal policy in bankruptcy cases so that only the last four digits of any SSN will be visible to the general public in new record entries. State and local government respondents to our survey reported frequently using SSNs in public records to verify identities or to meet state legal requirements, but some agencies said they had no need for them. For state agencies, the second most frequently cited purpose for using SSNs was to match information in other records or databases. Among local government agencies the second most frequently cited reason was to comply with state law or regulations. However, some state and local offices reported that they had no specific use for the SSNs in certain records, although they were often contained in documents submitted to their offices. The federal courts reported routinely collecting SSNs to ascertain the identities and holdings of debtors involved in bankruptcy filings. Courts also reported routinely collecting SSNs for Social Security claims cases. Identity verification was the most frequent reason given by our state survey respondents for collecting or using SSNs that are shown in public records. Fifty-four state agencies representing 30 states cited this reason. Ten agencies, representing 8 states, indicated there was no specific reason for collecting or using the SSN in a particular type of public record. 
Table 2 shows the reasons cited for collecting or using SSNs in public records and the number of state agencies that gave each reason. Other reasons state agencies gave were primarily to collect fines, fees, or judgments, and because the SSNs were already included in documents that became part of the public records. Some of these respondents gave specific examples of how the SSN is used for identity verification or benefits determination. Three respondents noted that SSNs are used to facilitate investigations of wrongdoing. SSNs in local government public records are most often collected or used for identity verification. After identity verification, other common reasons include complying with state laws and regulations and matching with other records. Additionally, quite a few offices maintain public records that contain SSNs for which they have no use. Figure 4 shows reasons that SSNs are in local public records and our estimates for the percent of such records that contain SSNs for each purpose. Reasons for collecting SSNs in public records varied across different functional areas at the county level. Among local courts, identity verification is the single most common use of SSNs in public records, while recording offices collect or use SSNs in public records to comply with state laws or regulations more often than for any other specific purpose. Among all other functions, SSNs are most frequently used for identity verification, matching with other records, and complying with state laws or regulations. Recorders also frequently maintain records that contain SSNs for which they have no use. On the basis of our survey, we estimate that 25 to 51 percent of records with SSNs held by recording offices contain the number for no specific purpose. In addition, the survey responses of some recording officials indicated that SSNs are not required or requested by their offices, but they may appear in records filed in their offices nonetheless. 
Some respondents indicated that their job is to maintain the record only—they cannot amend or change it. Table 3 is a sample of responses from those who checked the “other” option to the question asking why SSNs are collected or used in specific types of public records. (For the complete survey question and response categories, see appendix II.) Federal agencies engage in a wide range of required or permitted uses of SSNs, such as electronic matching of information in databases. Various federal laws and regulations require or permit federal agencies to collect and use SSNs when administering federal programs. However, as noted above, due to Privacy Act provisions, in most cases these SSNs are not available to the public. Concerning the federal judiciary, SSNs are generally required for bankruptcy and Social Security claims cases. SSNs are generally not required with regard to other civil or criminal cases. Storage methods and forms of public access to records with SSNs vary somewhat among the different levels of government, but hard copy is the most common form of access for the public, and some agencies have begun to reduce SSN exposure. State government offices tend to store such records electronically, while most such local government records are stored on microfiche or microfilm. However, for both these levels of government, inspection of paper copies is the most commonly available method for public access to such records. Few state agencies make them available on the Internet. In counties, however, we estimate that offices in as many as 15 to 28 percent—several hundred—do so. For the future, however, few state or local offices reported any plans to place additional records on the Internet. Some state and local offices reported that in recent years they had begun to restrict access to SSNs in public records. At the federal level, the National Archives stores many federal agency documents, with access restricted by law. 
For its part, the federal court system has recently taken action to restrict access to SSNs in its public bankruptcy records, including those on the Internet. State agencies responding to our survey generally store public records that display SSNs electronically. The most frequently cited storage method was electronic databases or indexes. Microfiche or microfilm and computer-usable media such as DVDs, CD-ROMs, diskettes, or tapes were also frequently cited. Figure 5 shows the number of state agencies using each storage method we asked about in our survey. Many state offices also use more than one method to store the same type of records. This pattern could be due to retention of older records in noncomputerized formats such as microfiche/film and placement of newer records into computerized formats, rather than having a single record available in multiple formats. Table 4 shows the extent of this practice. While state agencies generally store most public records displaying SSNs electronically, in most cases members of the public are not able to access the records electronically. Despite this reliance on electronic storage methods, based on our survey responses, walk-in inspection of paper copies and mail requests are the most commonly available forms of access to state public records containing SSNs. Fifty-four state agencies—out of 90 where public records display SSNs—reported that these were the only ways for the public to access such records. Comparatively few state agencies responding to our survey provided Internet access to records containing SSNs. In most offices where it is available, a fee or user registration is required. Of those offices that make records available on the Internet, more reported making court records and UCC filings available than any other type of record. Figure 6 compares the extent of Internet access with walk-in inspections and on-site electronic databases. 
Based on their responses to our survey, state agencies are not planning to significantly expand Internet access to public records that show SSNs. Only 4 state agencies indicated plans to make such records available on the Internet, and one agency plans to remove them from Internet access. Three agencies plan changes in Internet access. One said that a state law, with limited exceptions, barred the release of SSNs held by any state government entity. Another agency plans to charge a fee for Internet access to records it maintains. The third agency plans to remove SSNs and place records on the Internet. Our survey results show that state offices have recently taken some measures to change the way they display or share SSNs in public records. Again, because of the number of nonrespondents, the results may not account for all such measures. As figure 7 shows, state agencies most often have either redacted—covered or otherwise hidden from view—SSNs from public versions of records or restricted access. Specific restrictions and other actions state agencies reported taking included blocking or removing SSNs from electronic versions of records, allowing individuals identified in records to keep their SSNs out of publicly available versions, replacing SSNs with alternative identifiers, and restricting access only to individuals identified in the records. A number of state agencies also described broader policy changes taken by their state governments in the last 2 years. According to survey respondents, at least 8 states have enacted new laws to restrict public disclosure of SSNs. These states, with brief descriptions of the newly enacted laws, are listed in table 5. We previously reported that Washington and Minnesota had enacted comprehensive policies to restrict the display of SSNs. Minnesota's law—the Minnesota Government Data Practices Act—among other provisions specifically classifies SSNs collected by state and local government agencies as nonpublic. 
Washington’s policy was implemented through an executive order signed by the governor in April 2000. In response, state agencies removed SSNs from forms and documents where their display was found to be not vital to the business of the agency. They also changed the format of certain public records to limit disclosure of SSNs, such as recording SSNs on portions of forms or duplicate forms that are not released to the public. In October 2002, the Conference of Chief Justices and the Conference of State Court Administrators issued a report (CCJ/COSCA Guidelines) for state court systems regarding public access to court records, which recommended that courts take various means to protect SSNs within the court records they maintain. One strategy discussed in the Guidelines would be for courts to have SSNs be available in records only when viewed at a court facility. Such records might be available electronically, but only through workstations in a court facility. The Guidelines also suggest that parties to a case or individuals identified in records be allowed to request additional restrictions for good cause. Finally, the Guidelines advise state court systems not to disclose SSNs protected by any state or federal law. Overall, microfiche or microfilm are the most commonly used methods of storing local government public records that contain SSNs—38 to 52 percent of such records are stored in these manners. To a lesser degree, county offices use DVDs and CD-ROMs, electronic text files, portable data files, and electronic databases. We found no real variation in the storage methods used by different functions. Overall, the predominant methods for gaining access to local government public records containing SSNs are by visiting an office in person to inspect paper copies or requesting a copy through the mail. Records containing SSNs are available in onsite electronic databases in 49 to 68 percent of counties. 
According to our survey, the Internet is the least common form of access, although records with SSNs are accessible on the Internet in 15 to 28 percent of U.S. counties. We estimate that 34 to 48 percent of the population lives in these counties. Figure 8 shows the estimates for the percent of counties in which different methods can be used to access public records with SSNs. According to our survey, few or no offices other than courts or recording offices currently make records containing SSNs available via the Internet. With regard to future plans, while we estimate that offices in only 2 to 8 percent of counties plan to introduce Internet access to records containing SSNs, this may have consequences for the 13 to 25 percent of the U.S. population that lives in those counties. In the past 2 years, the vast majority of local government offices have not made changes to the way they display or share SSNs in particular records. However, we estimate that offices in 13 to 27 percent of counties have begun redacting SSNs on copies of records provided to the public and offices in 12 to 26 percent of counties have begun restricting access to records containing SSNs. Some offices also have begun using partial SSNs in public records. Federal agencies transfer paper records containing SSNs to records centers operated by the National Archives and Records Administration (NARA) for storage. Access to these records remains under the legal authority of the transferring agencies, which, as noted previously, generally do not make SSNs accessible to the public. According to NARA officials, NARA is considering adding electronic records storage services to the records center program, but the same rules of access would apply to those records. At the end of the period in which the agency records are expected to be needed, those that have continuing value are transferred into the National Archives and NARA’s legal custody. 
NARA provides public access to archival records unless the records have access restrictions. Some records are subject to restrictions prescribed by statute or executive order, or to restrictions specified in writing, in accordance with 44 U.S.C. 2108, by the agency that transferred the records to the National Archives of the United States. Additionally, the Archivist of the United States imposes general restrictions on certain kinds of information or classes of records. NARA has many series of federal agency records in its legal custody that contain SSNs. NARA officials told us that the agency has two broad categories of archival records with SSNs: (1) name-retrievable records and (2) operations records that are not name retrievable. NARA is unable to screen the latter records for SSNs unless the records must be screened for another restriction. In response to requests for name-retrievable records made under the Freedom of Information Act, NARA screens the records and redacts SSNs (if the individual is not deceased) prior to public disclosure or release. NARA also masks the SSNs of living individuals in archival databases that NARA makes available to the public on its Internet Web site. We discussed Judicial Conference policies and procedures pertaining to privacy and SSNs with officials from the Administrative Office of the U.S. Courts. They told us that the Conference has taken a number of actions in recent years to increase privacy and reduce Internet access to SSNs that are in federal court documents. 
These actions include (1) implementing a rule for bankruptcy cases, effective December 1, 2003, that requires SSNs, except for the last four digits, to be redacted from electronically available (Internet) documents accessible to the public; (2) issuing a “model local rule” for criminal cases that eliminates SSNs from documents filed with the court, unless necessary, and includes only the last four digits of SSNs in publicly available paper and electronic versions of such documents; and (3) issuing privacy policy guidance for electronically available court documents in civil cases. Citing its bankruptcy case rule and model local rule for criminal cases, the Conference also advises attorneys for parties filing court documents to include only the last four digits of SSNs. In its report, the committee that developed the policy noted that there should be “consistent, nationwide policies in federal courts in order to ensure that similar privacy protections and access presumptions apply regardless of which federal court is the custodian of a particular case file.” Although they are not displayed in public records en masse, we found that millions of SSNs are still subject to exposure on individual identity cards issued under federal auspices. Although some agencies are taking action to address this display of the SSN, we found that, currently, an estimated 42 million Medicare cards display entire nine-digit SSNs, as do some Department of Defense insurance cards and approximately 8 million identification cards, as well as 7 million Department of Veterans Affairs (VA) beneficiary cards. In addition, approximately 830,000 federal employees carry health insurance cards issued through Federal Employees Health Benefits Program that display the bearer’s full SSN. 
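The last-four-digits practice described above can be illustrated with a short sketch. This is hypothetical code, not the judiciary's actual redaction software; the regular expression (which matches hyphenated and unhyphenated nine-digit patterns) and the function name are our own illustration:

```python
import re

# Matches SSNs written as 123-45-6789 or 123456789. A real screening
# process would need to handle additional formats and guard against
# false matches on other nine-digit numbers.
SSN_PATTERN = re.compile(r"\b(\d{3})-?(\d{2})-?(\d{4})\b")

def redact_ssn(text):
    """Hide all but the last four digits of each SSN found in text."""
    return SSN_PATTERN.sub(r"xxx-xx-\3", text)

print(redact_ssn("Debtor SSN: 123-45-6789"))  # → Debtor SSN: xxx-xx-6789
```

Redacting at the point of display in this way leaves the full SSN in the court's internal record while showing only a partial number to the public, which is the effect the Judicial Conference rules aim for.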
Such cards are used at the point of service – at pharmacies and medical offices, or merely to enter buildings – and are usually carried almost everywhere, circulating through many hands and even photocopiers, increasing opportunities for the card and the SSN on it to be stolen, copied, or even lost. Three of the four federal agencies have begun taking action to remove SSNs from such health insurance or identification cards issued under their auspices. In 2003, the federal Office of Personnel Management, which handles personnel for executive branch agencies, directed all health insurance carriers affiliated with the Federal Employees Health Benefits Program (FEHBP) to eliminate SSNs from insurance cards as soon as it is operationally and financially practical. As of August 2004, about 57 percent of these insurers were using numbers other than SSNs on their cards; this represents over 79 percent of the subscribers in the FEHBP. The office projected that by the end of 2005 only 3.7 percent of FEHBP subscribers would have their SSNs on their health insurance cards, as more insurance companies have signaled their intention to replace the SSN. Meanwhile, VA is eliminating SSNs from 7 million VA identification cards, replacing cards with SSNs or issuing new cards without SSNs from 2004 through 2009, until all such cards have been replaced. In 2004, the Department of Defense (DoD) began replacing approximately 6 million health insurance cards that display SSNs with cards that do not display the bearer's SSN, but it continues to include SSNs on approximately 8 million military identification cards. In addition, the Centers for Medicare and Medicaid Services (CMS), with the largest number of cards displaying the entire nine-digit SSN, does not plan to remove the SSN from Medicare identification cards. 
During 2003, CMS's Office of Financial Management, Program Integrity Group, conducted a study of identity theft issues, during which it considered the possibility of removing SSNs from Medicare cards. CMS officials who served on the workgroup told us that the group had concluded that eliminating SSNs from 40 million Medicare cards would be cost prohibitive. The Social Security Administration (SSA), which issues Social Security cards (displaying the bearer's SSN), addresses this vulnerability differently from the other agencies. SSA recommends that cards be kept in a safe place and that a person "not carry" the card unless it is needed "to show it to an employer or service provider." In contrast, CMS instructs Medicare participants to show the card whenever medical care is provided and to carry the card when traveling, but to keep the number as safe as they would a credit card number. While the Social Security Administration advises individuals to avoid exposure of their SSN card, we found no federal policy regarding its display on identity and insurance cards. Specifically, there is no presidential executive order, federal law, or common federal policy in effect. Although SSA has authority to issue policies and procedures over the Social Security cards that it issues, it does not have authority over how other federal agencies use and display SSNs. While the Office of Management and Budget has issued guidance for managing federal information resources and protecting records on individuals, it has not provided guidance for the display of SSNs on cards. Rather, the Centers for Medicare and Medicaid Services, the Office of Personnel Management, VA, and DoD each have their own policies for the cards issued under their authority. 
Today, the SSN has become a universal identifier; as such, it offers government as well as the private sector an efficient way to verify the identity and the qualifications of people for programs or activities ranging from taxes to health benefits to workers' compensation payments. As a single, unique number assigned to one person, the SSN also allows for tracking that individual through more than one database and comparing information for a variety of uses, including police work. In short, SSNs are a linchpin to other personal information held in a variety of records. The extent to which they are exposed to public view, of course, increases the likelihood that they will be misused for inappropriate mining of personal information, violation of privacy, and identity theft. The increased use of SSNs in both the public and private sectors means that SSNs are more widely circulated and more likely to appear in public records that document common life events and transactions, such as marriages and home purchases. The continued visibility of SSNs in public records in virtually every corner of the country presents a continued risk of widespread, albeit small-scale, identity theft. Since the public usually obtains such records in individual hard copies, the risk of SSN theft in large volume from public records may be small. Indeed, a variety of government agencies and oversight bodies appear to be taking steps to eliminate the open display of SSNs, but there is no uniform practice or policy at the federal, state, or local level to protect them. Such initiatives to protect the SSN may slow its misuse, but the absence of uniform and comprehensive policy is likely to leave many individuals vulnerable. For example, the 15 to 28 percent of counties that we estimate post some records with SSNs on the Internet creates a broad vulnerability that, together with the lack of uniform protections, makes it difficult for any one individual to mitigate. 
In one of our previous reports, we recommended that a representative group of federal, state, and local officials develop a unified approach to safeguarding SSNs used in all levels of government and particularly those displayed in public records. We still believe such an approach would be constructive. On another front, there is jeopardy in the display of SSNs on identity and eligibility cards issued under government auspices. The cardholder is usually required to use his or her card at the point of service—which means a practical need to carry and display it often—thus increasing the likelihood for accidental loss, theft, or visual exposure. The risk this poses has been both recognized and addressed by, among other things, prohibitions against using SSNs as student identification numbers and policy changes concerning use of SSNs on drivers’ licenses. While we did not examine the phenomenon of SSN display on identification cards across all federal programs, it is clear that the lack of a broad, uniform policy allows for inconsistent, but persistent exposure. To address the overall problem of exposure for the SSN, congressional lawmakers are considering legislation that would, among other things, curtail display of SSNs in both the private and public sectors. For its part, the public sector has already demonstrated that it is possible, for example, to substitute another number for the SSN on displayable cards, and link it to actual SSNs in a protected database. Given the size of federal programs, the lack of such safeguards across all agencies currently leaves millions of people unprotected. 
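The substitution approach mentioned above, printing an alternative number on the card and linking it to the actual SSN only in a protected database, can be sketched as follows. This is a hypothetical illustration rather than any agency's implementation; the class name, identifier format, and use of Python's secrets module are our own assumptions:

```python
import secrets

class CardIdVault:
    """Maps randomly generated card identifiers to SSNs.

    The SSN never appears on the card itself; only this protected
    mapping, held by the issuing agency, can link a card ID back to it.
    """

    def __init__(self):
        self._ssn_by_card_id = {}

    def issue_card_id(self, ssn):
        # A 12-character random identifier is what gets printed on the card.
        card_id = secrets.token_hex(6).upper()
        self._ssn_by_card_id[card_id] = ssn
        return card_id

    def lookup_ssn(self, card_id):
        # In practice, access to this lookup would be restricted and audited.
        return self._ssn_by_card_id[card_id]

vault = CardIdVault()
card_id = vault.issue_card_id("123-45-6789")  # card displays card_id, not the SSN
assert vault.lookup_ssn(card_id) == "123-45-6789"
```

A lost or photocopied card then reveals only the substitute identifier, which is useless without access to the agency's protected database.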
To address this potential vulnerability, we recommend that the Director, Office of Management and Budget, identify all those federal activities that require or engage in the display of nine-digit SSNs on health insurance, identification, or any other cards issued to federal government personnel or program beneficiaries, and devise a governmentwide policy to ensure a consistent approach to this type of display. We provided a draft of this report to the Administrative Office of the United States Courts, the Centers for Medicare and Medicaid Services, the Department of Defense, the National Archives and Records Administration, the Office of Management and Budget, the Office of Personnel Management, the Social Security Administration, and the Department of Veterans Affairs for comment. Officials at each agency confirmed that they had reviewed the draft and generally agreed with its findings and recommendation. Officials from the Administrative Office of the United States Courts, the Department of Defense, and the Office of Personnel Management provided us with technical comments, which we have incorporated into the report as appropriate. We received formal comments from officials from the Department of Defense, the Office of Personnel Management, and the Social Security Administration; those comments are included in appendixes VI through VIII. We did not receive formal written comments for this report from the Centers for Medicare and Medicaid Services, the National Archives and Records Administration, or the Department of Veterans Affairs, though officials from each confirmed that they generally agreed. Additionally, OMB did not provide formal written comments, but officials from OMB’s Office of General Counsel and Office of Information and Regulatory Affairs confirmed that they generally agreed with the report and that they would take our recommendation into consideration for future action. 
We are sending copies of this report to the Director of the Administrative Office of the U.S. Courts, the Administrator of the Centers for Medicare and Medicaid Services, the Secretary of Defense, the Archivist of the National Archives and Records Administration, the Director of the Office of Management and Budget, the Director of the Office of Personnel Management, the Commissioner of the Social Security Administration, the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov/. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215. See appendix IX for other contacts and staff acknowledgments. To complete the objectives for this assignment, we collected original information at the federal, state, and local government levels. We reviewed previous GAO reports on Social Security numbers (SSN), identity theft, data mining, information security, record linkage and privacy, and related topics. In addition, we reviewed various studies of state SSN use; privacy laws such as the Information Technology Management Reform Act of 1996 (Clinger-Cohen Act), the Privacy Act of 1974, the Freedom of Information Act, the Family Educational Rights and Privacy Act of 1974, the Health Insurance Portability and Accountability Act of 1996, the Computer Matching and Privacy Protection Act of 1988, and the Identity Theft and Assumption Deterrence Act of 1998; several memoranda from the Office of Management and Budget (OMB); and other related documents. We used different methods for selecting units of study and for collecting data for each of the three levels of government reviewed for this engagement. The methods used in the three studies are detailed in the following sections. 
We conducted personal interviews with officials of 10 federal agencies: the Administrative Office of the United States Courts, the Centers for Medicare and Medicaid Services, the Department of Defense, the National Archives and Records Administration, the Office of Management and Budget, the Office of Personnel Management, the President's Council on Integrity and Efficiency, the Social Security Administration, the Department of Agriculture, and the Department of Veterans Affairs. We selected these agencies because they have responsibility for implementing the Privacy Act, the Freedom of Information Act, or programs that are generally known to include SSNs. We obtained information from six federal agencies using an interview guide based on federal privacy laws and OMB's general and/or agency-specific disclosure criteria. The interview guide also addressed each agency's own privacy regulations and practices. Additional queries dovetailed with questions asked at the state and local levels: public access to the agencies' public records that contain SSNs, availability of these records on the Internet, privacy and protecting records about individuals, and other issues unique to the agency. The interview guide has face validity and was used as a talking-point instrument. To gather information about public records that contain SSNs and whether they are made available to the public at the state and local levels, we surveyed state and local government officials in program areas that were determined likely to maintain or collect public records. We developed a list of questions to be used for both surveys, although the state survey was Web-based and the county survey was administered by mail. The survey was designed to address questions about multiple public records while allowing for analysis of specific types of records. 
To this end, we developed a standardized two-page questionnaire, which could be replicated for various types of public records and could be used at both the state and the county level. In addition, the survey design allowed for specific groupings of different types of records to be sent to different respondents. For instance, court clerks were sent forms concerning types of records different from those sent to public health officials. The questionnaire included such items as the accessibility of records displaying SSNs to the public; reasons for collecting or using SSNs in the record; formats for storing records with SSNs; plans for changing those formats; methods by which the public can access or view records with SSNs, including Internet usage; and changes the office had made in the past 2 years in the way records with SSNs are displayed to the public. We selected 35 different types of public records for our review. We developed the list based on research we conducted on public records, the results of a prior GAO survey concerning government use of SSNs, expert reviewers, and pretesting of the survey instrument. These records can be grouped in the following major categories: court records (e.g., records of criminal proceedings, child support/child custody, divorce, etc.); law enforcement records (e.g., criminal arrest warrants, prison records); motor vehicle records; lien and security interest records; vital records (birth, death, and marriage); health records (immunizations and communicable diseases); and an “other” category, which includes professional licensing, military discharge, and social service records, among others. Appendix V includes a full list of the 35 record types. For the development of both the list of record types and the survey questions, we obtained assistance from both internal and external expert reviewers. 
Data gathering for the state level consisted of a Web-based survey of state officials in previously identified program areas or functions in all 50 states and the District of Columbia. These program areas, identified in our previous work on government use of SSNs as likely to maintain public records containing SSNs, included: (1) courts/judiciary, (2) law enforcement, (3) human services, (4) health and vital statistics, (5) selected professional licensing offices, (6) labor, (7) corrections, and (8) public safety. Several steps were involved in the preparation and conduct of the survey. We obtained the names and e-mail addresses of state officials in these offices from a list used for our earlier work and updated names, telephone numbers, and Web addresses using the “Yellow Book” leadership directories available on our intranet. Two contractors made telephone calls to verify the titles and names and to obtain the e-mail address of each of the entries. We sent a notification e-mail to each of the department heads of the selected state agencies informing them about the survey. Within 2 weeks of the notification, we sent an activation e-mail with instructions on how to access and complete the survey, including a survey link and a unique username and password for each selected state official. To increase the response rate, we sent a reminder e-mail after the activation e-mail to those department heads that had not responded; about 2 weeks later, we sent a second e-mail to nonrespondents. Contractors followed up with nonrespondents by telephone about 2 weeks after the second reminder e-mail. The survey was available online from February 23, 2004, to July 31, 2004. We surveyed 542 officials in the 50 states and the District of Columbia and received responses from 338 of the 542 state agencies. In 12 states, we received responses from half or fewer of the agencies surveyed. 
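The state response figures just given (responses from 338 of the 542 officials surveyed) imply an overall response rate of about 62 percent. As a minimal illustration of that arithmetic (the helper name is ours, not GAO's):

```python
# Minimal sketch of the response-rate arithmetic; the helper name is ours,
# and the figures (542 surveyed, 338 responding) come from the report.
def response_rate(responses, surveyed):
    """Response rate as a whole percentage."""
    return round(100 * responses / surveyed)

state_overall = response_rate(338, 542)  # about 62 percent
```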
For three of the program areas, we received responses from about half or fewer of the states with agencies serving those functions. The response rates for three program areas that we believe are most likely to maintain records with information about large proportions of state residents (though not necessarily a majority of residents) were: courts/judiciary, 61 percent; human services, 69 percent; and health services and vital statistics, 60 percent. These compare with our overall response rate of 62 percent. We administered a questionnaire by mail to the study population of local government officials in a two-stage probability sample designed to reflect the population of U.S. counties. This allowed us to generalize to the entire population of U.S. counties and to develop population-based estimates. The program areas were: (1) courts, (2) law enforcement, (3) public health, (4) recorders, (5) social services, (6) tax assessment, and (7) voter registration. We developed a list of contact information with addresses for local government officials using address lists obtained from the National Association of Counties (NACO) and BRB Publications, Inc., as well as the Yellow Book leadership directory and information obtained from state and local government Web sites. Address collection was done between November 2003 and February 2004. To identify incumbent government officials in a sample of counties, we used the address lists of county government officials purchased from BRB Publications and NACO and supplemented them with the name and address list used in our prior work and by searching Web sites and making telephone calls. We sent out packets of questionnaires to 1,996 officials in 200 counties (including minor civil divisions in 24 counties). The number and types of record questionnaires mailed to each official depended on the research team’s judgment of the types of records each official would be likely to maintain given his or her function. 
For example, we sent immunization and communicable disease questionnaires to public health directors. We provided for an “other” type of record that respondents could fill out for records they maintained and for which we did not supply a questionnaire. A few weeks after the initial survey was mailed, we sent a reminder letter without replacement questionnaires. A week after the first reminder letter, we began sending a second wave of reminder letters with replacement questionnaires. While we received a completed survey from at least one program area in each county, the response rates within each county varied from one program area to all program areas. The overall response rate for the county survey was 81 percent. Response rates for each program area were: courts (79 percent), health departments (79 percent), law enforcement (80 percent), recording officials (88 percent), social services (79 percent), tax assessors (85 percent), and voter registrars (86 percent). We developed a U.S. county sample that would reflect the U.S. population and allow us to make estimates related to persons. The probability sample of 200 counties included the 35 counties with the largest populations, selected with certainty, and the remaining counties selected using probability proportional to population size. The measure of size used for sampling was the square root of the 2002 estimated population; the square root function dampens some of the variability in county-level population sizes (which range from about 100 to about 10,000,000). To avoid having a sparsely populated county exert a large influence on population-related estimates, two counties with 2002 estimated populations of less than 150 were combined with neighboring counties, using geographic information from the U.S. Bureau of the Census’s TIGER system; this is the one situation in which the sampling units are not individual counties. 
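The selection procedure described above can be sketched as follows. This is our own illustration, not GAO's sampling code; the county data and function names are invented. It shows only the probability-proportional-to-size step, using the square root of population as the measure of size (the 35 largest counties would be taken with certainty beforehand):

```python
import math
import random

def pps_sample(county_populations, n, seed=0):
    """Systematic probability-proportional-to-size sample of n counties.

    Measure of size = sqrt(estimated population), which dampens the wide
    variability in county sizes, as described in the report.
    """
    rng = random.Random(seed)
    items = [(name, math.sqrt(pop)) for name, pop in county_populations.items()]
    total = sum(size for _, size in items)
    # Pick n equally spaced selection points along the cumulated size scale.
    interval = total / n
    start = rng.uniform(0, interval)
    points = [start + k * interval for k in range(n)]
    chosen, cum, i = [], 0.0, 0
    for p in points:
        # Advance to the county whose cumulative size range contains p.
        while cum + items[i][1] < p:
            cum += items[i][1]
            i += 1
        chosen.append(items[i][0])
    return chosen

# Hypothetical counties (name: population); larger counties are more likely
# to be drawn because their size measure is larger.
sample = pps_sample({"Alpha": 100, "Beta": 400, "Gamma": 900, "Delta": 1600}, 2)
```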
We determined that for the sampled counties in some states (24 counties total: Connecticut (2), Maine (2), Massachusetts (2), Michigan (9), New Hampshire (1), New Jersey (3), and Wisconsin (5)), it was appropriate to survey local government officials within subcounty units called “minor civil divisions” (MCD), because either county governments do not exist (e.g., Connecticut and Massachusetts) or the MCD governments perform many functions that usually are performed by county governments in other areas. This second-stage probability sample consisted of 59 MCDs from the specified set of counties: the 11 MCDs with 2002 estimated populations of 100,000 or more, and 48 MCDs (2 from each of the 24 counties) selected using probability proportional to population size. During the implementation of the survey, we identified respondents who were not in the target population. Table 6 provides a summary of the reasons for these out-of-sample respondents. In implementing our survey, we also had many respondents who completed surveys on behalf of several offices. For instance, in some counties and states where courts are centrally administered, we received just one completed survey on behalf of all courts in a county or in a state. In addition, in some states certain functions, such as social services, are state government programs but are administered by local or regional offices. In some of these cases, we received one completed survey from a state official on behalf of some or all local or regional program offices. The results of the county survey were weighted to make them generalizable to the entire population of U.S. counties. For each stratum, we formed estimates by weighting the data by the reciprocal of the selection probability. The margins of error for the county survey results varied because we made estimates for different subpopulations. 
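The weighting step can be illustrated with a short sketch. This is our own simplified example (invented data and function name), not the report's estimation code: each response is weighted by the reciprocal of its unit's selection probability, so a county sampled with probability 0.1 stands in for about ten similar counties, while a certainty county counts once.

```python
def weighted_proportion(responses):
    """responses: list of (value, selection_probability) pairs, where value
    is 1 if the office reports the practice and 0 otherwise. Returns the
    weighted estimate of the population proportion."""
    # Each unit's weight is the reciprocal of its selection probability.
    weights = [1.0 / p for _, p in responses]
    weighted_sum = sum(v / p for v, p in responses)
    return weighted_sum / sum(weights)

# One certainty county (p = 1.0) reporting the practice, one low-probability
# county not reporting it, and one mid-probability county reporting it.
estimate = weighted_proportion([(1, 1.0), (0, 0.1), (1, 0.5)])  # 3/13
```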
For all the estimates shown here, we are 95 percent confident that when sampling error is considered, the results are within +/- 7 percentage points unless otherwise indicated. All population estimates based on the survey of local government officials are for the target population, defined as local government officials in 7 program areas and courts within a sample of 200 U.S. counties. The survey of local government officials is subject to sampling error; there was no sampling error for the census survey of state officials. The effects of sampling errors, which are due to the selection of a sample from a larger population, can be expressed as confidence intervals based on statistical theory. Sampling errors occur because we use a sample to draw conclusions about a larger population. As a result, the sample was only one of a large number of samples of counties that might have been drawn, and if different samples had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. The 95 percent confidence intervals are expected to include the actual results for 95 percent of samples of this type. We calculated confidence intervals for this sample using methods that are appropriate for the sample design used. For local government survey estimates in this report, we are 95 percent confident that when sampling error is considered, the results we obtained are within +/- 7 percentage points of what we would have obtained if we had surveyed officials in the entire study population, unless otherwise noted. In addition to sampling error, other potential sources of error associated with surveys, such as question misinterpretation, may be present. Nonresponse may also be a source of nonsampling error. We took several steps to reduce these other sources of error. 
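As a rough illustration of where a margin of error of that size comes from, the following sketch computes the half-width of a 95 percent normal-approximation confidence interval for a proportion. This is our own simplified formula, assuming simple random sampling; the report's actual intervals were computed with methods appropriate to the two-stage probability design.

```python
import math

def margin_of_error_95(p_hat, n):
    """Half-width of a 95 percent normal-approximation confidence
    interval for an estimated proportion p_hat based on n sampled units."""
    # 1.96 is the standard normal critical value for 95 percent confidence.
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# A 50 percent estimate from 200 sampled counties gives roughly
# +/- 0.069, i.e., about 7 percentage points.
moe = margin_of_error_95(0.5, 200)
```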
We conducted pretests of the questionnaire, using a paper version, with 11 incumbents of the functions/roles selected for our review in four different counties in three states to account for differing local government structures and to ensure that the questionnaire (1) was clear and unambiguous, (2) did not place undue burden on individuals completing it, and (3) was independent and unbiased. In addition, we conducted seven pretests with state officials in two states. The first six were conducted using a paper version of the questionnaire, and the final test was conducted using the Web version to test for functionality. After pretesting, we modified the questionnaire as needed for clarity, respondent comprehension, and objectivity. Most of the items in the questionnaire were closed-ended but provided for multiple responses to each question, as well as the opportunity for respondent comments regarding public access to records with SSNs. We performed our work in Washington, D.C., from March 2003 through September 2004 in accordance with generally accepted government auditing standards. The overall response rate for the state survey was 62 percent. We received responses from at least one agency in every state. In 37 states and the District of Columbia, we received responses from more than half of the agencies surveyed. With regard to the specific functions we surveyed, we received responses from about half or fewer of the states for three functions: law enforcement, cosmetology licensing, and notary public licensing. For 8 of the 12 functions, we received responses from 60 percent or more of the states. Because of variation in local governments, each type of record is not always maintained by the same type of office across all counties. Furthermore, program areas that are separate in some counties are combined in others. For instance, in many counties the clerk or recorder performs functions, such as voter registration or tax assessment, that operate independently in other counties. 
Because of these variations, it is not always possible to pinpoint which office within a county maintains a particular type of record. In administering our survey to local government officials, unless we had specific information indicating which records are maintained by a specific office within a specific county, we sent the same set of forms (each with the same set of questions, but for a specific type of record) to officials within each program area or function. For example, we sent immunization and communicable disease questionnaires to public health directors. We also provided an “other” record form that all respondents could fill out for records they maintained and for which we did not supply a questionnaire. Table 9 shows the types of forms that were sent to each program area. Because of the variations mentioned above, we received responses about some types of records from more than one program area. Survey responses concerning the following types of records were most often returned by recording officials, but not infrequently by courts: mortgage and real property transfer records, property ownership records, records of property liens and other judgments, Uniform Commercial Code (UCC) filings, divorce records, marriage licenses or applications, hunting and fishing licenses, military discharge and induction records, business or professional licenses, public utility usage records, notary commissions, and vehicle or vessel registrations. The following team members contributed to all aspects of this report: Dennis Gehley, Ron La Due Lake, Joel Marus, Nila Garces Osorio, and John Trubey. In addition, Margaret Armen, Susan Bernstein, Carolyn Boyce, Richard Burkard, Stefanie Bzdusek, Melissa Hinton, Catherine Hurley, Chris Moriarity, and Caroline Sallee made contributions to this report.
While the use of Social Security numbers (SSN) can be very beneficial to the public sector, SSNs are also a key piece of information used in committing identity crimes. The widespread use of SSNs by both the public and private sectors and their display in public records have raised concern over how SSNs might be misused and how they should be protected. In light of this concern, GAO was asked to examine (1) the extent to which SSNs are visible in records made available to the public, (2) the reasons for which governments collect SSNs in records that display them to the public, and (3) the formats in which these records are stored and the ways that the public gains access to them. In addition to looking at public records, GAO also examined the practices of several federal agencies regarding the display of entire nine-digit SSNs on health insurance and other identification cards issued under their authority. Social Security numbers appear in any number of records exposed to public view almost everywhere in the nation, primarily at the state and local levels of government. State agencies in 41 states and the District of Columbia reported visible SSNs in at least one type of record, and a few states have them in as many as 10 or more different records. SSNs are most often found in state and local court records and in local property ownership records, but they are also scattered throughout a variety of other government records. In general, federal agency display of SSNs in public records is prohibited under the Privacy Act of 1974. While the act does not apply to the federal courts, the courts have taken action in recent years to prevent public access to SSNs. With regard to the SSNs maintained in public records, various state and local officials commonly reported needing them for identity verification. A few, however, said they had no use for the SSN but that documents submitted to their offices often contained them. 
States also commonly reported using the SSN to facilitate the matching of information from one record to another. The federal courts largely collect SSNs when required by law to do so; however, because of privacy concerns, SSNs are not in documents that are available electronically to the public. Public records with SSNs are stored in a multiplicity of formats, but public access to them is most often limited to the inspection of individual paper copies on site or by mail upon request. Few state agencies make records with SSNs available on the Internet; however, 15 to 28 percent of the nation's 3,141 counties do place them on the Internet, and this could affect millions of people. Overall, GAO found that the risk of exposure for SSNs in public records at the state and local levels is highly variable and difficult for any one individual to anticipate or prevent. Another form of SSN exposure results from a government practice that does not involve public records per se. GAO found that SSNs are displayed on cards issued to millions of individuals under the authority of federal agencies for identity purposes and health benefits. This involves approximately 42 million Medicare cards, 8 million Department of Defense identification cards and some insurance cards, and 7 million Veterans Affairs identification cards, all of which display the full nine-digit SSN. While some of these agencies are taking steps to remove the SSNs, there is no governmentwide federal policy that prohibits their display. Although we did not examine this phenomenon across all federal programs, it is clear that the lack of a broad, uniform policy allows for unnecessary exposure of personal Social Security numbers.
Money laundering, which is the disguising or concealing of illicit income in order to make it appear legitimate, is a problem of international proportions. Federal law enforcement officials estimate that between $100 billion and $300 billion in U.S. currency is laundered each year. Numerous U.S. agencies play a role in combating money laundering. Law enforcement agencies within the Departments of Justice and the Treasury have the greatest involvement in domestic and international money-laundering investigations. FRB and OCC have the primary responsibility for examining and supervising the overseas branches of U.S. banks to ascertain the adequacy of the branches’ anti-money-laundering controls. FinCEN provides governmentwide intelligence and analysis that federal, state, local, and foreign law enforcement agencies can use to aid in the detection, investigation, and prosecution of domestic and international money laundering and other financial crimes. In addition, other U.S. agencies play a role, including the State Department, which provides information on international money laundering through its annual assessment of narcotics and money-laundering problems worldwide. Until recently, U.S. banking regulators’ anti-money-laundering efforts relied heavily on regulations requiring financial institutions to routinely report currency transactions that exceed $10,000, primarily through filing currency transaction reports (CTR) with the IRS. U.S. banking regulators have also relied on approaches in which financial institutions report financial transactions involving known or suspected money laundering. According to a senior Treasury official, U.S. regulators’ anti-money-laundering efforts in coming years are expected to rely more on the reporting of financial transactions involving known or suspected money laundering. U.S. regulators will also be expected to continue relying on CTRs, but to a lesser extent. Most U.S. 
banks have adopted so-called “know your customer” policies over the past few years to help them improve their identification of financial transactions involving known or suspected money laundering, according to the American Bankers Association. Under these know-your-customer policies, which are currently voluntary but which the Treasury plans to make mandatory in 1996, financial institutions are to verify the business of a new account holder and report any activity that is inconsistent with that type of business. According to the American Bankers Association, these policies are among the most effective means of combating money laundering, and the majority of banks have already adopted them. The seven European countries we visited have tended to model their anti-money-laundering measures after a 1991 European Union (EU) Directive that established requirements for financial institutions similar to those that financial institutions conducting business in the United States must follow. However, instead of relying on the routine reports of currency transactions that the United States has traditionally emphasized, European countries have tended to rely more on suspicious transaction reports and on know-your-customer policies. These know-your-customer policies are somewhat more comprehensive than comparable U.S. ones, according to European bank and regulatory officials. While Hungary and Poland have adopted anti-money-laundering measures following the EU Directive, banking and government officials in these two countries told us that the implementation and enforcement of their anti-money-laundering measures have been hindered. They attributed the problems to such factors as resource shortages, inexperience in detection and prevention, and, in Poland, conflicts between bank secrecy laws and recently adopted anti-money-laundering statutes. 
FinCEN and INTERPOL have recently initiated Project Eastwash to assess money laundering in 20 to 30 countries throughout East and Central Europe and the former Soviet Union. According to FinCEN officials, as of late 1995, on-site visits had been made to five countries to assess the law enforcement, regulatory, legislative, and financial industry environment in each nation. Information from these visits is to be used for policy guidance and resource planning purposes for both the countries assessed and U.S. and international anti-money-laundering organizations, according to these officials. U.S. banks had over 380 overseas branches located in 68 countries as of August 1995. These branches, which are a direct extension of U.S. banks, are subject to host countries’ anti-money-laundering laws rather than U.S. anti-money-laundering laws, according to OCC and FRB officials. In some cases, U.S. banking regulators have not been allowed to perform on-site reviews of these branches’ anti-money-laundering controls. According to U.S. banking regulators, bank privacy and data protection laws in some countries serve to prevent U.S. regulators from examining U.S. bank branches located within their borders. Of the seven European countries we visited, Switzerland and France did not allow U.S. regulators to examine branches of U.S. banks because of those countries’ strict bank secrecy and data protection laws. U.S. regulators, however, have other means besides on-site examinations for obtaining information on U.S. overseas branches’ anti-money-laundering controls, according to FRB and OCC officials. For example, U.S. regulators can and do exchange information (excluding information requested for law enforcement purposes) with foreign banking regulators on their respective examinations of one another’s foreign-based branches. 
In addition, FRB can deny a bank’s application to open a branch in a country with strict bank secrecy laws if it does not receive assurance that the branch will have sufficient anti-money-laundering controls in place, according to FRB officials. OCC and FRB officials said that in countries that allow them to examine anti-money-laundering controls at overseas branches of U.S. banks, such examinations are of a much narrower scope than those of branches located in the United States. One reason is that host countries’ anti-money-laundering measures may not be as stringent as U.S. anti-money-laundering requirements and, thus, may not provide the necessary information for U.S. examiners. OCC and FRB officials also said that the expense of sending examiners overseas limits the amount of time examiners can spend reviewing a bank’s anti-money-laundering controls. However, according to these officials, less time is needed to conduct an anti-money-laundering examination at some overseas branches because of the small volume of currency transactions. FRB officials told us that they have recently developed money-laundering examination procedures to be used by FRB examiners to address the uniqueness of overseas branches’ operations and to fit within the short time frames of these examinations. Although these procedures have been tested, they have not been implemented, and, thus, we have not had the chance to review them. Responsibilities for investigating both domestic and international crimes involving money laundering are assigned to numerous U.S. law enforcement agencies, including DEA, FBI, IRS, and the Customs Service. While European law enforcement officials acknowledged the important role U.S. law enforcement agencies play in criminal investigations involving money laundering, some commented about the difficulties of dealing with multiple agencies. Some British and Swiss law enforcement officials we spoke with said that too many U.S. 
agencies are involved in money-laundering inquiries. This overlap makes it difficult, in some money-laundering inquiries, to determine which U.S. agency they should coordinate with. These European officials indicated that designating a single U.S. office to serve as a liaison on these money-laundering cases would improve coordination. According to U.S. law enforcement agency officials, however, designating a single U.S. law enforcement agency as a focal point on overseas money-laundering cases could pose a jurisdictional problem because money-laundering cases are usually part of an overall investigation of another crime, such as drug trafficking or financial fraud. Nevertheless, U.S. law enforcement agencies have taken recent steps to address overseas money-laundering coordination. In particular, a number of U.S. agencies adopted a Memorandum of Understanding (MOU) in July 1994 on how to assign responsibility for international drug money-laundering investigations. Law enforcement officials were optimistic that the MOU, which was signed by representatives of the Secretary of the Treasury, the Attorney General, and the Postmaster General, would improve overseas anti-money-laundering coordination. Although law enforcement is optimistic about improvements in coordination, we have not assessed how well U.S. international investigations are being coordinated. The United States works with other countries through multilateral and bilateral treaties and arrangements to establish global anti-money-laundering policies, enhance cooperation, and facilitate the exchange of information on money-laundering investigations. The United States’ multilateral efforts to establish global anti-money-laundering policies occur mainly through FATF, an organization established at the 1989 economic summit meeting in Paris of major industrialized countries. The United States, through the Treasury Under Secretary for Enforcement, assumed the presidency of FATF in July 1995 for a one-year term. 
FATF has worked to persuade both member and nonmember countries to institute effective anti-money-laundering measures and controls. In 1990, FATF developed 40 recommendations that describe measures that countries should adopt to control money laundering through financial institutions and improve international cooperation in money-laundering investigations. During 1995, FATF completed its first round of mutual evaluations of its members’ progress on implementing the 40 recommendations. FATF found that most member countries have made satisfactory progress in carrying out the recommendations, especially in the area of establishing money-laundering controls at financial institutions. FATF has also continued to identify global money-laundering trends and techniques, including conducting surveys of Russia’s organized crime and Central and East European countries’ anti-money-laundering efforts. In addition, FATF has expanded its outreach efforts by cooperating with other international organizations, such as the International Monetary Fund, and by attempting to involve nonmember countries in Asia, South America, Russia, and other parts of the world. A more recent multilateral effort involved the United States and other countries in the Western Hemisphere. On December 9-11, 1994, the 34 democratically elected leaders of the Western Hemisphere met at the Summit of the Americas in Miami, Florida. At the summit, the leaders signed a Declaration of Principles that included a commitment to fight drug trafficking and money laundering. The summit documents also included a detailed plan of action to which the leaders affirmed their commitment. One action item called for a working-level conference on money laundering, to be followed by a ministerial conference, to study and agree on a coordinated hemispheric response to combat money laundering. 
The ministerial conference, held on December 1-2, 1995, at Buenos Aires, Argentina, represented the beginning of a series of actions each country committed to undertake in the legal, regulatory, and law enforcement areas. U.S. Department of Justice officials told us that these actions are designed to establish an effective anti-money-laundering program to combat money laundering on a hemispheric basis. Further, the officials told us that the conference created an awareness that money laundering is not only a law enforcement issue, but also a financial and economic issue, requiring a coordinated interagency approach. As part of another multilateral effort, FinCEN is working with other countries to develop and implement Financial Information Units (FIU) modeled, in large part, on FinCEN operations, according to FinCEN officials. FinCEN has also met with officials from other countries’ FIUs to discuss issues common to FIUs worldwide. The most recent meeting was held in Paris in November 1995, during which issue-specific working groups were created to address common concerns such as use of technology and legal matters on exchanging intelligence information. U.S. Treasury officials said that in recent years, the United States has relied on bilateral agreements to improve cooperation in international investigations, prosecutions, and forfeiture actions involving money laundering. These bilateral agreements, consisting of mutual legal assistance treaties, financial information exchange agreements, and customs mutual assistance agreements with individual countries, also help to facilitate information exchanges on criminal investigations that may involve money laundering. However, the State Department’s 1995 annual report on global narcotics crime concluded that many countries still refuse to share with other governments information about financial transactions that could facilitate global money-laundering investigations. Mr. Chairman, this concludes my prepared statement. 
I would be pleased to try to answer any questions you or the Committee may have.
GAO discussed U.S. efforts to combat money laundering abroad. GAO noted that: (1) U.S. bank regulators rely on financial institutions' reporting currency transactions that exceed $10,000, involve known or suspected money laundering, or are inconsistent with the account holder's stated business; (2) European countries focus their anti-laundering efforts less on routine currency transaction reports and more on reports of suspicious activities; (3) host countries' anti-laundering and bank privacy and protection laws, to which overseas branches of U.S. banks must adhere, sometimes hinder U.S. bank regulators' reviews of overseas branches, and examinations of overseas banks tend to be more narrowly scoped; (4) while European law enforcement officials acknowledged the important role of several U.S. law enforcement agencies in anti-laundering activities, they also indicated that it was difficult to determine which U.S. agency they should coordinate efforts with; and (5) the United States works with other countries through multilateral and bilateral treaties and arrangements to establish global anti-laundering policies, enhance cooperation, and facilitate the exchange of information on money-laundering investigations.
Corrosion, if left unchecked, can degrade the readiness and safety of equipment and has been estimated to cost DOD billions of dollars annually. Using fiscal year 2006 data, DOD noted that it spends approximately $80 billion each year to maintain its ships, aircraft, strategic missiles, and ground combat and tactical vehicles. Corrosion-related costs of equipment maintenance were estimated to total $19.4 billion each year, or 24 percent of the total cost of maintenance. In addition, DOD spends approximately $10 billion to maintain about 577,000 buildings and structures at more than 5,300 sites worldwide. Approximately $1.9 billion, or 11.7 percent, of these maintenance costs were estimated to be related to corrosion. The Director of the Corrosion Office is responsible for the prevention and mitigation of corrosion of DOD equipment and infrastructure. The Director’s duties include developing and recommending policy guidance on the prevention and mitigation of corrosion to be issued by the Secretary of Defense, reviewing the CPC programs and funding levels proposed by the Secretary of each military department during the annual internal DOD budget review process, and submitting recommendations to the Secretary of Defense regarding those programs and proposed funding levels. In practice, this review includes the process of selecting projects proposed by the military departments for funding. In addition, the Director leads the CPC Integrated Product Team, which is composed of representatives from the military departments and works to accomplish the goals and objectives of the Corrosion Office; it includes the seven Working Integrated Product Teams (Product Teams) that implement CPC activities. These seven Product Teams are: policy and requirements; metrics, impact, and sustainment; specifications, standards, and product qualification; training and certification; communications and outreach; science and technology; and facilities. 
Until fiscal year 2011, the Corrosion Office consisted of the Director and contractor support. The Director told us that 4 full-time staff were expected to be hired in early fiscal year 2011. The Corrosion Office funds projects and activities aimed at preventing and mitigating corrosion. Projects are specific CPC efforts with the objective of developing and testing new technologies. To receive Corrosion Office funding, the military departments submit project proposals that are evaluated by a panel of experts assembled by the Director of the Corrosion Office. The Corrosion Office currently funds up to $500,000 per project, and the military departments pledge complementary funding for each project they propose. The level of military department funding and the estimated ROI are two of the criteria used to evaluate the project proposals. (See app. II for examples of CPC projects.) Activities encompass efforts, such as training and cost studies, to enhance and institutionalize CPC efforts within DOD. These activities are coordinated through the seven Product Teams discussed above. Product Team representatives told us that funding for these activities is centrally coordinated through the Corrosion Office in consultation with the Product Teams. According to the Corrosion Office, constrained budgets and competing requirements to support worldwide military operations have precluded the full funding of CPC projects that have met the requirements for funding. In April 2010, we reported on the funding available to the Corrosion Office for projects and activities. For fiscal years 2005 through 2010, the Corrosion Office accepted 271 CPC projects with funding requests totaling $206 million, but DOD provided $129 million, or 63 percent of the funding required for the Corrosion Office to fund all 271 projects. As a result, the Corrosion Office funded 169 CPC projects over this 6-year period. 
As represented in Figure 1, the historical funding rates for CPC projects have fluctuated during fiscal years 2005 through 2010. During the same 6-year period, the Corrosion Office also funded a total of $26 million in corrosion-related activities such as training, outreach, and costs of corrosion studies. In April 2010, we reported that the CPC requirements for fiscal year 2011 totaled $47 million, but the fiscal year 2011 budget identified $12 million for CPC, leaving an unfunded requirement of about $35 million. Additionally, we reported that the funding level identified in the fiscal year 2011 budget request could result in a potential cost avoidance of $418 million. Similarly, multiplying the average estimated ROI by the amount of the unfunded requirements shows that DOD may be missing an opportunity for additional cost avoidance totaling $1.4 billion by not funding all of its estimated CPC requirements. Both calculations are highly contingent on the accuracy of the estimated ROIs, which have not been validated by the military departments. (See the Related GAO Products section at the end of this report for a full listing of our reports on DOD’s CPC program.) The acceptance of military departments’ CPC project proposals varied relative to the nature of review—if any—that the Corrosion Executives required before proposals were submitted to the Corrosion Office for funding consideration. The military departments have established Corrosion Executives to oversee CPC efforts, but their level of oversight varies. The Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 requires the Corrosion Executive of each military department to serve as the principal point of contact between the military department and the Director of the Corrosion Office. 
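The cost-avoidance figures above follow from a simple multiplication of a funding level by the average estimated ROI. A minimal sketch of that arithmetic follows; the roughly 40:1 average ROI is inferred from the $35 million unfunded requirement and the $1.4 billion figure, since the report does not state the average directly:

```python
# Sketch of the cost-avoidance arithmetic: funding level multiplied by
# the average estimated ROI. The 40:1 ROI below is an inference from
# the report's figures, not a value the report states.

def cost_avoidance(funding_dollars, estimated_roi):
    """Potential cost avoidance = funding level x average estimated ROI."""
    return funding_dollars * estimated_roi

# Unfunded fiscal year 2011 requirement of about $35 million at an
# inferred average estimated ROI of roughly 40:1:
missed = cost_avoidance(35e6, 40)
print(f"${missed / 1e9:.1f} billion")  # $1.4 billion
```

As the report cautions, both calculations are only as reliable as the estimated ROIs, which had not been validated by the military departments.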
It also requires each Corrosion Executive to submit an annual report to the Secretary of Defense containing recommendations pertaining to the military department’s CPC program, including corrosion-related funding levels necessary to carry out all the Corrosion Executive’s duties. In addition, DOD Instruction 5000.67, Prevention and Mitigation of Corrosion on DOD Military Equipment and Infrastructure, which was updated in February 2010, reflects certain legislative requirements and provides Corrosion Executives with responsibility for certain CPC activities in their military department. It requires the Corrosion Executives to submit CPC project proposals to the Corrosion Office with coordination through the proper military department chain of command, as well as to develop and support an effective CPC program in their military department, evaluate the CPC program’s effectiveness, serve as the principal point of contact with the Corrosion Office, and establish a process to review and evaluate the adequacy of CPC planning. We have reported that a key factor in helping achieve an organization’s mission and program results and minimize operational problems is to implement appropriate internal control. Effective internal control also helps in managing change to cope with shifting environments and evolving demands and priorities. Control activities, such as the policies, procedures, techniques, and mechanisms that enforce management’s directives, are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. For an entity to run and control its operations, it must also have relevant, reliable, and timely communications relating to internal as well as external events. 
During the annual process of identifying and submitting CPC project proposals for funding consideration, each Corrosion Executive exercises a different level of review prior to submission of the proposals to the Corrosion Office. For example, the Army and Navy Corrosion Executives organized and directed a review of their department’s project proposals prior to submitting them to the Corrosion Office for fiscal year 2011 CPC funding, but the Air Force Corrosion Executive’s preliminary oversight was more limited. The Army Corrosion Executive requested the various Army commands to submit abbreviated project proposals 5 weeks prior to the application deadline set by the Corrosion Office. Individuals nominated by the Army commands then reviewed these abbreviated proposals by using criteria the Army adapted from the project selection evaluation charts included in DOD’s Corrosion Prevention and Mitigation Strategic Plan. The Corrosion Executive’s office provided the results from this internal peer review to the authors of the proposed projects, so that comments obtained from the review could be incorporated into the project proposals before the Corrosion Executive submitted the projects to the Corrosion Office. Army staff told us that some authors withdrew their project proposals following this review, based on the feedback they received. The Navy Corrosion Executive directed a similar review process, requiring that a one-page synopsis of each project proposal be prepared and submitted to him 7 weeks prior to the Corrosion Office deadline. The Corrosion Executive assembled a panel with members from each of the Navy’s system commands to review the synopses. Specifically, individuals from each system command reviewed and scored the synopses submitted by the other commands, based on the synopses’ alignment with the Navy’s priorities and the estimated ROI. The Navy Corrosion Executive then ranked the synopses based on the aggregate scores received from each reviewer. 
A Navy project manager told us that a low ranking did not preclude a project from being submitted to the Corrosion Office; the Navy Corrosion Executive did not discourage the managers of low-ranked projects from submitting their full proposals for funding consideration. We found that the Air Force Corrosion Executive did not direct a similar level of review and feedback for project proposals before they were submitted to the Corrosion Office for fiscal year 2011 funding. The Air Force Corrosion Executive requested that the Air Force major commands submit project proposals to his office prior to submitting project proposals to the Corrosion Office. However, the Air Force Corrosion Executive did not establish a process to review the proposals and provide preliminary feedback for revising them before submission to the Corrosion Office. The Air Force Corrosion Executive told us that he did not conduct a review of the proposals because, due to the historically low rate of Air Force CPC projects accepted for funding, he thought it was appropriate to submit all of the Air Force proposals to the Corrosion Office. He also said that, since the Corrosion Office is more familiar with the criteria used to judge the proposals, he did not want to reject any project proposals. According to a member of the Corrosion Office’s project selection panel, the additional steps taken by the Army and Navy Corrosion Executives to ensure that their military department’s proposals met the panel’s criteria were contributing factors for a higher acceptance rate for Army and Navy proposals. The project selection panel found during the preliminary evaluation step of the proposal selection process that 66 percent of the Army project proposals and 61 percent of the Navy project proposals submitted for fiscal year 2011 funding were acceptable in their current form, while only 11 percent of the Air Force projects were considered acceptable (see table 1). 
The panel member also told us that the Army and Navy fiscal year 2011 proposals were more complete and more effectively addressed the selection criteria than those submitted by the Air Force. For example, most of the Air Force project proposals lacked information needed for the project selection panel to judge the merits of the proposal. The panel’s feedback to the authors of the Air Force project proposals highlighted areas where the information provided was insufficient or incomplete: the project managers did not follow the project proposal template in the DOD Corrosion Prevention and Mitigation Strategic Plan, which includes topics to be addressed in project proposals; the contents of the project proposals did not explain the technology demonstration aspects of the project; or the project proposals did not include information on matching funds that would be provided by the Air Force. The project selection panel also concluded that most of the Air Force’s fiscal year 2011 project proposals were requests for replacement funds, rather than the technology demonstrations that the Corrosion Office’s CPC program is intended to support. Selection panel members questioned whether the Air Force Corrosion Executive had reviewed the proposals, because these deficiencies were not identified and corrected prior to submitting the project proposals to the Corrosion Office for funding consideration. For fiscal year 2011, the Corrosion Office used a rigorous multistep process to review and select CPC project proposals that were acceptable for funding; however, some military department personnel involved in the process did not clearly understand the criteria used to select projects for funding. A project selection panel reviewed submitted project proposals from each military department at two different times. For the preliminary review, the panel used a set of criteria that is different from those used for final project selection later in the process. 
For the final review, the panel used criteria that are found in the DOD Corrosion Prevention and Mitigation Strategic Plan but not explicitly identified as the specific criteria used to evaluate CPC projects. Corrosion Executives and several authors of the project proposals told us they were not clear on what the criteria were or when they were used. For the fiscal year 2011 project review and selection, we observed that the Corrosion Office used a rigorous multistep process to determine if proposed projects were acceptable for funding. Step 1: In mid-June 2010, the military departments submitted 81 CPC project proposals to the Corrosion Office, as shown in table 1 above. At this point, Corrosion Office support staff assembled the project plans into binders for review by the project selection panel convened by the Director of the Corrosion Office. The fiscal year 2011 panel had five members: the Director, Corrosion Office (chair); Associate Director, Materials and Structures, Office of the Director, Defense Research & Engineering (vice-chair); and an official from each of the following organizations within the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics): Defense Acquisition University; Installations and Environment; and Logistics and Materiel Readiness, Maintenance Policy and Programs. Step 2: In mid-July 2010, 2 weeks after project information was provided to the panel, the panel members assembled for their preliminary evaluation of the proposals. This preliminary evaluation, which we observed, was conducted at a meeting immediately prior to the annual DOD Corrosion Forum and resulted in projects being designated as either a “go” (meaning that the projects are deemed acceptable in their current form) or a “no go” (meaning that the projects require additional information or changes in scope to be acceptable to the panel). 
We observed that the panel used criteria for this preliminary evaluation that are not made available to the submitters of project proposals and are different from those used for final project selection later in the process. Step 3: Following the preliminary evaluation and during the Corrosion Forum, the panel held individual feedback sessions with project managers from the military commands, such as Naval Air Systems Command, Army Aviation and Missile Command, and Air Force Civil Engineer Support Agency, so that feedback could be provided in person. The panel provided feedback on each project, regardless of whether it was designated as a “go” or “no go.” A panel member told us that the panel provided feedback on all projects so that project managers could address—if they chose to do so—any perceived weaknesses in their “go” projects and improve their ranking in the final evaluation, as well as revise the “no go” project submissions. Following the feedback, the project managers had three options: prepare and submit information addressing the feedback provided by the panel, resubmit project proposals in their original form, or remove projects from consideration for that year’s funding process. Project managers told us that they sometimes decide to remove their “no go” projects from consideration and that the military departments may implement such projects using other funding. A project selection panel member told us that if a project manager decided to modify a project proposal to address the panel’s feedback, this modified proposal was due to the Corrosion Office no later than 2 weeks after the feedback session. Upon receipt of any revised proposals, the panel conducted another review of all proposals (original and resubmitted), which involved each panel member independently scoring the projects on judgmental criteria and providing written comments. 
Step 4: In mid-August 2010, Corrosion Office support staff used an analytical tool to rank the projects based on the average of the scores recorded by each panel member for eight criteria: the five judgmental criteria above and three quantitative criteria—ROI, Corrosion Office funding as a percentage of total project cost, and the project performance, or implementation, period. Step 5: Following the ranking of projects using the analytical tool, the selection panel reconvened for a final evaluation of the projects. The panel arranged the ranked list that resulted from the analytical tool described above into four categories: best, acceptable–prioritized for funding, acceptable–not prioritized, and not acceptable. According to the staff, the “best” projects would likely all be funded, and the “acceptable–prioritized for funding” projects would be funded in priority order until the Corrosion Office funding was exhausted. Corrosion Office support staff informed the panel that, based on historical funding levels, they anticipated having $7 million in available funding for CPC projects in fiscal year 2011. The panel identified 30 of the 53 accepted projects that it anticipated would be funded following completion of DOD’s fiscal year 2011 budget process. These 30 projects included the 20 projects categorized as “best” and 10 projects in the “acceptable–prioritized for funding” category. We observed that the panel then reviewed the projects that were within the anticipated funding level to ensure a balance between the number of facilities and weapons projects identified for funding. In the meeting we observed, no adjustments to the final ranking were necessary to ensure this balance. Corrosion Office officials told us that projects are evaluated based on the eight criteria that they believed were clearly listed in the DOD Corrosion Prevention and Mitigation Strategic Plan (and discussed above), yet some project managers told us they were unaware of these criteria. 
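The ranking in Step 4 amounts to averaging panel members' scores across the criteria and sorting projects by that average. A rough sketch follows; the equal weighting of scores and the project names and values are illustrative assumptions, since the report does not describe the analytical tool's internal mechanics:

```python
# Hypothetical sketch of the Step 4 ranking: each panel member scores
# every project on the criteria, and projects are ranked by the average
# of all recorded scores. Equal weighting is an assumption.

def rank_projects(scores):
    """scores maps each project to a flat list of all panel members'
    scores on all criteria; returns names sorted best to worst."""
    averages = {p: sum(s) / len(s) for p, s in scores.items()}
    return sorted(averages, key=averages.get, reverse=True)

panel_scores = {  # illustrative values only
    "Project A": [8, 9, 7, 8],
    "Project B": [6, 5, 7, 6],
    "Project C": [9, 9, 8, 9],
}
print(rank_projects(panel_scores))  # ['Project C', 'Project A', 'Project B']
```

In Step 5, the resulting ranked list was then partitioned into the four categories described above before funding decisions were made.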
We have previously reported that a key business practice for performance management is the early and direct involvement of stakeholders. We have also reported that leading results-oriented organizations believe strategic planning is not a static or occasional event but rather a dynamic and inclusive process. For example, we noted that stakeholder involvement is important to help agencies ensure that their efforts and resources are targeted at the highest priorities. We found that some military department stakeholders—including the Corrosion Executives and project managers who submit project proposals—had limited familiarity with the criteria to evaluate projects for CPC funding. As described above, the selection panel used a different set of criteria to make the preliminary “go/no-go” decision than the set used for the final evaluation and decision. Corrosion Office officials told us that they believed these criteria were clearly listed in the DOD Corrosion Prevention and Mitigation Strategic Plan, but we found that only some of the criteria used to evaluate CPC project proposals were clearly identified in the Strategic Plan. Further, the criteria identified by the Corrosion Office officials were grouped in the Strategic Plan with other criteria not used for the project selection process. Two of the six project managers with whom we met told us that they were unfamiliar with the criteria used to assess CPC projects. The other four project managers said that they became familiar with the criteria by attending the DOD Corrosion Forums, discussing projects with the panel during previous years’ feedback sessions, or learning about the criteria from other project managers—not by reading the DOD Corrosion Prevention and Mitigation Strategic Plan. Some project managers told us that project managers who are new to the process of applying for CPC funding would have difficulty understanding the criteria sufficiently to prepare a successful project proposal. 
Also, the Corrosion Executives told us that they were unfamiliar with the criteria used by the project selection panel to prioritize projects for funding. For example, the Air Force Corrosion Executive told us that he did not review CPC projects prior to submitting them to the Corrosion Office for funding consideration because he was not sufficiently familiar with the criteria used by the Corrosion Office to select projects. During our observations of the project selection panel process, we identified several conditions that show that communication between the Corrosion Office and the military department stakeholders is not as clear as it could be. Criteria used for project selection are not clearly identified in the Corrosion Prevention and Mitigation Strategic Plan. The Strategic Plan includes an attachment with seven project assessment charts that the Strategic Plan states are “not to be filled out and submitted” with the project proposal and “will not be used to score projects, although they may be used as a guide” for the preliminary and final project evaluations. However, we observed the project selection panel using one of the topics described in the assessment charts (ROI) to make project acceptance decisions. Further, it appeared that certain criteria were more important for project acceptance than others, even though this difference in importance was not identified in the Strategic Plan. For example, during the project selection meetings we observed, the proposed projects’ estimated ROI appeared to be a very important criterion in the panel’s decision-making process. Also, we observed that the ratio of funding requested from the Corrosion Office to that provided by the military department was often cited by the project selection panel as a reason for scoring a project higher or lower, even though the Strategic Plan does not explicitly mention this criterion. 
The panel also assessed some projects using criteria that were not listed in the Corrosion Prevention and Mitigation Strategic Plan. Specifically, the extent to which past projects had used similar technology and the extent to which a proposed project’s location previously experienced difficulties with project implementation both factored in part into the selection panel’s decisions about whether to accept projects for funding, even though these criteria are not listed in the Strategic Plan. The project selection process did not incorporate the priorities of the military departments, even though the Navy provided this information to the panel for the fiscal year 2011 selection process. Corrosion Executives and project managers told us they believed that it was appropriate for the project selection panel to consider the priorities of the military departments, as each department was required to provide matching funds for proposed projects. However, a selection panel member and Corrosion Office officials told us that they disagreed with this view, and added that the CPC program was intended as a technology demonstration program with the goal of awarding funds to the most competitive projects, regardless of department priorities. The military department stakeholders’ limited knowledge and understanding of the selection criteria could be a challenge for the Corrosion Office in accomplishing the stated purpose of the Strategic Plan to articulate policies, strategies, objectives, and plans that will ensure an effective, standardized, affordable DOD-wide approach to prevent, detect, and treat corrosion and its effects on military equipment and infrastructure. This situation makes it difficult for stakeholders to craft effective project proposals because they are unsure about the criteria that the project selection panel uses to make decisions on which projects to accept for funding. 
The military departments have completed a third of their required ROI validations for projects funded in fiscal year 2005, but completion of the remaining projects’ validations for that year is behind schedule. Guidance in the DOD Corrosion Prevention and Mitigation Strategic Plan describes the steps to be taken to initially estimate the ROIs for CPC projects submitted for funding by the Corrosion Office. These estimation steps include (1) calculating the project costs—such as up-front investment costs and operating and support costs, (2) calculating the benefits that are expected to result from the project—such as reduction of costs like maintenance hours and inventory costs, and (3) calculating the net present value of the annual costs and benefits over the projected service life of the proposed technology. The DOD Corrosion Prevention and Mitigation Strategic Plan notes that follow-on reviews of completed projects are required and that the reviews are to focus on validating the project’s ROI. Corrosion Office officials told us that because the CPC projects are generally funded for 2 years of implementation and ROI validations are required within 3 years of completing the project’s implementation, reviews for projects funded in fiscal year 2005 are due by the end of fiscal year 2010. The ROI validations consist of reviewing assumptions used earlier in computing the estimated ROI; updating the costs and benefits associated with the new technology resulting from the project; recalculating the ROI based on validated data; and providing an assessment of the difference, if any, between the estimated ROI and the validated ROI. The military departments have completed these reviews, including the ROI validations, for 10 (36 percent) of the 28 implemented projects funded in fiscal year 2005. For these 10 projects, the average ROI ratio was validated as 12:1, slightly higher than the average estimated ROI of 11:1 for these projects when they were originally proposed. 
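The three estimation steps described above amount to comparing the net present value of a project's benefits against its costs over the technology's projected service life. A minimal sketch follows; the discount rate, the cash-flow figures, and the benefits-to-costs ratio form of the ROI are illustrative assumptions, not the Strategic Plan's prescribed formula:

```python
# Sketch of the ROI estimation steps: discount annual costs and
# benefits over the service life, then take their ratio.
# The 3 percent discount rate and cash flows are hypothetical.

def npv(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def estimated_roi(annual_benefits, annual_costs, rate=0.03):
    """ROI as discounted benefits per discounted dollar of cost."""
    return npv(annual_benefits, rate) / npv(annual_costs, rate)

# A project with a $500,000 up-front cost and $1,000,000 in benefit
# each year over a 10-year service life:
benefits = [0] + [1_000_000] * 10
costs = [500_000] + [0] * 10
print(f"{estimated_roi(benefits, costs):.0f}:1")  # 17:1
```

The validation step then repeats this calculation with actual, rather than projected, costs and benefits and reports the difference between the estimated and validated ratios.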
While the agreement between the average estimated and validated ROIs is encouraging, the small number of projects—overall and by type of project—does not allow these findings to be generalized. Nine of these ten CPC projects with validated ROIs were focused on corrosion in facilities, and facilities projects accepted by the Corrosion Office for funding have historically had lower estimated ROIs than CPC equipment projects. Specifically, for CPC projects funded in fiscal year 2005, the facilities projects had an estimated average ROI of 13:1, while the equipment projects had an estimated average ROI of 67:1. Figure 2 shows the estimated average ROIs for projects funded in fiscal years 2005 through 2010. Both Corrosion Office and military department officials conceded that they are behind schedule on completing ROI validations for fiscal year 2005 projects. Army and Navy corrosion officials told us that, because CPC funding is awarded for a 2-year project implementation period, they typically do not have sufficient funds remaining for validating the ROI after projects are implemented. However, the Army group that conducts CPC projects for facilities has completed 8 of its 9 required ROI validations for projects funded in fiscal year 2005. According to an Army official, this group has historically been allocated $5 million annually for CPC activities. The Corrosion Office Director told us they are aware of the military departments’ difficulties in completing the validations and are considering budgeting DOD-wide CPC funds for ROI validation. If this action is taken, funding would go to the Product Team responsible for CPC metrics for the team to allocate to ensure completion of the validations. Because the military departments have not completed the required validations of ROI estimates, DOD and the military departments are unable to fully demonstrate the costs and benefits of the CPC projects. 
One project selection panel member told us that the lack of completed ROI validations makes it more difficult for the panel to make decisions about how to change project selection criteria to invest limited funds in the types of projects with the greatest benefits. Moreover, continued reliance on limited evaluative data prevents DOD from making better-informed decisions about the amount of funding for the Corrosion Office’s CPC program, as well as where best to invest CPC funds. The Corrosion Office has created seven Product Teams to propose and implement DOD-wide CPC activities in seven areas, as discussed earlier. Using volunteers from the military departments, the Product Teams propose activities, such as determining the costs of corrosion, which are then selected for funding. In the past, Product Team members served on an informal voluntary basis with little involvement from the military departments. However, now that each department has a Corrosion Executive, the process for selecting the Product Teams’ members is changing. According to a Product Team member, the Product Teams convene during the DOD Corrosion Forums held twice each year and coordinate activities by email and through the Corrosion Office Web site during the rest of the year. For example, at the July 2010 DOD Corrosion Forum that we observed, the Product Teams presented their activities to the attendees, discussed their progress on the activities, and prepared a set of goals for actions to be completed before the next Corrosion Forum. The Product Teams’ action plans are included in the DOD Corrosion Prevention and Mitigation Strategic Plan and are updated annually. The Product Teams are staffed by representatives from the military departments, and Corrosion Office staff and the Product Team representatives told us that an informal process is used to fund the CPC activities implemented by the Product Teams. 
Specifically, each year the Director of the Corrosion Office asks the Product Team chairs to provide details on the funding required for the activities planned for the next year. The Director then requests the funds through the annual budget request submitted to the DOD Comptroller. Product Team representatives told us that they were satisfied with the level of funding provided for CPC activities. Table 2 lists the funding for each Product Team for fiscal years 2005 through 2010.

The tasks completed by the Product Teams vary according to their areas of specialization. The tasks and impact of two Product Teams illustrate this specialization and the important information these teams generate. The Metrics, Impact, and Sustainment Product Team has focused on determining the baseline costs of corrosion for DOD. This task involves establishing a methodology to measure the costs associated with corrosion throughout DOD and applying the methodology to selected components of the military departments (such as Army aviation and missiles, and Navy ships). These efforts resulted in a series of reports that estimated the cost of corrosion for various classes of equipment and facilities across the military departments. A project manager with whom we met told us that these cost studies helped him and his colleagues identify areas in which to focus their CPC efforts. He told us that the Army Aviation and Missile Command established a corrosion team to focus on cost drivers, following the issuance of a cost study that estimated Army aviation and missile assets had corrosion costs of $1.6 billion per year. This Product Team plans to update the cost of corrosion for each military department component on a 3-year cycle and to use this information to track the impact of CPC efforts over time. This Product Team also has ongoing efforts to measure the impact of corrosion on readiness.
A preliminary report, published in October 2009, concluded that corrosion-related factors can cause asset unavailability of up to 16 percent, with the greatest impact occurring on aviation assets. One Product Team representative told us that (1) their studies on corrosion costs were completed before the Corrosion Executives were established at the military departments and (2) the Product Team plans to consult with the Corrosion Executives to incorporate their input into future updates to the cost studies. He told us that he expected this would have a positive impact at the military departments.

In addition, the Specifications, Standards, and Product Qualification Product Team has developed a Web-based tool to help suppliers match their products with existing specifications and standards used by DOD. A Product Team representative told us that this activity is expected to result in improved technologies and products available to the DOD maintenance community for use in preventing corrosion. Additionally, the Product Team representative told us that product specifications are required to be updated every 2–5 years and that these updates cost DOD up to $20,000 each. He told us that there are over 800 corrosion-related product specifications, such as information on what types of treatments, primers, and paints are to be applied to a particular material in a given situation. Because of the large number of specifications involved and the cost of revising each of them, this Product Team has focused its efforts on assembling a list of 38 "high-risk" specifications that are given priority for funding.

The Corrosion Executives of the military departments are responsible for supporting the Product Teams, which are part of the CPC Integrated Product Team, and the Product Team staffing process is evolving to recognize their emerging roles and responsibilities.
Since February 2010, the Corrosion Executives have been required by DOD Instruction to support the Product Team process by designating trained or qualified representatives. According to the DOD Corrosion Prevention and Mitigation Strategic Plan, the Director of the Corrosion Office manages and coordinates the CPC Integrated Product Team, which includes the Product Teams. The Strategic Plan does not reflect this new requirement for the Corrosion Executives to designate representatives to the Product Teams. The Corrosion Executives and two of the Product Teams’ chairs told us that the process of staffing the Product Teams is changing. According to the Navy Corrosion Executive, in the past, participation on a product team has always been based on individual interest and whether a volunteer had time available to dedicate to a Product Team. However, recently, when a Navy representative who was serving as the chair of a Product Team asked to be replaced, the Navy Corrosion Executive nominated another individual from the Navy to serve on the Product Team. The Corrosion Executive communicated the nomination to the Director of the Corrosion Office and the Corrosion Executives of the Army and Air Force, and there were no objections to the change. The Navy Corrosion Executive told us that this example is typical of the informal process currently used to staff the Product Teams. He added that the Corrosion Executives have met with the Director of the Corrosion Office to discuss establishing a Corrosion Board of Directors, which could establish regular meetings between the Corrosion Executives and the Director of the Corrosion Office to discuss policy issues, including a more formal process of staffing the Product Teams. 
While the Corrosion Office has, in the past, relied on the Product Team members to represent the position of the military departments on corrosion-related issues, the Corrosion Executives told us they felt that it was now more appropriate for such discussions to occur directly between the Director of the Corrosion Office and the Corrosion Executives. However, the Air Force has recently designated particular Product Team representatives from its military department as authorized to speak for the department in communications with the Corrosion Office. The Air Force Corrosion Executive told us that this designation was intended to prevent any miscommunication between Product Team representatives and the Corrosion Office. Product Team members with whom we spoke had mixed reactions to the involvement of the Corrosion Executives in the Product Teams. One member told us that he felt it was appropriate for the Product Teams to be staffed by volunteers and was concerned that an increased role by the Corrosion Executives in designating members to the Product Teams would reduce the commitment of the members to the Product Teams. In contrast, another Product Team member told us that he thought it was good for the Corrosion Executives to be more involved, because it is important to ensure that the Corrosion Executives have buy-in to the Product Team activities.

Corrosion significantly impacts DOD in terms of cost, readiness, and safety. The Corrosion Office has made substantial progress toward establishing a coordinated DOD-wide approach to controlling and mitigating corrosion, including creating a process to select and fund projects intended to develop and use new CPC technologies, quantifying the costs of corrosion, and working more closely with the military departments.
Also, each military department has recently designated a legislatively mandated Corrosion Executive to manage and coordinate its corrosion efforts and give increased visibility to this important area of equipment and infrastructure sustainment. However, some continuing uncertainty about how the Corrosion Executives should fulfill their responsibilities may be limiting the positive impact that these positions could have on CPC efforts. For example, the nature and extent of reviews of CPC proposals before they are submitted to the Corrosion Office were cited as a possible cause for differences in the rates at which the military departments' proposed projects are selected for supplemental funding from the Corrosion Office. Similarly, unclear communication of the criteria used to select projects for funding may have negative effects, including significant revisions to project proposals and fewer projects being accepted. If these concerns are not addressed, DOD and the military departments may not achieve maximum benefits from the program, limiting their ability to mitigate the effects of corrosion on the assets that they manage.

An additional area of concern is the limited follow-through on the requirement to validate the ROIs that were originally estimated for the funded projects. While the few validations completed thus far document positive results, the small and nonrepresentative group of findings prevents (1) generalization about the impact of other funded projects and (2) efforts to identify and focus future funding toward the types of projects that have been shown to have the best likelihood for high payoffs. Also, more complete information on ROIs could provide DOD with an empirical basis for determining how, if at all, the Corrosion Office's funding and activities should be modified.
To ensure that the Department of Defense is taking full advantage of the cost savings that can be achieved by implementing CPC projects, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to take the following three actions:

- Update applicable guidance, such as DOD Instruction 5000.67, Prevention and Mitigation of Corrosion on DOD Military Equipment and Infrastructure, or the DOD Corrosion Prevention and Mitigation Strategic Plan, to further define the responsibilities of the military departments' Corrosion Executives, to include more specific oversight and review of the project proposals before and during the project selection process.

- Modify the DOD Corrosion Prevention and Mitigation Strategic Plan to clearly specify and communicate the criteria used by the panel in evaluating CPC projects for funding consideration. This action should include listing and describing each criterion used by the panel in the preliminary and final project evaluation decisions and discussing how the criteria are to be used by the panel to decide on project acceptability.

- Develop and implement a plan to ensure that return on investment validations are completed as scheduled. This plan should be completed in coordination with the military department Corrosion Executives and include information on the time frame and source of funding required to complete the validations.

In written comments on a draft of this report, DOD agreed with one of our recommendations and did not agree with the other two. DOD's letter also provided some technical comments that we have incorporated as appropriate. For example, DOD's comments noted some new information that the department had not shared with us previously. Therefore, we revised our report to reflect the fact that DOD now estimates that approximately $1.9 billion, or 11.7 percent, of facilities' maintenance costs are related to corrosion.
We have also revised our report to reflect additional information the department provided on how the Product Teams are staffed. DOD's comments are included in their entirety in appendix III.

DOD did not agree with our recommendation that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to update applicable guidance, such as DOD Instruction 5000.67 or the DOD Corrosion Prevention and Mitigation Strategic Plan, to further define the responsibilities of the military departments' Corrosion Executives, to include more specific oversight and review of the project proposals before and during the project selection process. In its comments, DOD stated that DOD-level policy documents are high-level documents that delineate responsibilities to carry out the policy, and that specific implementing guidance is provided through separate documentation. DOD also stated that the Corrosion Office will be updating the DOD Corrosion Prevention and Control Planning Guidebook and beginning the process of converting it into a DOD manual in the next year. In addition, DOD's response noted that the "best practice" of the military department Corrosion Executives conducting their own internal reviews before and during the project selection process will be included in that update. Our recommendation to "update applicable guidance" did not prescribe where the update should be made; it only offered examples of documents that might be modified. We believe that updating the Guidebook and converting it into a DOD manual would provide the needed direction to the military department Corrosion Executives and would meet the intent of our recommendation.
DOD also did not agree with our recommendation that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to modify the DOD Corrosion Prevention and Mitigation Strategic Plan to clearly specify and communicate the criteria used by the panel in evaluating CPC projects for funding consideration, as well as to list and describe each criterion used by the panel in the preliminary and final project evaluation decisions. In its response, DOD stated that it disagreed with the implications that the Strategic Plan is deficient in clearly specifying the criteria and that added discussion is needed in the Strategic Plan regarding how the criteria are used by the panel. DOD commented that the criteria used by the panel and the steps in the process are completely transparent to the authors, and that the details have been verbally communicated to stakeholders and are available online and by e-mail in Appendix D of the Strategic Plan. However, DOD also stated: (1) "While not always defined as 'criteria,' all factors considered in the evaluation are articulated in Appendix D" and (2) "While not expressly defined as 'criteria,' these indices are clearly criteria from which anyone submitting a project plan can determine what is likely to improve the chances of a higher DEA [the model used in the panel process] ranking."

In developing our findings, we analyzed the Strategic Plan to understand the process and criteria used to evaluate CPC projects for funding; observed the panel proceedings for both the preliminary and final project reviews; discussed the panel process with panel members and military department Corrosion Executives; and discussed their understanding of the process and the criteria used for project evaluation with Corrosion Executives and project authors.
The views of the panel members, Corrosion Executives, and project authors, as well as our observations, formed our findings and conclusions and led to our recommendations. Despite the efforts of the Corrosion Office to communicate with its constituency through briefings, e-mails, and other methods as delineated in DOD's comments, some of those involved in the process reported to us that they did not clearly understand what the criteria were and when they were used in the process. Moreover, DOD's comments quoted above acknowledge that criteria are not always clearly defined in Appendix D of the Strategic Plan. We believe our findings are sound and that our recommendation to clearly identify and communicate the criteria is still appropriate. Continued use of unclear criteria could result in wasted personnel time associated with preparing and revising proposals.

DOD agreed with our recommendation that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to develop and implement a plan to ensure that return on investment validations are completed as scheduled. DOD stated that plans are underway to address this requirement.

DOD also commented that some of our statements are inaccurate. First, DOD claimed that statements in the draft report regarding the use of different criteria for the preliminary and final project evaluation are not true. However, in our discussions with the panel members and project authors, as well as our observations of the panel process, it was clear that some criteria were used in one evaluation and not in the other. Second, DOD stated that the evaluation team is not an "ad hoc working group" and that the panel members are selected based on experience, expertise, and judgment. In response to DOD's comments, we modified our characterization of the panel. Finally, DOD commented that a statement in the draft report that the process did not consider military department priorities is not accurate.
However, as we state in the report, both Corrosion Office staff and a panel member told us that it was not the intent of the CPC program to fund military department priorities, but to award funds to the most competitive projects. Also, DOD's comments state that "the panel does not initially rank projects using the military department priorities" and assert that those priorities have been used by the panel in the final ranking if a military department has two or more projects that are considered to be comparatively equal. However, this is a relatively limited circumstance and, in the view of some stakeholders, does not adequately acknowledge the priorities of the military departments.

We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology and Logistics); the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8246 or edwardsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV.

For the overall context of our analysis, we reviewed relevant laws; Department of Defense (DOD) and military department-specific guidance; the DOD Corrosion Prevention and Mitigation Strategic Plan; and reports issued by LMI and the Defense Science Board.
To address our objectives, we met with the Director of the Office of the Secretary of Defense’s Corrosion Policy and Oversight Office (Corrosion Office), members of the Corrosion Prevention and Control (CPC) project selection panel assembled by the Director of the Corrosion Office, DOD contractors who assist the Director of the Corrosion Office in managing the CPC program, each military department’s Corrosion Executive and their staffs, representatives of three of the seven Working Integrated Product Teams (Product Teams) that coordinate CPC activities, and the six project managers who authored the proposals for 11 of the CPC projects included in our sample. We obtained data from the Corrosion Office for projects that the military departments had submitted for funding consideration for fiscal years 2005 through 2010. Projects submitted for fiscal year 2011 funding were not in that population because the Corrosion Office had not completed the funding of these projects at the time of our review. We assessed the reliability of the data by (1) interviewing staff knowledgeable about the data and the system that produces them; (2) testing for missing data, outliers, or obvious errors using comparisons to data obtained during prior GAO reviews; and (3) conducting logic tests. We determined that the data were sufficiently reliable for the purposes of our review, which were to determine how the military departments decide which projects to submit to the Corrosion Office for funding consideration, and how a panel of experts and the Corrosion Office decide which projects to approve for funding. 
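The reliability testing described above (checking for missing data, outliers, and logical inconsistencies) can be illustrated with a short sketch. This is a hypothetical example: the field names and the outlier threshold are assumptions made for illustration, not the actual tests applied to the Corrosion Office's data.

```python
# Hypothetical sketch of automated data-reliability tests of the kind
# described above. Field names and the outlier threshold are illustrative
# assumptions, not the actual tests applied to the project data.

def check_project_record(rec):
    """Return a list of reliability problems found in one project record."""
    problems = []
    # Missing-data test: key fields must be populated.
    for field in ("fiscal_year", "department", "total_cost", "estimated_roi"):
        if rec.get(field) in (None, ""):
            problems.append(f"missing {field}")
    # Outlier test: flag implausibly large ROI estimates for follow-up.
    if rec.get("estimated_roi") is not None and rec["estimated_roi"] > 1000:
        problems.append("ROI outlier")
    # Logic test: the Corrosion Office's share cannot exceed the total cost.
    if rec.get("office_share", 0) > rec.get("total_cost", 0):
        problems.append("office share exceeds total cost")
    return problems

record = {"fiscal_year": 2005, "department": "Navy",
          "total_cost": 550_000, "office_share": 451_000, "estimated_roi": 1}
print(check_project_record(record))  # a clean record yields an empty list
```

Flagged records would then be followed up with the knowledgeable staff and compared against data from prior reviews, as the steps above describe, rather than being discarded automatically.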
To identify corrosion projects for a more detailed review, we selected a nonprobability sample of projects from each of fiscal years 2006, 2008, and 2010 using the following criteria: the year the project was submitted to the Corrosion Office, whether the Corrosion Office did or did not accept the project, the Corrosion Office's and military department's combined project cost, and the estimated return on investment of the project. Applying these criteria, we selected a sample of 24 projects for further review.

To determine the extent to which the Corrosion Executives are involved in preparing CPC project proposals for submission to the Corrosion Office for funding consideration, we met with each of the Corrosion Executives and their staffs and reviewed the military departments' corrosion reports to identify whether each department had a process to review CPC projects. For projects in our sample, we interviewed six officials who were the principal authors and points of contact for 11 of those projects. We also reviewed legislation and military department documents, as well as guidance on internal controls, to identify relevant responsibilities and practices that could be used as criteria.

To determine the extent to which the Corrosion Office has created a process to review and select projects for funding, we interviewed the Corrosion Office staff who manage the process of requesting and receiving project proposals from the military departments. We also interviewed some members of the project selection panel that decided which projects to accept for funding to obtain their observations on the evaluation and selection process. For projects in our sample, we reviewed records of the project selection panel's decisions whether to accept the projects for funding. We observed the project selection panel's preliminary and final project evaluation meetings for fiscal year 2011 projects to determine the current process for evaluating projects.
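The criteria-based nonprobability sampling described above can be sketched in a few lines. The project records, sample years, and ordering below are hypothetical; they illustrate filtering by submission year and then arranging the remaining candidates to cover acceptance status, combined cost, and estimated ROI.

```python
# Hypothetical sketch of a criteria-based nonprobability selection like the
# one described above. The project records and sample years are illustrative.

projects = [
    {"id": "A", "fy": 2006, "accepted": True,  "cost": 470_000,   "roi": 2},
    {"id": "B", "fy": 2007, "accepted": True,  "cost": 940_000,   "roi": 2},
    {"id": "C", "fy": 2008, "accepted": False, "cost": 560_000,   "roi": 605},
    {"id": "D", "fy": 2010, "accepted": True,  "cost": 1_600_000, "roi": 6},
]

# Criterion 1: keep only projects submitted in the selected fiscal years.
sample_years = {2006, 2008, 2010}
sample = [p for p in projects if p["fy"] in sample_years]

# The remaining criteria (acceptance status, combined cost, estimated ROI)
# are used here to order the candidates so the sample spans their range.
sample.sort(key=lambda p: (p["accepted"], p["cost"], p["roi"]))

print([p["id"] for p in sample])  # project B (fiscal year 2007) is excluded
```

Because the sample is nonprobability, results from it describe only the selected projects and cannot be generalized to the full population, which is consistent with the caveats elsewhere in the report.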
Additionally, we reviewed the project proposal template included in DOD's Corrosion Prevention and Mitigation Strategic Plan.

To determine the extent to which the military departments have validated the return on investment (ROI) of funded projects, we obtained the 10 project review reports that had been completed for fiscal year 2005 projects. We reviewed these reports for data on the validated ROI, the comparison between the validated data and the original estimate, and information on the reasons, if applicable, why the ROI had changed.

To determine how the Corrosion Office decides which CPC activities to fund, we interviewed the chairs of three of the seven Product Teams who manage the CPC activities. We also reviewed materials (e.g., cost studies) that the Product Teams produced, obtained information on the funding for the Product Teams, and attended sessions at the DOD Corrosion Forum where Product Team representatives described their ongoing and planned activities.

We conducted this performance audit from April 2010 through December 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Corrosion Office accepted the project and provided $48,000. Army Aviation and Missile Command staff told us that the project is still being implemented and that some units have deployed to the field. The Army Aviation and Missile Command and the Naval Air Systems Command submitted a joint project proposal to demonstrate new technology using a laser powder deposition technique to repair knife edge seals that are components within the T700 engine. Almost all of the used (overhauled) seals wear enough to require repair or replacement.
This new technology can reduce repair time and replacement of the seals. The T700 engine is used by the Air Force, Army, and the Navy. The military departments did not identify their funding contribution but requested $30,000 from the Corrosion Office. This Army-led project has an estimated ROI of 7:1. The Corrosion Office accepted this project and provided $30,000. Army Aviation and Missile Command staff told us that delays in obtaining Army funding have slowed the implementation of this project. The Naval Air Systems Command submitted this project proposal for a total cost of $2.7 million, of which 68 percent was requested from the Corrosion Office. The project has an estimated ROI of 14:1. Due to the high rate of corrosion-related replacement of antennas on the Navy’s F/A-18 Hornets and the cost of $2.5 million per year to replace the antennas, the project proposed developing a new generation of sealants to avoid corrosion on aircraft antennas and floorboards. The project was accepted but not funded by the Corrosion Office. Naval Air Systems Command staff told us that the project was funded by other sources and is in the early stages of implementation. The Navy and Army jointly submitted this project proposal with the Naval Air Systems Command as the lead organization. The project had a total cost of $470,000, with 74 percent requested from the Corrosion Office. The project’s estimated ROI was 2:1. This project would use Metallast technology to help provide more precise control of coating consistency, durability, and corrosion protection to improve the process of anodizing complex parts. Implementation would include installing new computer controlled anodizing systems at two Naval aviation depots, and also assessing the feasibility of a follow-on implementation at an Army depot. The project was accepted, but not funded by the Corrosion Office. Naval Air Systems Command staff told us that the project was funded by other sources, and has been completed. 
Naval Air Systems Command submitted this project proposal for a total cost of $550,000, with 82 percent requested from the Corrosion Office. Its estimated ROI was 1:1. The project proposed implementing a Plug and Coat sputtered aluminum system on an existing IVD aluminum system at the Naval depot in Jacksonville and validating its potential use at other naval aviation depots. The Plug and Coat system is a proven technical solution to access cavities and other internal surfaces of high-strength steel components and coat them with aluminum to protect against corrosion. The proposal said that the current process (1) consumes excessive man-hours to process parts and (2) leads to additional corrosion of components. The project was not accepted by the Corrosion Office. Naval Air Systems Command staff told us that the project was not pursued further.

The Air Force Research Laboratory submitted this project proposal for a total cost of $560,000, with 54 percent requested from the Corrosion Office. Its estimated ROI was 605:1. The project plan proposed evaluating and testing several new paint spray gun systems using various types of existing coatings. Ease of use, economics, and the quality and uniformity of the finish coating would be compared for the various systems. The project was accepted but not funded by the Corrosion Office. According to laboratory officials, the project was not resubmitted because Air Force priorities changed and they did not believe it would rank above the funding line.

The U.S. Army Natick Soldier Center submitted this project proposal for a total cost of $627,000, with an estimated ROI of 842:1. The project plan proposed demonstrating new processes for using an alternative to the copper-8 coating system now in use for protection against material biodegradation. The proposed alternative was an environmentally friendly coating system for fabric protection for use on tents, truck covers, helmets, parachutes, and other materials.
This project was accepted by the Corrosion Office but not initially funded. According to a center official, the project was eventually funded by the Corrosion Office. The project is complete and a final project report was recently sent to the Corrosion Office, but no ROI validation was conducted as part of the final report.

The U.S. Army Corps of Engineers, Engineer Research and Development Center, submitted this project proposal for a total cost of $1.6 million, split evenly between the Army Corps of Engineers and the Corrosion Office, with an estimated ROI of 6:1. The initial project plan scope focused on testing remote monitoring of Army non-metallic bridges to help identify corrosion or degradation where ordinary nondestructive testing methods cannot identify actively growing defects. The Army expanded the scope of this project at the request of the Corrosion Office. After the Interstate 35W Bridge collapse in Minneapolis, Minnesota, to which corrosion and fatigue cracking were likely contributors, the Corrosion Office requested that the Army expand the scope of this project to include both non-metallic and metallic bridges. Because of this, the Corrosion Office waived the $500,000 funding limit for this project. Engineers stated that part of the project was to monitor the I-20 Bridge near Vicksburg, Mississippi. Expansion of the scope included coordinating with the Department of Transportation, Federal Highway Administration, and the Illinois and Indiana Departments of Transportation. Prior to the refocusing of the project, engineers told us that it was accepted with some additional clarification required. Engineers were in the process of resubmitting the project proposal when the Corrosion Office requested the wider scope. This project was accepted and funded. The project is three-fourths complete.
The Naval Facilities Engineering Service Center, Pacific, submitted this project proposal for a total cost of $1.2 million, with $80,000 requested from the Corrosion Office. Its estimated ROI was 5:1. The project was to demonstrate the effectiveness of a discrete galvanic anode cathodic protection system as a means of mitigating corrosion and increasing the service life during the repair of the reinforced concrete Kilo Wharf at Naval Base Guam. This project was accepted and funded. The project is still being implemented. Engineers told us that the project ran into some complications. For example, the sites where the project was installed are not the originally planned sites. The contractor estimates at the originally planned sites were much higher than the government estimates. Because of this, the facilities command had to find a different site to use for project implementation.

The Naval Facilities Engineering Service Center, Pacific, submitted this project proposal for a total cost of $450,000, with 56 percent requested from the Corrosion Office. The estimated ROI was 2:1. The project was to test the results of a technical paper reporting that an improved backfill and/or galvanic anode system may provide better cathodic protection than current impressed-current systems. A center official noted that the Navy removed this project from funding consideration because (1) it could not find any matching funds and (2) there was no site selected to demonstrate the technology.

The Naval Air Systems Command submitted this project proposal for a total cost of $940,000, with 29 percent requested from the Corrosion Office. The project's estimated ROI was 2:1. The project was to evaluate alternative paint removal technology that could be used (1) where spot paint removal is necessary for non-destructive inspections and (2) at intermediate and depot-level facilities where larger-scale removal of coating is required for inspections and repairs.
This project was not accepted and not funded by the Corrosion Office. A command official noted that funding was obtained from other sources to complete this project. In addition to the contact name above, the following staff members made key contributions to this report: Ann Borseth, Assistant Director; Janine Cantin; Foster Kerrison; Charles Perdue; Terry Richardson; Michael Shaughnessy; and Erik Wilkins-McKee. Defense Management: Observations on Department of Defense and Military Service Fiscal Year 2011 Requirements for Corrosion Prevention and Control. GAO-10-608R. Washington, D.C.: April 15, 2010. Defense Management: Observations on the Department of Defense’s Fiscal Year 2011 Budget Request for Corrosion Prevention and Control. GAO-10-607R. Washington, D.C.: April 15, 2010. Defense Management: Observations on DOD’s Fiscal Year 2010 Budget Request for Corrosion Prevention and Control. GAO-09-732R. Washington, D.C.: June 1, 2009. Defense Management: Observations on DOD’s Analysis of Options for Improving Corrosion Prevention and Control through Earlier Planning in the Requirements and Acquisition Processes. GAO-09-694R. Washington, D.C.: May 29, 2009. Defense Management: Observations on DOD’s FY 2009 Budget Request for Corrosion Prevention and Control. GAO-08-663R. Washington, D.C.: April 15, 2008. Defense Management: High-Level Leadership Commitment and Actions Are Needed to Address Corrosion Issues. GAO-07-618. Washington, D.C.: April 30, 2007. Defense Management: Additional Measures to Reduce Corrosion of Prepositioned Military Assets Could Achieve Cost Savings. GAO-06-709. Washington, D.C.: June 14, 2006. Defense Management: Opportunities Exist to Improve Implementation of DOD’s Long-Term Corrosion Strategy. GAO-04-640. Washington, D.C.: June 23, 2004. Defense Management: Opportunities to Reduce Corrosion Costs and Increase Readiness. GAO-03-753. Washington, D.C.: July 7, 2003. 
Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003.
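As a quick cross-check of the project summaries above, the Corrosion Office’s share of each proposal’s total cost follows from simple arithmetic. The sketch below uses only figures cited in the summaries; the helper function itself is illustrative and not part of the report:

```python
def office_share(total_cost, office_amount=None, office_pct=None):
    """Return (dollars requested from the Corrosion Office, share of total cost).

    Supply either the dollar amount requested or the percentage requested.
    """
    if office_amount is None:
        office_amount = total_cost * office_pct
    return office_amount, office_amount / total_cost

# Kilo Wharf cathodic protection demonstration: $80,000 of $1.2 million
amount, share = office_share(1_200_000, office_amount=80_000)
print(f"${amount:,.0f} requested, {share:.1%} of total cost")  # 6.7%

# Backfill/galvanic anode test: 56 percent of $450,000
amount, share = office_share(450_000, office_pct=0.56)
print(f"${amount:,.0f} requested")  # $252,000

# Paint removal evaluation: 29 percent of $940,000
amount, share = office_share(940_000, office_pct=0.29)
print(f"${amount:,.0f} requested")  # $272,600
```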
Corrosion costs DOD over $23 billion annually, affects both equipment and facilities, and threatens personnel safety. DOD has taken steps to improve its corrosion prevention and control (CPC) efforts. These efforts include reorganizing the DOD-wide Corrosion Office and instituting Corrosion Executive positions in each of the military departments. In response to the Senate Appropriations Committee Report accompanying the fiscal year 2010 DOD appropriations bill, GAO evaluated the extent to which (1) the Corrosion Executives are involved in preparing CPC project proposals for submission, (2) the Corrosion Office has created a process to review and select projects for funding, and (3) the military departments have validated the return on investment (ROI) for funded projects. GAO also reviewed the process the Corrosion Office uses to determine the CPC activities that it will fund. To carry out this study, GAO observed project selection panel meetings, interviewed corrosion officials, and reviewed documents and project proposals. Acceptance of the military departments’ CPC proposals varied with the types of projects and the nature of the review that the military Corrosion Executives required before the proposals were submitted to the Corrosion Office for funding consideration. DOD guidance provides that Corrosion Executives coordinate CPC actions, including submitting corrosion project opportunities. Prior to submitting the proposals for a preliminary evaluation by the Corrosion Office’s project selection panel, Army and Navy Corrosion Executives and staffs reviewed proposal summaries and provided feedback to the authors. The Air Force did not perform a review that included pre-submission feedback. Later, during a preliminary evaluation, the Corrosion Office’s project selection panel determined that a much higher percentage of Army and Navy proposals were acceptable than of those submitted by the Air Force.
A selection panel member told us that because the Air Force did not perform a pre-submission review of proposals, deficiencies in those proposals were not corrected prior to the panel’s evaluation. DOD has criteria and a rigorous multistep procedure for evaluating proposals, but some military department stakeholders indicated that this information is not communicated clearly. GAO has previously noted that involving stakeholders helps agencies target resources to the highest priorities. The criteria the project selection panel uses to evaluate proposed projects are not clearly identified in DOD’s Corrosion Prevention and Mitigation Strategic Plan, and some project managers said that they were unfamiliar with how projects were evaluated. While the Corrosion Office already takes actions such as providing in-depth feedback to proposals’ authors and assembling corrosion experts to participate on the selection panel, unclear communication on some issues could adversely affect authors’ ability to prepare effective project proposals. The military departments are late in validating ROIs for some completed projects. The Strategic Plan calls for follow-on reviews with validated ROIs for completed projects within 3 years after full project implementation. Project managers have completed these reviews for 10 of the 28 implemented projects funded in fiscal year 2005, with 8 of the 10 completed reviews performed by one Army command. Corrosion Executives told GAO that because CPC funding is awarded only for the 2-year project implementation period, they typically do not have funds remaining for validating ROIs after projects are completed. If the ROI validations of completed projects are not performed, the Corrosion Office will not have the data needed to adjust project selection criteria in order to invest limited CPC funds in the types of projects with the greatest potential benefits.
The Corrosion Office created Product Teams to implement DOD-wide CPC activities in seven areas. Using volunteers and a budget averaging around $4.5 million per year, the Teams propose activities, such as determining the costs of corrosion and DOD-wide specifications for CPC products, which are then selected for funding by the Director of the Corrosion Office. The Corrosion Executives are becoming more involved in Team activities. GAO is making recommendations to: 1) improve the oversight of proposals submitted for funding consideration, 2) communicate more clearly the criteria used to select which projects will be funded, and 3) fund and complete ROI validations. In written comments on this report, DOD disagreed with the first two recommendations and agreed with the third, citing alternatives or differing views. GAO believes the recommendations remain valid.
An airline “booking” occurs when a passenger reserves and purchases a seat for a trip. In 2002 in the United States, an estimated 255 million passengers flew more than 611 million flight segments (e.g., a traveler flying between Baltimore, Maryland, and Portland, Oregon, who connected over Chicago both outbound and inbound, represents a single passenger who flew four flight segments). Information included in the booking consists of the traveler’s name; an address; price and billing information; the full itinerary (origins, destinations, and any connecting airports, with flight numbers and times); and perhaps other information as well, such as loyalty program (i.e., frequent flyer) information, including program status or seat and meal preferences. When a booking is entered in a computer system by a traditional travel agent, it is created in a GDS. The GDS-generated booking is then sent to the airline’s internal reservation system. The GDS charges an airline a “booking fee” based on the total number of flight segments in the traveler’s itinerary. For example, if the booking fee is $4 per segment and a passenger reserves and purchases an itinerary that consists of four flight segments (an outbound flight that connects over an airline’s hub to the ultimate destination and two similar return flights), the airline will be charged approximately $16 in booking fees. Changes made to a booking can cost the airline more. For example, if a passenger changes the day of his return flight, the airline may be refunded all but a fraction of its booking fees for those segments, and charged again for the booking of the new segments. Sometimes, a passenger may book an itinerary with an airline through a traditional travel agent, but may choose not to pay for the ticket pending a final decision on the trip.
Such cases are called “speculative” or “passive bookings.” In an effort to maintain the booking as a service to the potential customer, a travel agent may continue to cancel and re-book the itinerary. Each such cancellation and re-booking costs the airline money; this pattern of repeated cancellations and re-bookings is known as “churn.” The final cost to the airline is called a “net booking fee.” CRSs, the precursors to GDSs, first automated the selling of airline seats and the tracking of flight and schedule information for use by airline employees in the late 1960s. Beginning in the mid-1970s, these systems were offered to travel agencies. These CRSs were owned by (i.e., vertically integrated with) the airlines. American Airlines and IBM jointly developed a system called Sabre (Semi-Automatic Business Research Environment) to automate American’s bookings. United Airlines and TWA followed with Apollo and PARS, respectively. Delta and Eastern followed with DATAS II and System One, respectively. These CRSs replaced manual booking systems, and thus allowed the airlines to quickly and reliably process a large number of transactions. By extending use of the systems to travel agencies, airlines were able to reduce expensive telephone calls from travel agencies to airline reservation offices and to offer their agency partners real-time access to fares and inventory, improving the marketability of their services. Under airline ownership, certain CRS practices created competitive disadvantages for some carriers and often did not expose consumers to all available carrier options and prices. Before the industry was deregulated in 1978, interline travel--a practice in which passengers fly on more than one airline to reach a destination--was common. To serve passenger needs, travel agencies also needed CRSs to provide information and booking capabilities on all airlines. However, CRSs did not treat every airline equally.
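The booking-fee arithmetic described above (a per-segment fee, partial refunds when segments are canceled, and fresh charges when they are re-booked) can be sketched as a simple model. The $4-per-segment fee comes from the text’s example; the 90 percent refund fraction and the assumption that the whole itinerary churns are illustrative, not actual GDS terms:

```python
def booking_fee(segments, fee_per_segment=4.00):
    """GDS fee charged to the airline when an itinerary is booked."""
    return segments * fee_per_segment

def net_booking_fee(segments, churn_cycles=0, refund_fraction=0.90,
                    fee_per_segment=4.00):
    """Net fee to the airline after churn.

    Each cancel/re-book cycle refunds all but a fraction of the segment
    fees and then charges them again; for simplicity, the whole itinerary
    is assumed to churn each cycle.
    """
    base = booking_fee(segments, fee_per_segment)
    unrefunded = base * (1 - refund_fraction)  # portion kept by the GDS
    return base + churn_cycles * unrefunded

# The text's example: four flight segments at $4 each
print(booking_fee(4))                                # 16.0
# One cancel/re-book cycle before purchase adds the unrefunded portion
print(f"{net_booking_fee(4, churn_cycles=1):.2f}")   # 17.60
```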
An airline with its own CRS (“owner airline”) did not pay fees for booking passengers through that CRS, and it displayed schedule information in a way that favored its own flights at the expense of these other airlines—even if other airlines offered more direct service between two cities at less cost to the traveler. Typically, an owner airline would market its CRS to travel agencies in cities where it flew a significant number of flights. In the early 1980s, to expand CRS-travel agent market share in cities where they provided limited air service, owner airlines developed “co-host” programs with other airlines that had a significant presence in targeted cities. In exchange for discounts on fees for bookings made on that CRS and for more prominent display of its flight information on the CRS computer screen, the co-host airline would market the owner airline’s CRS to its local travel agencies. Other airlines that were not co-hosts (“subscriber airlines”) would pay higher fees for any booking made on that CRS and continued to be disadvantaged by a bias in the display of their available flights. In essence, airline owners of CRSs used them to gain an unfair advantage in the marketplace, and struck deals with certain airlines giving them competitive advantages over other airlines. Figure 1 illustrates the typical financial transactions that took place among airlines, CRSs, travel agencies, and consumers prior to the enactment of the CRS rules. Owner airlines had an incentive to service as many travel agencies as possible in order to gain greater booking share. This, in part, is because CRSs benefit from economies of scale: CRS profits increase as passenger traffic and bookings increase, and both of those depend on access to more travel agents. While CRS market positions tend to be strongest in specific geographic areas consistent with their airline owners’ markets (and any markets they were able to negotiate from nonowner, or co-host, airlines), each U.S.
GDS has developed a national, and subsequently, global footprint. In addition, owner airlines also recognized that travel agents’ familiarity and comfort with their CRSs produced something of a halo effect that gave owner airlines a greater share of bookings. While airlines paid commissions to travel agencies based on the value of the purchased tickets, carriers also encouraged travel agents to make additional passenger bookings by paying commission “overrides” to travel agencies for surpassing set sales goals. Though three domestic CRSs existed, an individual travel agent office typically relied on only one system. This was due in part to the multiyear, often exclusive, contracts under which they historically operated with CRSs. Using more than one system was also inefficient from the standpoint of most travel agents. These structural relationships produced two major effects: Because airlines—dependent on the systems—paid the booking fees, rather than the other users of the systems (travel agents and, ultimately, consumers), there was no competitive pressure constraining CRS booking costs. Airlines had little choice except to participate in each CRS, and CRSs did not have to compete for airline participants. As DOJ stated in comments submitted to DOT in 1989, each CRS constituted a separate market for air carriers because of the near-exclusive relationship with separate groups of travel agencies, and each is a monopolist with market power over carriers that want to sell tickets in areas where the CRS has a significant number of travel agencies. Thus, unless an airline was willing to forego access to those travel agencies and the consumers they served, it needed to participate in every CRS. To illustrate, consider Sabre’s relationship with American Airlines, and Galileo’s relationship with United Airlines. Because American has significant operations in the Dallas/Ft. 
Worth area, many travel agencies in Texas historically subscribed to Sabre, while United has similarly significant operations in Chicago and many travel agencies there likely were Galileo users. However, because American wanted to be available to travel agencies located in United’s traditional territory that subscribed to Galileo, it had to participate in Galileo, as with the other CRSs. Similarly, United wanted to be available to travel agencies in territory historically dominated by American in Texas and therefore had to be available on Sabre. Figure 2 illustrates the exclusive relationships that CRSs had with travel agencies, and the airlines’ dependence on each CRS to reach the greatest number of travel agencies. Prior to the enactment of the CRS rules, consumers paid only the airfare, regardless of the complexity of the itinerary. Presumably, those airfares reflected the airlines’ total costs, including overhead expenses associated with ticket distribution. In 1984, the Civil Aeronautics Board (CAB), in one of its last official acts, adopted CRS rules to protect consumers and help ensure fair competition among airlines. The goal of these rules was to dissipate or constrain the power of the airlines and their CRSs to manipulate the competition for passenger traffic. DOT inherited the CAB’s duties, and in 1992 found that the rules were still necessary. DOT concluded that without them, CRS owners could use their control of the systems to prejudice airline competition, and the systems could bias their displays of airline services.
Three main requirements in the CRS rules attempt to ensure that each owner airline and its CRS treat other airlines equitably:

Screens displaying flight information are not to favor one airline over another (“unbiased screens”);

For the same level of service, prices for bookings must be the same for all airlines, including owner airlines, eliminating distinctions such as co-host or subscriber status (“price nondiscrimination”); and

Airlines with a 5 percent or greater ownership interest in a CRS (“owner airlines”) must participate in competing systems at the same level at which they participate in their own systems (“mandatory participation”).

Figure 3 illustrates how the airline ticket distribution industry changed after the implementation of the CRS rules. DOT’s 1992 revisions to the CRS rules included a sunset date of December 31, 1997, which DOT subsequently extended to January 2004. DOT is currently reviewing additional possible revisions to the CRS rules. As CRSs evolved as corporate entities, they added other lines of business to the original airline ticket booking function. They currently book not only airline reservations, but also hotel, rental car, train, tour, and cruise reservations. CRSs also sell other professional services to airlines, such as software and information technology services for personnel and aircraft scheduling, and for baggage handling. CRSs provide outsourced internal reservation systems for airlines, as well. In the expansion of their activities they became known as GDSs, reflecting the increasingly international and diverse nature of the travel they encompassed. Since the mid-1990s, U.S. airline owners have sold their shares in their GDS businesses. Three domestic GDSs have evolved to dominate the U.S. travel agent market: Sabre, Galileo, and Worldspan. Sabre became a separate legal entity of AMR Corp.
(American Airlines’ parent company) in July of 1996, followed by an initial public offering of Sabre in October 1996; it has since been fully divested by AMR Corp. In 1997, Galileo International became a publicly traded company, and in 2001 became a subsidiary of Cendant Corp. Worldspan was sold in June 2003 to private investors. These changes ended the vertical integration of these airlines and GDSs. Figure 4 illustrates the GDS shares for all U.S. domestic bookings that relied on a GDS in 2002. Since the airlines began selling their shares in the GDSs in the mid-1990s, the ticket distribution system has undergone two major changes. These changes have helped airlines, faced with generally high operating expenses, cut distribution costs. First, airlines and others have increasingly sold and processed tickets through Internet-based applications (e.g., airline Websites, on-line travel sites), some of which bypass GDSs. These distribution methods are less expensive to the airlines than traditional travel agencies. Second, airlines have reduced commission payments to travel agents. At the same time, in response to overtures by large travel agencies, GDSs partially offset that reduction in airline commission payments by significantly increasing incentive payments to travel agents, on whom they depend to reach a large number of consumers. In part, these changes have enabled major airlines to reduce their total distribution costs by 25.8 percent from an average $732.9 million in 1999 to $543.6 million in 2002, or 43.6 percent on a per booking basis. However, these changes have not eliminated the airlines’ dependence on the GDSs for the selling of air tickets. Airlines continue to need to subscribe to each GDS to reach the universe of travel agents and potential consumers. Airlines have developed new Internet-based ticket booking processes that bypass GDSs and their associated booking fees. 
Others have developed Internet-based travel agencies that use GDSs to book tickets but whose bookings still cost airlines less than tickets booked through traditional travel agents. An increasing percentage of tickets are booked through the Internet, and an increasing percentage of bookings are made without the use of GDSs. The airlines have used the Internet to change the way bookings are processed by creating ways to work around the GDSs and their booking fees. Airlines have developed two basic ways to use the Internet to avoid the cost burden associated with standard GDS booking fees. First, airlines have developed their own Websites (e.g., www.continental.com) that allow consumers to reserve and book seats directly with airlines. Bookings made through these sites do not use a GDS booking function, and therefore do not incur booking fees. Rather, airlines maintain pricing, flight, and seat availability in their own internal reservation systems. For example, a booking made through Continental’s Website is processed by a data vendor that is not a GDS. Bookings made when a consumer telephones an airline’s “call center” (e.g., via a toll-free number such as Continental’s 1-800-523-FARE) are also routed through that same vendor. But, unlike call center bookings, which rely on personnel to process them, bookings on an airline’s own on-line site are processed electronically and therefore incur lower labor costs. Second, five major U.S. airlines collectively underwrote the development of a travel technology company called Orbitz. Because consumers can go to the Orbitz Website (www.orbitz.com) to query fare and schedule information for most major airlines as well as to book and purchase tickets, it performs functions similar to those of a travel agent. Orbitz now has two methods by which it books tickets, one of which uses a GDS and one of which bypasses GDSs and their associated booking fees.
Originally, and in many cases still, Orbitz uses the Worldspan GDS to obtain airline availability data and to place the booking, and airlines pay booking fees to Worldspan for tickets booked in this manner. Orbitz receives volume-based rebates from Worldspan and flat transaction fees from airlines (approximately $5.34 per ticket from charter associates or $10 per ticket from noncharter associates), and it charges consumers a fee ($6 per ticket). Through Orbitz, however, some airlines can generate significant cost savings relative to traditional and on-line travel agent booking methods. “Charter airlines” have negotiated special arrangements with Orbitz, under which they receive rebates on a portion of the booking fee. According to Orbitz officials, these rebates generally save charter airlines about $3 of the approximately $16 paid in booking fees per ticket compared to bookings made through traditional travel agencies. Airlines that are not charter members of Orbitz pay the full Worldspan booking fee. These arrangements contrast with the CRS rules’ price nondiscrimination and mandatory participation requirements, which have limited carriers’ ability to negotiate reduced booking fees with GDSs. Airlines are allowed to negotiate special arrangements with Orbitz because DOT has not defined Orbitz as a CRS, and thus did not extend the application of the CRS rules to cover Orbitz. Recently, Orbitz, with airline cooperation, has also developed technology that enables it to book tickets by directly accessing each participating airline’s internal reservation system, bypassing the GDS and its booking fees. This technology functions much like the technology used by GDSs: unlike an individual airline’s Website technology, which reaches only that airline’s internal reservation system, it can query and retrieve information from multiple airlines.
According to Orbitz officials, its new technology, which is called “Supplier Link,” could result in participating airlines saving about $12 of the typical $16 paid in booking fees per ticket. Since its implementation in 2002, 11 major airlines have signed up to participate in Supplier Link. As of July 2003, four airlines--America West, American, Continental, and Northwest--have begun to use the technology. Currently, these airlines process over 70 percent of their Orbitz bookings through Supplier Link. These airlines’ remaining Orbitz bookings need to go through the Worldspan GDS because of their complexity. Complex bookings that cannot at this time be handled by Supplier Link might include bookings with itineraries that involve trips flown by interlining airlines (i.e., two or more airlines that collectively transport a passenger from origin to destination) or international destinations. In light of its new Supplier Link technology, Orbitz may be the first entity in the U.S. to perform functions similar to those of GDSs since finalization of the CRS rules in 1984. Furthermore, some believe that Orbitz represents a new entrant into the GDS market. However, Orbitz is a creation of the major airlines--as were the CRSs--and questions have been raised about whether Orbitz charter member airlines could use Orbitz to gain a competitive advantage over other airlines. DOT and DOJ have been involved in examining this issue. In its June 27, 2002, report to Congress, DOT found that Orbitz is not anticompetitive and, more specifically, has shown no evidence of biased presentation of airline services. However, DOJ has not yet commented on the topic. As of July 2003, DOJ was continuing its review of Orbitz. Other participants in the airline ticket distribution industry have also developed Internet sites that, like traditional travel agencies, book tickets through a GDS.
Sabre entered the Internet market by creating Travelocity, which is a web-based booking engine that uses the Sabre GDS to query and book tickets. In general, Travelocity functions as an on-line travel agent: airlines make payments to Travelocity as well as pay booking fees to Sabre. As with other travel agencies, consumers pay it ticketing fees. For accounting purposes, Sabre pays Travelocity incentive payments, but the payments stay within the parent company. Independent on-line travel sites have also emerged to sell airline tickets to consumers. One notable example is Expedia.com. In general, the relationships and flow of payments among Expedia.com, its GDS (Worldspan), airlines, and consumers resemble those of traditional travel agencies. Major independent on-line travel agencies continue to subscribe to a GDS and pay a subscription fee if they do not meet the high volume requirements for fee waivers. In turn, the GDS pays the on-line agency incentive payments for bookings, while charging airlines booking fees. In addition, some airlines make payments to these independent on-line travel agencies. Consumers also typically pay a $5-$10 fee to the new on-line sites for each ticket. In Expedia’s case, since it is Worldspan’s largest subscriber, it does not pay GDS subscription fees. Furthermore, since it books in such high volumes, it receives negotiated payments from its GDS and certain airlines. Other independent on-line travel agencies, sometimes referred to as “opaque” travel distributors, have also entered the airline ticket distribution industry, typically offering low-cost tickets to consumers in exchange for less flexibility or choice. Opaque travel distributors book through GDSs to sell what the industry refers to as “distressed inventory.” Analogous to a deep discount store or an outlet store, opaque distributors, such as Priceline.com, take bids from consumers for airline tickets. 
However, the consumer will know neither the carrier nor the exact departure times for his itinerary until after an airline accepts the consumer’s bid and the ticket is bought and paid for. Despite the fact that airlines pay commissions and overrides as well as GDS fees for these on-line travel agency bookings, these bookings cost airlines less than bookings made through traditional travel agencies. This is in part because on-line consumers generally must purchase the ticket at the time of reservation, which prevents repeated bookings, cancellations, and rebookings prior to purchase and thus reduces the “churn” that airlines say is costly. A traditional travel agent has the capacity to make changes to a consumer’s itinerary; however, any change to a reservation requires additional GDS processing. GDSs charge the airlines a small amount for each cancellation and rebooking, so each such change adds to total airline distribution costs. In 1999, on average, each ticket booked via a traditional travel agent cost an airline a total of $45.93, compared to $23.40 and $25.12 for airline Website and on-line travel agency sites, respectively. Although costs associated with each of these distribution methods have decreased, bookings made through traditional travel agencies continue to cost much more than those made on line. From 1999 through 2002, the average cost to an airline for a booking made through a traditional travel agency decreased by 33 percent to $30.66, while the average cost to an airline for a booking on its own Website decreased by 50 percent to $11.75. Over the same period, the average cost to airlines for bookings made through on-line travel agencies decreased 23 percent to $19.43. Figure 5 illustrates the change in average airline distribution costs by distribution method. Airlines have taken steps to encourage travelers to book tickets through less expensive, on-line distribution methods.
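The per-booking cost declines cited above are consistent with the underlying dollar figures, as a quick percentage calculation shows (a minimal sketch; all dollar amounts are those reported in the text):

```python
def pct_decline(cost_1999, cost_2002):
    """Percentage decline from the 1999 cost to the 2002 cost."""
    return (cost_1999 - cost_2002) / cost_1999 * 100

# Average cost to an airline per booking, 1999 vs. 2002, by channel
channels = {
    "traditional travel agency": (45.93, 30.66),
    "airline's own Website": (23.40, 11.75),
    "on-line travel agency": (25.12, 19.43),
}
for channel, (old, new) in channels.items():
    # Matches the 33, 50, and 23 percent declines cited in the text
    print(f"{channel}: down {pct_decline(old, new):.0f}% to ${new:.2f}")
```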
Some airlines have instituted a fee for travelers who receive a paper ticket through a traditional travel agent. For example, Northwest charges a $50 fee for a paper ticket as opposed to electronic tickets. Airlines may also reward on-line bookers with loyalty incentives (i.e., frequent flyer program bonuses). For instance, travelers booking on line with American may earn up to 1,000 AAdvantage® Bonus miles. Airlines—both directly and through on-line travel agencies—have also offered special “Webfares” and last minute Internet-only deals to encourage consumers to book tickets on the Internet. While airlines continue to sell a significant proportion of their tickets through traditional travel agencies, the number of tickets sold through on-line distribution methods, including airline Websites and on-line travel agencies, has increased rapidly since the late 1990s. Between 1999 and 2002, on average, the percentage of tickets that consumers booked through traditional travel agents fell from 67 percent to 46 percent. By comparison, the percentage of tickets booked on line (using both on-line travel agencies and airlines’ own Websites) increased from 7 percent to 30 percent from 1999 to 2002. Throughout that same time period, airlines sold the remainder (roughly 25 percent) directly to consumers via their call centers (1-800 numbers). Figure 6 illustrates the change in distribution methods between 1999 and 2002. While business travelers generally continue to rely on traditional travel agents, trends suggest that leisure travelers are adopting the Internet as an alternative to traditional travel agents. The National Commission to Ensure Consumer Information and Choice in the Airline Industry (NCECIC) reported in 2002 that business travel—usually the highest yield traffic for airlines—is often contracted out to travel agencies to manage. 
As a result, airlines report that traditional travel agencies (and therefore GDSs) will continue to play a vital role in the distribution of airline tickets. On the other hand, an increasing percentage of leisure travel is now booked via the Internet. Bookings continue to be predominantly processed by GDSs, but since the late 1990s the percentage of on-line bookings processed through airline internal reservation systems and Orbitz Supplier Link technology has increased. However, sales through traditional travel agents continue to account for the majority of airline revenue, in large part because higher-priced business travel continues to be managed through traditional travel agencies. Figure 7 illustrates how the number of major U.S. airlines’ bookings processed through GDSs and GDS bypasses changed from 1999 to 2002. Travel agent reimbursement patterns have shifted significantly since the late 1990s. Much of the shift was caused by the airlines, which by 1998 had reduced or ultimately ended the traditional practice of offering a flat published “base” commission (traditionally a percentage of each ticket price, later a flat fee per ticket) to all travel agents as a means of reducing distribution costs. Partly because the CRS rules do not govern airlines’ relationships with travel agencies, airlines were free to change their payments to travel agents in a way they were not free to do with GDSs; they now use a system of privately negotiated commission arrangements with individual travel agencies. Not all travel agencies are able to negotiate such individual commission arrangements, and the terms of such agreements vary among travel agencies and among airlines. From 1999 to 2002, average annual payments by airlines to travel agencies decreased by 57 percent, from $370 million to $159 million, as airlines provided override commissions predominantly to those travel agencies with high ticket sales.
Figure 8 illustrates the decline in average commission payments by airlines to travel agencies in relation to total distribution costs. From 1999 to 2002, on average, major airlines reduced their total distribution costs by 25.8 percent, from $732.9 million to $543.6 million, or 43.6 percent on a per booking basis. Most of that reduction occurred in the payments by airlines to travel agencies, which decreased by 57 percent, from $370 million to $159 million. Despite a decrease of 8.5 percent in passenger traffic between 2000 and 2002, remaining distribution costs--which include rising GDS fees, as well as overhead, personnel, advertising, and credit card fees--were essentially unchanged over the period. The largest travel agencies—those with total annual revenues in excess of $50 million—represent less than 1 percent of travel agencies, but book almost 60 percent of total travel agent sales. By definition, because of their large volumes of sales, these large travel agencies are most likely to receive the majority of the airlines’ override commissions. As airlines cut traditional travel agent ticket commissions, GDSs began increasing incentive payments to travel agencies. According to an official of a domestic GDS, since airlines (and, subsequently, other travel suppliers) reduced travel agent commissions, travel agencies sought out replacement sources of revenue, and GDSs responded with incentive payment increases. Large travel agencies were able to use their position in the industry between the GDSs and large segments of the traveling public to convince the GDSs to provide some form of incentive payment. At the same time, GDSs use incentive payments to compete for travel agent market share and to incentivize travel agents to book on their particular GDS. Generally, as with airlines’ override commissions, a GDS pays incentives to those travel agencies with high booking volumes, as each booking results in the GDS receiving a fee from the airline. 
Between 1995 and 2002, on average, each GDS’s incentive payments to travel agencies increased from $22.3 million to $233.4 million (over 900 percent). Figure 9 illustrates the average change in each GDS’s payments to U.S. travel agents since 1995. Shifts in travel agent payments have also occurred between travel agents and consumers. After airlines ended automatic base commissions, many travel agencies began to charge consumers service fees for booking tickets—fees previously included in the ticket price in the form of a commission that was invisible to the consumer. Figure 10 illustrates the current flow of payments among the four participants in the airline ticket distribution industry. Compared to figure 3, it illustrates some changes that have taken place in the airline ticket distribution industry since the late 1990s—particularly the advent of various Internet booking methods, airline-initiated sites that bypass GDSs, the new flow of payments to travel agencies, and new service fees imposed on consumers. While each change—increased use of the Internet to process and sell tickets and reductions in airline payments to travel agencies—has contributed to the lowering of overall airline distribution costs, neither has reduced the effective requirement that nearly every major airline participate in and pay booking fees to each GDS. As previously stated, airlines continue to process over 60 percent of their tickets—mostly high-yield business traffic—through the GDSs. Furthermore, airlines continue to need to subscribe to each GDS in order to reach all consumers. As DOJ described it in comments submitted to DOT during a 1997 review of the CRS rules, from an airline’s perspective, because each CRS provides access to a large, discrete group of travel agencies, each CRS constitutes a separate market. And unless the airline is willing to forego access to those travel agencies and the consumers they serve, it must participate in every CRS.
Large travel agencies and consumers who use the Internet appear to have benefited most from recent changes in the airline ticket distribution industry. Small travel agencies and the consumers who patronize them appear to have benefited least, if not been disadvantaged. Since the late 1990s, the number of very large travel agencies (i.e., those with total annual sales in excess of $50 million) has stayed approximately the same, but their total annual air travel sales have almost doubled. Because the largest travel agencies sell more air travel than any other category of travel agency, by definition they would likely qualify for both GDS incentive payments and airline override commissions. During this same period, the number of small travel agencies has steadily declined, as have their total annual air sales. Figure 11 illustrates changes in the number of different sized travel agencies and their sales of air travel over time. The increase in on-line bookings appears to have had a more negative effect on smaller travel agencies than on large travel agencies because of general differences in the nature of their clientele. Leisure travelers increasingly book on line—usually well in advance with simple itineraries. According to the DOJ, leisure travelers with relatively simple itineraries are best suited to using the Internet. On-line travel agencies sell most tickets to price-sensitive leisure passengers. In contrast, business consumers, who often use large travel agencies, are not likely to book on line because of restrictive corporate policies and complex business itineraries that are often subject to short notice changes. Those travel agencies also may provide reporting and record keeping services for large business customers. According to officials from the NCECIC and the American Society of Travel Agents, small travel agencies are confronting financial pressure from both airlines and GDSs. 
First, small travel agencies may have difficulty securing airline override commissions or GDS incentive payments because of sales volume requirements. In addition, small travel agencies often must pay for GDS service and equipment, while these fees are frequently waived for agencies with high sales volumes. To survive, many smaller travel agencies have become focused on niche travel markets–for example, regional travel, hiking/biking travel, and cruise line travel–and charge service fees to clients. The availability of Internet distribution methods appears to have positively affected Internet users. These methods provide fare and schedule information to consumers, and provide consumers with a number of Websites on which they can compare fare and schedule options. Moreover, consumers who use the Internet have access to less expensive webfares offered by the airlines. Airlines use such fares to encourage consumers to use Internet travel sites, as they are less expensive to the airlines. For instance, the results of a 2001 Forrester Research survey of Internet users, which the NCECIC included in their 2002 report to Congress and the President, found that people who booked on line preferred doing so because they can readily compare various on-line travel sites, as well as access more diverse fares (i.e., webfares) than they can through a traditional travel agent. Furthermore, on-line customers may also avoid the higher ticketing fee that some travel agencies now charge (up to $50), although many on-line travel agencies may charge their own smaller ticketing fees ($5-$10). Finally, the public perceives that booking on line is less expensive than booking through a traditional travel agent. Conversely, consumers purchasing tickets on airline Websites may not have complete and unbiased information when booking flights, which is important in a competitive industry. 
For example, Orbitz.com does not include schedule and fare information for certain low fare airlines, such as Southwest and JetBlue because these airlines have chosen not to participate. Travelers who choose not to buy airline tickets on line, or who do not have Internet access, may be at a relative price disadvantage. Travelers using a traditional travel agent may pay a service charge of up to $50. In addition, travelers who do not choose to use the now standard “electronic ticket” may be charged an extra fee by the airline for a paper ticket. And as noted before, a travel agent may not have access to special webfares. But travelers who do use traditional travel agents may benefit from the added flexibility of being able to change their reservation. An on-line travel agency booking is often difficult to change, especially if it is a low fare that is nonrefundable or subject to other restrictions. On the other hand, with the power to change a booking through the GDS, travel agents say they act as the consumer’s advocate with an airline, with consumers benefiting from the detailed knowledge and personal interaction that a travel agent can provide. Business travelers are continuing to use traditional travel agencies to manage their travel because of corporate travel policies, including negotiated “private fares.” According to the National Business Travel Association, less than 10 percent of corporate travel is booked through the Internet and many corporations forbid their employees from booking travel on the Internet, even if employees find a lower fare through that distribution method. Corporate travel policies can limit the employees’ ability to use the Internet in booking travel because they often require employees to use a contracted travel agency, through which they are booked on corporate contract carriers. 
Because we lacked access to proprietary company data on costs and revenues, we could not develop the sort of evidence that would allow us to determine whether GDSs exert market power in the airline ticket distribution industry. Booking fees charged by GDSs to airlines have risen over the past several years. From 1996 to 2001, the typical booking fee paid by a major airline increased by 30.9 percent, from $3.27 in 1996 to $4.28 in 2001, a change greater than the overall inflation rate (as measured by the Gross Domestic Product chain-type price index) of 9.4 percent during this same time period. According to GDS officials, during this time period the services and products offered by GDSs were enhanced and now deliver substantial benefits to airlines (e.g., e-ticketing). Furthermore, one GDS official estimates that about 40 percent of its self-reported software development costs go toward meeting supplier (e.g., airline) needs. Because much financial information is proprietary, we were unable to obtain a full breakdown of GDSs’ costs in order to isolate the specific costs directly associated with the booking function (“transaction costs”). However, two GDS-reported costs associated with the booking function for which we were able to get data both rose between 1996 and 2002: GDS computing costs (i.e., total data center operating costs) and travel agent incentive payments. Computing costs have increased, but because of inconsistent data reported by the GDSs, we were unable to determine the precise increase. However, the GDS computing cost increase is in contrast to general industry computing cost trends, which have decreased by over 60 percent since the mid-1990s.
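As a rough check on the figures above, the nominal and inflation-adjusted growth in the typical booking fee can be computed directly from the dollar amounts and the price-index change cited in the text. The real-terms calculation is our own illustration, not a figure from the report:

```python
# Booking fee figures cited above, in dollars per booking.
fee_1996, fee_2001 = 3.27, 4.28
inflation_1996_2001 = 0.094  # GDP chain-type price index change, 1996-2001

# Nominal growth matches the 30.9 percent cited in the text.
nominal_growth = fee_2001 / fee_1996 - 1

# Deflating the 2001 fee to 1996 dollars isolates the real increase,
# i.e., the rise in booking fees over and above general inflation.
real_growth = (fee_2001 / (1 + inflation_1996_2001)) / fee_1996 - 1

print(f"nominal: {nominal_growth:.1%}")  # 30.9%
print(f"real:    {real_growth:.1%}")     # 19.6%
```

The comparison shows that even after stripping out inflation, booking fees rose by roughly a fifth in real terms over the period.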
According to officials with the GDSs, their computing costs per booking rose relative to commercial sector computing costs because (1) bookings have become more complex, requiring more processing to complete and (2) the volume of transactions shopping for low fares that do not result in a booking has risen, especially for on-line travel agencies used by consumers. They stated that the additional processing required offset any general decrease in computing costs. For example, airlines have offered more types of fares to consumers (e.g., “private fares” available to large corporate clients, government fares, and conference specials). Many of these fares are stated as a percentage of the full coach fare, which airlines can change several times daily. GDSs must quickly match the correct fare with each customer for each specific flight. Moreover, GDS officials also stated that airlines are keeping more detailed Passenger Name Records with all reservations. The amounts of data that the GDSs track with these records have also increased over time, as airlines have made efforts to better serve passengers (e.g., frequent flyer accounts and seating preferences). It is unclear how much of this increasing GDS functionality, the costs of which are presumably passed on to the airlines through increases in booking fees, adds value for the airlines. Some airlines have complained that they do not need certain elements of the increased functionality (e.g., seat maps) and are paying for something they do not want at a time when they are struggling financially. As discussed above, GDSs’ incentive payments to travel agencies have increased. GDSs provide incentive payments to travel agencies to reward them for using their system. The largest travel agencies were able to use their position in the industry between the GDSs and large segments of the traveling public to convince the GDSs to provide increased incentive payments. 
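The offsetting effect the GDS officials describe above can be written as a simple identity: per-booking computing cost equals the cost per MIPS times the MIPS needed per booking. The sketch below uses hypothetical numbers (the report does not disclose actual per-MIPS figures) purely to illustrate how cheaper computing can still produce a rising cost per booking:

```python
# Hypothetical illustration: per-MIPS cost falls, but MIPS per booking rises.
def cost_per_booking(cost_per_mips: float, mips_per_booking: float) -> float:
    """Per-booking computing cost as the product of the two factors."""
    return cost_per_mips * mips_per_booking

before = cost_per_booking(cost_per_mips=1.00, mips_per_booking=1.0)
# Suppose per-MIPS cost falls 40 percent while the processing needed per
# booking (more fare types, larger passenger records) more than doubles.
after = cost_per_booking(cost_per_mips=0.60, mips_per_booking=2.2)

print(before, after)  # per-booking cost rises even though MIPS got cheaper
```

Under these assumed numbers, the per-booking cost rises about 32 percent despite a substantial drop in the unit price of computing, which is the mechanism the GDSs cite.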
On average, incentive payments from GDSs to travel agencies increased by over 500 percent from 1996 to 2002, rising from $34.9 million to $233.4 million. Computing costs and travel agent incentive payments do not encompass all airline ticket booking-related costs, and we were unable to get financial data on other costs (e.g., booking-related hardware costs) related to GDSs’ airline ticket booking function, which might have allowed us to determine a relationship between booking fees and related costs and to consider what the relationship indicated about the presence and possible exercise of market power by the GDSs. To identify other information about the possible existence and use of market power, we reviewed the comments submitted to DOT since its November 2002 Notice of Proposed Rulemaking of the CRS rules. GDSs stated that they do not have market power. However, some airlines contend that they do operate under GDS market power. For example, America West contends that each CRS exercises monopoly power over it. In its June 9, 2003, comments to DOT, DOJ concluded based on its market structure analysis that despite the recent growth of Internet distribution, GDSs continue to have market power over airlines. DOJ found no evidence that existing regulations designed to erode that power had succeeded in the past or are likely to improve the situation in the future. Rather, DOJ concluded that many of the existing regulations have been ineffective in reducing GDS market power, which derives from the inability of most airlines to withdraw from any GDS. DOJ noted that while the CRS rules have been effective in eliminating discriminatory pricing (charging different fees to target specific airline competitors), they have not prevented GDSs from charging fees above competitive levels.
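The growth percentages cited for incentive payments follow directly from the dollar amounts; a quick arithmetic check, using only figures from the text:

```python
def pct_increase(start: float, end: float) -> float:
    """Percentage increase from start to end."""
    return (end - start) / start * 100

# Average incentive payments per GDS to travel agencies, in $ millions.
print(round(pct_increase(34.9, 233.4)))  # 1996-2002: 569 ("over 500 percent")
print(round(pct_increase(22.3, 233.4)))  # 1995-2002: 947 ("over 900 percent")
```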
Nevertheless, DOJ concluded that recent changes in the industry have eliminated the need for or utility of most of the CRS rules and that anticompetitive practices should instead be addressed through case-by-case antitrust investigations. A competitive airline ticket distribution industry, which includes the airline, GDS, and travel agent industries, continues to be important because noncompetitive practices may adversely affect airlines and consumers. Originally, the CRS rules were focused on reducing the market power of airline-owned CRSs to prevent owner airlines from using the CRSs to gain a competitive advantage over nonowner airlines. With the GDSs now independent from the airlines, questions have been raised regarding the GDSs’ exercise of market power over all airlines. Among other things, because GDSs do not compete with each other for airline business, airlines and consumers may be subject to prices that are higher than in more competitive markets. While our limited ability to get complete booking cost and fee data from the GDSs did not allow us to independently evaluate whether GDSs currently exercise market power, the market position of large travel agencies, or the overall performance of the industry, evidence that we developed in this review suggests both a functioning market and competitive flaws. On the one hand, our review provides some indications of a market that is functioning and adaptive. For example, the use of the Internet has grown significantly, and overall distribution prices paid by airlines for each form of distribution have fallen. In addition, the development and evolution of Orbitz and the expansion of direct airline Internet booking reflect that at least some lower-cost substitutes for GDSs have emerged. Airlines and other participants in the ticket distribution system have developed an ability to use Internet innovations to limit distribution expenses.
Similarly, the Internet’s ability to provide consumers with access to a wide variety of often low-cost fares (i.e., transparency) has arguably benefited them. On the other hand, our review also highlights issues that suggest the continued possibility of GDS market power as well as the growing power of large travel agencies. The structure of the industry, in which airlines are dependent upon the GDSs to obtain ultimate access to large portions of travel agents and potential passengers (especially high-yield business traffic), perpetuates the potential for the existence and exercise of market power by GDSs. Although Orbitz may offer a technological substitute that mitigates the market power of GDSs for some airlines, Orbitz’s relationship with major airlines has raised different concerns about the potential for owner airlines once again using their ownership position to distort airline competition. Our review also indicates that the largest travel agencies, upon whom both airlines and GDSs depend to reach a large percentage of the higher-paying business travelers, currently have considerable leverage in the industry. This leverage is reflected by their ability to obtain rising incentive payments from GDSs as well as commission and override payments from airlines. The innovation that has occurred in the airline ticket distribution industry—particularly the growth of the Internet—is noteworthy. These innovations occurred under the framework of federal regulations, which DOT is currently reviewing. DOJ stated that some of these rules have failed to accomplish their goals and therefore need to be removed. At the same time, DOJ’s antitrust review of Orbitz continues. Thus, the federal interaction with the industry continues on both an industry-wide and a case-by-case basis. It will also be important to continue monitoring how developments in the industry affect competition and consumers. We provided a draft of this report to DOT for review and comment.
DOT provided us with technical comments, which we incorporated where applicable. We also provided relevant sections of this report to DOJ, the three major U.S. GDSs, Orbitz, and most major U.S. airlines for review. These organizations provided technical corrections, which we incorporated as appropriate. We will send copies of this report to the Honorable Norman Mineta, Secretary, Department of Transportation. We will make copies available to others on request. In addition, the report will be available at no charge on our Website at www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-2834. I can also be reached at HeckerJ@gao.gov, or Steve Martin at MartinS@gao.gov. Appendix III lists key contacts and key contributors to this report. This report examines three questions: What have been major changes in the airline ticket distribution industry since the late 1990s, and how did these changes affect airlines? How have these changes in the airline ticket distribution industry affected travel agents and consumers? What does the relationship between global distribution systems’ booking fees and booking-related costs suggest about the presence and use of market power? We limited the scope of this review to the three global distribution systems (GDS) that handle over 90 percent of U.S. airline bookings. These three GDSs are Galileo, Sabre, and Worldspan. We excluded other GDSs that operate predominantly in other countries. Those excluded from this review include Abacus, Amadeus, Axess, Infini, and Topas. In addition, we did not have access to the individual contracts between the various industry entities; therefore, the descriptions of the relationships are generalizations.
To determine how the airline ticket distribution industry has changed and the effects on airlines since the late 1990s, we analyzed industry booking trend and cost data (e.g., airline and GDS payments, annual airline expenditures per distribution method). These data are proprietary, so we agreed to aggregate them so that no private company materials or information would be publicly disclosed in an identifiable form. Consequently, all data are reported in averages. Furthermore, since these data are proprietary, we were unable to independently verify them because we have no authority to require access to the underlying data. However, we applied logical tests to the data and found no obvious errors of completeness or accuracy. Along with our use of corroborating evidence, we believe that the data were sufficiently reliable for our use. In addition, we examined documents from the Department of Transportation (DOT). We interviewed DOT officials, Department of Justice (DOJ) officials, industry experts, the three domestically based GDSs, seven major airlines, and four travel agencies (i.e., a small traditional travel agency and the three leading on-line travel sites—Travelocity, Expedia, and Orbitz). We attempted to interview all of the major travel agencies, but the top three would not agree to meet with us. In addition, we were unable to obtain any airline or GDS cost data related specifically to those travel agencies. To describe how changes in the airline ticket distribution industry have affected travel agents and consumers, we analyzed travel agent data (e.g., sales and revenues). We obtained these data from the National Commission to Ensure Consumer Information and Choice (NCECIC), a commission authorized under Section 228 of the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (P.L.
106-181, AIR-21) to study two distinct issues—first, the current state of the travel industry and the impact of changes in the industry on consumers; and second, the potential for impediments to distribution of information to cause injury to agencies and consumers. We contacted the Airline Reporting Corporation (ARC), the source of the NCECIC travel agent data, to clarify the nature of the data and, on that basis, determined that the data were reliable for our purposes. Lastly, we interviewed travel agents, industry group representatives, and officials from the NCECIC. To determine the relationship between GDSs’ booking fees and booking-related costs and what it may suggest about the presence and use of market power, we analyzed GDS booking fee and cost data (e.g., computing costs and travel agent incentives). We obtained these data from the three U.S. GDSs. Since these data are proprietary, we agreed to aggregate them so that no private company materials or information would be publicly disclosed by us in an identifiable form. Consequently, all data are reported in averages. Furthermore, since these data are proprietary, we were unable to independently verify them because we have no authority to require access to the data. However, we applied logical tests to the data and found no obvious errors of completeness or accuracy. We believe that the data are sufficiently reliable for our use. We analyzed specific booking fee-related costs that were available to us—computing costs and travel agent incentive payments. Computing costs are based on data center operations costs, including hardware, software, leases, and personnel costs. We compared trends in these computing costs with industry computing cost trends using mainframe data center costs from the Gartner Group, a well-known research and advisory firm that helps its clients understand technology and drive business growth. We were limited in our review because we did not have full access to proprietary data.
One of the GDSs (Worldspan) is privately held and does not file financial data with the U.S. Securities and Exchange Commission (SEC). Although Sabre and Galileo are publicly held and file financial data with the SEC, they are not required to disaggregate cost data. Moreover, it is difficult to compare even the data that Sabre and Galileo did provide, since they may report their costs differently, as the Generally Accepted Accounting Principles allow companies to allocate costs in various ways. Therefore, we were not able to obtain complete and detailed data from the GDSs on all costs directly related to booking transactions. However, we did review the comments that were submitted to DOT regarding its review of the CRS rules. Prominent among those were the June 9, 2003, DOJ comments, which were based on DOJ’s expert market structure analysis. We also discussed with DOJ the comments it submitted. In addition, we sought cost and booking data that dated from 1978 to the present. However, no airline was able to provide data for a time earlier than 1996. Consequently, we limited our review to the 4 years covering the period 1999 to 2002. We conducted our review between September 2002 and July 2003 in accordance with generally accepted government auditing standards. According to the Gartner Group, overall mainframe data center costs continued to decrease every year from 1994 through 1998. The Gartner Group found that on a per-millions-of-instructions-per-second (MIPS) basis (a common measure of usage), data center costs decreased during the same time period. Our analysis of the global distribution systems’ (GDS) per-MIPS computing cost (cost per MIPS) suggests that GDS per-MIPS costs also decreased from 1995 through 2002. Thus, on a per-MIPS basis, the general trend of computing costs incurred by the GDSs seems to be consistent with the industry trend reported by the Gartner Group for the years 1994 through 1998.
For technology-based companies like GDSs, an important cost measure is the computing cost per booking. This measure is significant because GDSs generate revenue largely based on the volume of booking transactions processed. On an annual basis, we found that the computing cost per booking increased slightly between 1996 and 2001, the years for which we had relevant data from most of the GDSs. According to the GDSs, the per-booking computing cost has risen because each booking has become more complex over time, requiring more processing—more MIPS—to complete a booking, thereby more than offsetting any decrease in per-MIPS computing costs. One way to explain the increasing complexity of bookings is through the number of messages that are required to complete a booking. A message is typically a single command typed by a travel agent in a GDS reservation system. A message is sent every time a travel agent types a command and hits the Enter key on the keyboard. For example, for one GDS, the number of instructions needed to process each message increased by 58 percent from 1999 to 2002. For that GDS, the average number of messages required for each booking increased by 118.6 percent from 1993 to 2002. Moreover, a message can be very simple (e.g., what gate is flight 442 scheduled to arrive at in Dallas today?) or very complex (e.g., what is the cheapest itinerary available to fly roundtrip between Los Angeles International Airport and any of New York City’s three major airports, departing next Tuesday morning?). In addition to those individuals named above, Naba Barkakati, Triana Bash, Carmen Donohue, Brandon Haller, David Hooper, Joseph Kile, Sara Moessbauer, and Alwynne Wilbur made key contributions to this report.
In 2002, when major U.S. airlines posted net operating losses of almost $10 billion, they paid over $7 billion to distribute tickets to consumers. Of these total distribution expenses, airlines paid hundreds of millions of dollars in booking fees to global distribution systems--the companies that package airline flight schedule and fare information so that travel agents can query it to "book" (i.e., reserve and purchase) flights for consumers. Each time a consumer purchases an airline ticket through a travel agent, the global distribution system used by the travel agent charges the airline a set booking fee. Concerns have been raised that the global distribution systems may exercise market power over the airlines because most carriers are still largely dependent on each of the global distribution systems for distributing tickets to different travel agents and consumers and therefore must subscribe and pay fees to each. Market power would allow global distribution systems to charge high, noncompetitive fees to airlines, costs that may be passed on to consumers. GAO was asked to examine changes in the airline ticket distribution industry since the late 1990s and the effects on airlines, the impact of these changes on travel agents and consumers, and what the relationship between global distribution systems' booking fees and related costs suggests about the use of market power. Since the mid-1990s, two major changes occurred in the airline ticket distribution industry, and these have produced cost savings for some major U.S. airlines. First, airlines developed less expensive Internet ticketing sites that bypass global distribution systems and their fees and encouraged passengers to book via Internet sites. Between 1999 and 2002, on average, the percentage of tickets booked on-line, including airline-owned Websites and on-line travel agencies, grew from 7 percent to 30 percent.
Second, in a related effort to trim costs, airlines cut the commissions they traditionally paid to travel agencies. However, these changes have not eliminated airline dependence on global distribution systems. These changes have had mixed effects on travel agents and consumers. Very large travel agencies (those with more than $50 million in annual air travel sales revenue) appear to have benefited from volume-based incentive payments from airlines and global distribution systems, while smaller travel agencies have closed or lost business, especially to on-line travel Websites. Consumers who use the Internet have benefited from lower internet-only fares. Travelers who do not buy airline tickets on line may be at a disadvantage in not having access to these fares. Because we lacked access to proprietary company information, we could not determine the precise relationship between global distribution system booking fees and related costs, and thus could reach no conclusions about potential exercise of market power by global distribution systems in the airline ticket distribution industry. Since 1996, booking fees and some costs related to the booking function--computing costs and travel agent incentive payments--both increased. However, we could not obtain data on all expenses related to the booking function, and thus could not accurately compare these costs to booking fees. DOT provided us with technical comments, which we incorporated as appropriate.
Consumers may obtain health insurance from a variety of public and private sources, which can help protect them from the costs associated with obtaining medical care. Health insurance typically includes costs to consumers, which may vary based on a number of factors, including scope of coverage, cost-sharing provisions, and federal or state requirements. Recent federal laws—specifically, PPACA and the Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA)—further define coverage and cost parameters for certain health insurance plans available to consumers now and in 2014, when exchanges are required to be operational, and include provisions to increase children’s access to coverage. Unlike states that opt to include coverage for eligible children under a CHIP-funded expansion of Medicaid, and therefore must extend Medicaid-covered services to CHIP-eligible individuals, states with separate CHIP programs have flexibility in program design and are at liberty to modify certain aspects of their programs, such as coverage and cost-sharing requirements. For example, federal laws and regulations allow states with separate CHIP programs to offer one of four types of health benefit coverage and, regardless of the benefit coverage option states choose, require states’ separate CHIP programs to include coverage for routine check-ups, immunizations, and emergency services. States typically cover a broad array of services in their separate CHIP programs, and some states adopt the Medicaid requirement to cover Early and Periodic Screening, Diagnostic and Treatment (EPSDT) services.
Effective October 1, 2009, CHIPRA required CHIP plans to cover dental services defined as “necessary to prevent disease and promote oral health, restore oral structures to health and function, and treat emergency conditions.” CHIPRA also required states to comply with mental health parity requirements—meaning they must apply any financial requirements or limits on mental health or substance abuse benefits under their separate CHIP plans in the same manner as applied to medical and surgical benefits. States covering EPSDT services under separate CHIP plans were deemed to comply with these requirements. With respect to costs to consumers, CHIP premiums and cost-sharing may not exceed maximum amounts defined by law. States may vary CHIP premiums and cost-sharing based on income and family size, as long as cost-sharing for higher-income children is not lower than for lower-income children. Federal laws and regulations also impose additional limits on premiums and cost-sharing for children in families with incomes at or below 150 percent of the federal poverty level (FPL). For example, the range of copayments was $1.15 to $5.70 per service in 2009 for children in families with incomes between 100 and 150 percent of FPL. In all cases, no cost-sharing can be required for preventive services—defined as well-baby and well-child care, including age-appropriate immunizations—or for pregnancy-related services. In addition, states may not impose premiums and cost-sharing, in the aggregate, that exceed 5 percent of a family’s total income for the length of the child’s eligibility period in CHIP. Children’s access to affordable health insurance and health care can be affected by many different factors, and CHIPRA and PPACA also contain provisions to facilitate eligible children’s access to CHIP.
For example, CHIPRA appropriated funding for outreach grants to states and other organizations to help increase enrollment of CHIP-eligible children for federal fiscal years 2009 through 2013, as well as performance bonuses for states that simplify CHIP enrollment and retention by applying certain program reforms. PPACA provisions that aim to facilitate eligible children’s access to CHIP include appropriating additional funding for CHIPRA outreach grants through federal fiscal year 2015. PPACA also requires states to maintain CHIP eligibility standards for children through September 2019. In accordance with this requirement, states are prohibited from increasing existing premiums or imposing new premiums except in limited circumstances. PPACA requires the establishment of exchanges in all states by January 1, 2014, to allow consumers to compare health insurance options available in that state and enroll in coverage. The exchanges will offer QHPs that are certified and offered by participating issuers of coverage. PPACA further requires QHPs offered through an exchange to comply with applicable private insurance market reforms, including relevant premium rating requirements, the elimination of lifetime and annual dollar limits on essential health benefits (EHB), the prohibition of cost-sharing for preventive services, mental health parity requirements, and the offering of comprehensive coverage. With respect to comprehensive coverage, PPACA requires QHPs offered through an exchange to cover 10 categories of EHBs, limit cost-sharing associated with this coverage, and provide one of four levels of coverage determined by the plan’s actuarial value. By the end of December 2012, states had either selected a base-benchmark plan or been assigned the default base-benchmark plan by HHS. In over 80 percent of states, the largest plan by enrollment in the largest product by enrollment in the state’s small group market was established as the base-benchmark plan.
In addition, in states where the base-benchmark plan did not include coverage for pediatric dental or vision services, the state (or HHS, in the case of a federally established default benchmark plan) was required to supplement coverage with the addition of the entire category of pediatric dental or vision benefits from either (i) the Federal Employees Dental and Vision Insurance Program (FEDVIP) dental or vision plan with the largest national enrollment of federal employees, or (ii) the benefits available under the plan in the state’s separate CHIP program with the highest enrollment, if a separate CHIP program existed. PPACA also allows exchanges in each state the option of providing pediatric dental services using a stand-alone dental plan (SADP). In exchanges with at least one participating SADP, QHPs will have the option of excluding pediatric dental benefits from their covered services. In our five selected states, CHIP and benchmark plans generally covered the services we reviewed and were similar in terms of the services on which they imposed day, visit, or dollar limits. CHIP officials in our selected states expected minimal or no changes to CHIP coverage in 2014 and expected that the QHPs offered through the exchanges would reflect states’ benchmark plans and PPACA requirements. We determined that the CHIP and benchmark plans in our five selected states were comparable in that they included some level of coverage for nearly all the services we reviewed. Exceptions were hearing-related services, such as tests or hearing aids, which were not covered by the benchmark plan in Kansas, and outpatient therapies for habilitation, which were not covered by CHIP plans in Kansas and Utah or by the benchmark plans in Colorado, Kansas, or New York. (See app. II for a detailed list of selected services covered by each state.)
The benchmark plan coverage for pediatric dental and vision services was often the same as that in the CHIP plan because the base-benchmark plan, which was typically based on the largest plan by enrollment from each state’s small group market, did not cover these services, and the states often selected CHIP as the supplementary coverage model. In particular, the base-benchmark plan in four states did not cover pediatric dental services and in three states did not cover pediatric vision services. Because pediatric dental and vision services are EHBs, these states were required to select supplemental benchmark plans to bridge the coverage gaps, and often selected CHIP as the supplement. National data from HHS suggest that nearly all states supplemented the base-benchmark plan with pediatric dental and vision plans. According to HHS, 50 and 46 states had to identify supplemental pediatric dental and vision plans, respectively, and more than half of the states selected the FEDVIP plan as the supplement for each service. The CHIP and benchmark plans we reviewed were also generally similar in terms of the services on which they imposed day, visit, or dollar limits. For example, the plans we reviewed were similar in that they typically did not impose any such limits on ambulatory patient services, emergency care, preventive care, or prescription drugs, but commonly did impose limits on outpatient therapies and pediatric dental, vision, and hearing services. One notable difference between CHIP and benchmark plans we reviewed was the frequency with which they limited home- and community-based health care services. While the benchmark plans in four states imposed day or visit limits on these services, only one state’s CHIP plan did so. (See fig. 1.)
For services where both plan types imposed limits, our review of plan Evidences of Coverage found that, except for dental and vision services, annual limits were less clearly comparable between plan types, though at times the CHIP limits were more generous. For example, Utah’s benchmark plan limited home- and community-based health care services to 30 visits per year while the state’s CHIP plan did not impose any limits on this service. Comparability between annual service limits in states’ CHIP and benchmark plans was least clear for outpatient therapy services. For example, the Colorado CHIP plan limited outpatient therapy to 40 visits per diagnosis compared to 20 visits per therapy type in the benchmark plan. Similarly, the New York CHIP plan allowed a maximum of six weeks for physical therapy while the benchmark plan allowed up to 60 visits per condition per lifetime. Limits on dental and vision services were largely comparable, due to the selection of CHIP as the supplemental benchmark for those services in most of the selected states. Table 1 provides examples of annual limits for select services in CHIP and benchmark plans, and app. III lists annual limits for all services we reviewed. CHIP officials in all five states said that they expect the services we reviewed that were covered by their respective CHIP plans, and any relevant limits on these services, to remain largely unchanged in 2014. With respect to QHP coverage, state officials in all five states expect 2014 coverage to reflect PPACA and its implementing requirements, including being comparable to their respective benchmark plans. For example, QHPs must offer EHB services at levels that are substantially equal to their respective state’s benchmark plans. With state approval, QHPs may substitute services that are actuarially equivalent and from the same EHB category as the service being replaced.
The actuarial equivalence requirement also applies to dental benefits provided by SADPs, which are expected to be available in all five selected states, according to state officials. Exchange officials in three of the selected states—Colorado, Illinois, and Kansas—commented on the advantages and disadvantages of SADPs. While their availability could benefit consumers by offering a broader set of options for dental services, it could also create confusion among consumers. For example, because QHPs are not required to include pediatric dental coverage in their plans if an SADP is available in their state’s exchange, some officials expressed concern that a consumer who needs the pediatric dental benefit may mistakenly purchase a plan in the exchange without such coverage or, conversely, could have duplicate coverage by purchasing an SADP in addition to a QHP that includes pediatric dental coverage. State officials said that they also expect QHPs to reflect additional PPACA requirements. For example, PPACA requires QHPs to include coverage for the categories of rehabilitative and habilitative services and devices. For benchmark plans that do not cover habilitative services, HHS’s implementing regulations provide three options to comply with the requirement. States can opt to (1) require QHPs to cover habilitative services in parity with rehabilitative services; (2) select specific services that would qualify as habilitative; or (3) if the state chooses neither of the first two options, allow the QHP issuer to determine which services qualify as habilitative. Each of the three selected states that did not cover outpatient therapies for habilitation—Colorado, Kansas, and New York—has opted to require QHPs to cover these services in parity with rehabilitative services.
According to HHS, nationwide data show that in addition to these three states, 19 other states had benchmark plans that did not cover habilitation, and the majority chose to allow the issuers to determine which services would qualify as habilitative. PPACA also eliminates the use of annual and lifetime dollar limits on any EHB services. The elimination of lifetime dollar limits was effective in September 2010 and the elimination of annual limits takes effect in January 2014. Among our five selected states, four states had benchmark plans that imposed an annual dollar limit on at least one of the service categories we reviewed; with limited exception, none of these dollar limits were imposed on EHB services. For example, Kansas’ benchmark plan limited hospice services to $5,000 per insured person per lifetime. In general, state officials indicated that for these services, they expected that QHP issuers would eliminate the dollar limits. PPACA also extends the mental health parity requirements, which require that any lifetime limits placed on mental health or substance abuse services be the same as those placed on physical health care services. The benchmark plans in two selected states—New York and Utah—included such limits on mental health and substance abuse services. For example, both states’ benchmark plans limited inpatient mental health service to 30 days a year, where similar limits did not exist for inpatient physical health services. Officials in both states said that they expected that QHP issuers would eliminate such limits. In our five selected states, consumers’ costs were almost always less in CHIP plans when compared to the states’ benchmark plans. While CHIP officials said that they expect CHIP costs to consumers to remain largely unchanged in 2014, the cost of QHPs to consumers is less certain, since benchmarks are not models for QHP cost-sharing.
Instead, PPACA includes provisions that will standardize QHP costs and reduce cost-sharing for certain individuals. Based on the review of plan Evidences of Coverage in our five selected states, costs to consumers were almost always less in the CHIP plans than in the states’ benchmark plans. For example, the CHIP plans in four of the five selected states did not include any deductibles, which means that enrollees in those states did not need to pay a specified amount before the plan began paying for services. Utah is the only selected state that imposed a deductible on a portion of its CHIP population, which applied to about 60 percent of its CHIP enrollees—those with higher incomes. In contrast, benchmark plans in all five selected states had deductibles, which ranged from $500 in Illinois and Kansas to $3,000 in Utah for an individual, and $1,000 in Kansas to $6,000 in Utah for a family. Our review of plan Evidences of Coverage and information from state and plan officials also found that, for services we reviewed where the plan imposed copayments or coinsurance, the amount was almost always less in a state’s CHIP plan than in its benchmark plan. For example, the CHIP plans in two of our five states—Kansas and New York—did not impose copayments or coinsurance on any of the services we reviewed. In two of the remaining three states, the CHIP plan imposed copayments or coinsurance on less than half of the services we reviewed, and the amounts were usually minimal and based on a sliding income scale. For example, for each brand-name prescription drug, the Illinois CHIP plan imposed a $4 copayment on enrollees with incomes between 134 and 150 percent of the FPL and a $7 copayment on enrollees with incomes between 201 and 300 percent of the FPL. Utah’s CHIP plan differed from the other states’ plans in that it imposed either a copayment or coinsurance on all services we reviewed—except preventive and routine dental services—which varied by income level.
In contrast, the benchmark plans in all five states imposed copayments or coinsurance on most services we reviewed. Further, the amounts did not vary by income level and were consistently higher than the CHIP plan in their respective state. These cost differences were particularly pronounced for certain services we reviewed, such as primary care and specialty physician office visits, prescription drugs, and outpatient therapies. For example, depending on income, the copayment for primary care and specialist physician visits ranged from $2 to $10 per visit for Colorado CHIP enrollees, but was $30 and $50 per visit, respectively, for benchmark plan enrollees in the state. In states where the benchmark plan charged coinsurance and the CHIP plan required a copayment, a direct comparison of cost differences could not be made, although data suggest CHIP costs would generally be lower in most cases. For example, while higher-income CHIP enrollees in Illinois paid $100 per admission for an inpatient hospital stay, state benchmark enrollees were responsible for 10 percent coinsurance after the deductible was met, an amount that was likely to be higher than the $100 given that 10 percent of the average price for an inpatient facility stay in 2011 was over $1,500. Table 2 provides examples of differences in copayments and coinsurance for select services between CHIP and benchmark plans. Our review of CHIP premiums and other sources of premium data suggest that CHIP premiums were also likely lower than benchmark plans. For example, 2013 CHIP annual premiums for an individual varied by income level and ranged from $0 for enrollees under 150, 160, and 100 percent of the FPL in Illinois, New York, and Utah, respectively, to $720 for higher-income enrollees between 351 and 400 percent of the FPL in New York, with most enrollees across the five selected states paying less than $200 a year. 
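The inpatient hospital comparison above can be sketched arithmetically. This is an illustrative check only: the $100 Illinois CHIP copayment and the 10 percent benchmark coinsurance rate are from the report, while the $15,000 average inpatient price is an assumed figure implied by the report's statement that 10 percent of the average 2011 inpatient facility stay exceeded $1,500.

```python
# Illustrative copayment-vs-coinsurance comparison (assumed average price).
chip_copay = 100                 # flat per-admission copayment, Illinois CHIP
coinsurance_rate = 0.10          # benchmark plan coinsurance after deductible
avg_inpatient_price = 15_000     # assumption implied by the report's $1,500 figure

benchmark_cost = coinsurance_rate * avg_inpatient_price
print(benchmark_cost)                 # → 1500.0
print(benchmark_cost > chip_copay)    # → True
```

The sketch illustrates why a percentage-based coinsurance on a high-priced service will typically exceed a small flat copayment, even though the two cannot be compared directly without utilization data.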
Benchmark plan premium data were not readily available at the time of our study; however, national survey data from America’s Health Insurance Plans suggest that individuals under 18 years of age enrolled in the private individual market paid annual premiums that averaged $1,350 in 2009. In addition, both CHIP and benchmark plans in all five states limited the total potential costs to consumers by imposing out-of-pocket maximum costs. For example, all five states applied the limit established under federal law on what a family could pay in CHIP plans—including deductibles, copayments, coinsurance, and premiums—of 5 percent of a family’s income during the child’s (or children’s) eligibility for CHIP. This maximum applies to all services, irrespective of the number of children in the family enrolled. For benchmark plans, out-of-pocket maximum costs were established by each plan. For the five benchmark plans we reviewed, the annual out-of-pocket maximum costs ranged from $1,000 to $6,050 for an individual and $3,000 to $12,100 for a family. Additionally, the benchmark plans differed from the CHIP plans in that their maximum costs did not include premiums and may not have included deductibles or costs associated with all services. For example, three of the five benchmark plans had deductibles in addition to the out-of-pocket maximum costs. Additionally, copayments for office visits did not apply to the out-of-pocket maximum costs in four of the five states’ benchmark plans. Some evidence suggests that most families in the five selected states and nationally—whether enrolled in CHIP or a benchmark plan—were unlikely to incur costs that reached the out-of-pocket maximum costs. Our interviews with CHIP officials in selected states and information in the states’ CHIP annual reports indicated that it was rare for families to exceed their 5 percent maximum costs.
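The federal CHIP aggregate cap described above is a simple percentage of family income. A minimal sketch, using a hypothetical family income of $40,000 (the income figure is an illustration, not from the report):

```python
# Sketch of the federal CHIP aggregate cost-sharing cap: premiums plus
# cost-sharing may not exceed 5 percent of family income during the
# child's eligibility period.
def chip_out_of_pocket_cap(family_income: float) -> float:
    """Aggregate cap on premiums + cost-sharing over the eligibility period."""
    return 0.05 * family_income

cap = chip_out_of_pocket_cap(40_000)  # hypothetical family income
print(cap)  # → 2000.0
```

Unlike the benchmark plans' fixed-dollar maximums, this cap scales with income and counts premiums as well as cost-sharing.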
Utah was the only state that reported having more than a few families exceed the maximum costs—about 140 families in a given year, according to state officials. Similarly, existing national data on average out-of-pocket costs for individuals with employer-sponsored insurance suggested that individuals enrolled in the benchmark plans would also generally incur costs lower than the maximum costs established by their plan. For example, the Health Care Cost Institute, an organization that provides information for researchers on health care spending and utilization trends, reported that the average out-of-pocket amount spent per consumer was $735 in 2011 for health care services through employer-sponsored insurance, which was lower than the lowest maximum costs established by our selected benchmark plans. According to state CHIP officials in all five states, CHIP costs to consumers, including premiums, copayments, coinsurance, and deductibles, are expected to remain largely unchanged in 2014. All five states said they currently have no plans to raise premiums or change cost-sharing amounts in 2014. In contrast, QHP costs to consumers in 2014 may be different than those in the benchmark plans, as benchmarks are not models for QHP cost-sharing. Instead, PPACA included provisions applicable to QHPs that will limit premium variation, standardize plan values, and limit out-of-pocket costs. For example, PPACA will limit premium variation in the individual market by prohibiting health plans from adjusting QHP premiums based on factors such as health status and gender. Instead, plans will only be allowed to adjust premiums for family size, geographic area, age, and tobacco use. PPACA standardizes plan values through QHP coverage level requirements.
Specifically, QHPs must offer coverage that meets one of four metal tier levels, which correspond to actuarial value percentages that range from 60 to 90 percent: bronze (an actuarial value of 60 percent), silver (an actuarial value of 70 percent), gold (an actuarial value of 80 percent), or platinum (an actuarial value of 90 percent). Actuarial value indicates the proportion of allowable charges that a health plan will pay, on average—the higher the actuarial value, the lower the cost-sharing expected to be paid by consumers. Deductibles, copayments, and coinsurance amounts can vary within these plans, as long as the overall cost-sharing structure meets the required actuarial value levels. PPACA establishes out-of-pocket maximum costs on cost-sharing that apply to all QHPs and vary by income, a change from the non-income-based out-of-pocket maximum costs found in our selected benchmark plans. These maximums for individual plans do not include premiums or costs associated with non-EHB services, but do include deductibles. See table 3. SADPs have out-of-pocket maximum costs that are in addition to the QHP maximums described above and therefore may increase potential maximum costs for families who purchase them. For 2014, the out-of-pocket maximum costs for SADPs offered in federally facilitated exchanges and state partnership exchanges are $700 for a plan with one child or $1,400 for a plan with two or more children. For example, a family at 225 percent of the FPL that enrolls their two children in an SADP in addition to their QHP would be subject to an out-of-pocket maximum cost of $11,800. Additionally, PPACA includes provisions aimed at reducing cost-sharing amounts for certain low-income consumers and eligible Indians who purchase QHPs through an exchange in the individual market.
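The $11,800 example above is the sum of two separate maximums. A minimal sketch of that arithmetic: the $1,400 SADP family maximum is from the report; the $10,400 QHP family maximum for a family at 225 percent of the FPL is an assumed figure implied by the report's example (the income-tiered QHP maximums appear in the report's table 3, which is not reproduced here).

```python
# Combined out-of-pocket exposure for a family buying both a QHP and an SADP.
sadp_family_max = 1_400            # SADP maximum, two or more children (2014)
qhp_family_max_225_fpl = 10_400    # assumed: implied by the report's $11,800 example

combined_max = qhp_family_max_225_fpl + sadp_family_max
print(combined_max)  # → 11800
```

Because the SADP maximum sits on top of the QHP maximum rather than inside it, families who purchase stand-alone dental coverage face a higher total ceiling than QHP-only enrollees.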
For example, PPACA and federal regulations provide cost-sharing subsidies to individuals with incomes between 100 and 250 percent of the FPL to offset the costs they incur through copayments, coinsurance, and deductibles in a silver-level QHP. The cost-sharing subsidies will not be provided directly to consumers; instead, QHP issuers are required to offer three variations of each silver plan they market through an exchange in the individual market. These plan variations are to reflect the cost-sharing subsidies through lower out-of-pocket maximum costs and, if necessary, through lower deductibles, copayments, or coinsurance. Once the adjustments are made, the actuarial value of the silver plan available to eligible consumers will effectively increase from 70 percent to 73, 87, or 94 percent, depending on their income levels. However, cost-sharing subsidies are not available for pediatric dental costs incurred by a consumer enrolled in a QHP and an SADP. PPACA also provides a premium tax credit to eligible individuals with incomes that are at least 100 percent and no more than 400 percent of the FPL when purchasing a plan with a premium no more than that of the second-lowest-cost silver plan in their state. Depending on their income, this provision limits the amount families must contribute to QHP premiums to between 2 and 9.5 percent of their annual income; in 2014 these premium contributions will range from $471 to $8,949 for a family of four. Unlike cost-sharing subsidies, which generally do not apply to costs incurred for services by a consumer enrolled in an SADP, the maximum contribution amount on premiums includes premiums for both QHPs and SADPs, if relevant. When asked a series of questions about access to care, MEPS respondents with children covered by CHIP reported positive responses to nearly all questions regarding their ability to obtain care and at levels that were generally comparable to those with other types of insurance.
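The $471-to-$8,949 contribution range cited above follows from applying the 2 and 9.5 percent limits at the bottom and top of the eligible income band. A minimal sketch, assuming the 2013 federal poverty guideline of $23,550 for a family of four (the guideline figure is an assumption; the percentage limits are from the report):

```python
# Arithmetic behind the premium-contribution range for a family of four.
fpl_family_of_four = 23_550   # assumed 2013 federal poverty guideline

# 2 percent of income at 100 percent of the FPL (lowest eligible income)
low_end = 0.02 * (1.00 * fpl_family_of_four)
# 9.5 percent of income at 400 percent of the FPL (highest eligible income)
high_end = 0.095 * (4.00 * fpl_family_of_four)

print(round(low_end), round(high_end))  # → 471 8949
```

The match with the report's $471 and $8,949 figures suggests the contribution limits were computed against the poverty guideline in exactly this way.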
MEPS includes questions about respondents’ ability to obtain care, and responses to these questions can provide insight into an individual’s access to services. In examining questions related to having a usual source of care, getting appointments and care when needed, and accessing care, tests, or treatment or seeing specialists when needed, most respondents with children enrolled in CHIP had positive responses to questions for MEPS calendar years 2007 through 2010. Specifically, on five of the six questions we analyzed related to respondents’ ability to obtain care, at least 88 percent of CHIP enrollees responding reported they had a usual source of care and usually or always got the care they needed. When compared to respondents with other sources of insurance, the proportion of CHIP enrollees with positive responses to these questions was, for most questions, comparable to respondents with Medicaid or with private insurance—that is, within 5 percentage points. For example, about 89, 91, and 93 percent of CHIP, Medicaid, and privately insured respondents, respectively, reported that they had a usual source of care. The proportions of CHIP enrollees and those who were uninsured reporting positive responses were also within 5 percentage points on four of the six questions, but the differences were larger for the remaining two questions. Specifically, about 56 percent of those who were uninsured reported having a usual source of care compared to about 89 percent of CHIP enrollees, and about 75 percent of those who were uninsured reported that it was usually or always easy to see a specialist compared to about 81 percent of CHIP enrollees. The area of greatest dissatisfaction appeared to be related to ease in seeing a specialist: approximately 18 percent of CHIP enrollees reported that it was sometimes or never easy to see a specialist. (See table 5.)
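The "within 5 percentage points" comparability rule used above can be sketched as a simple threshold test; the function name and threshold default are illustrative, and the example proportions are the usual-source-of-care figures reported in the paragraph.

```python
# Sketch of the comparability rule: response proportions within
# 5 percentage points of each other are treated as comparable.
def comparable(pct_a: float, pct_b: float, threshold: float = 5.0) -> bool:
    return abs(pct_a - pct_b) <= threshold

print(comparable(89, 93))  # CHIP vs. privately insured → True
print(comparable(89, 56))  # CHIP vs. uninsured → False
```

A fixed percentage-point band like this is a descriptive screen only; the report's regression analysis (discussed later in the section) tests whether such differences persist after controlling for other factors.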
Additional MEPS questions related to respondents’ use of certain medical and dental visits also provide insight into respondents’ access to services and suggest that, for most services, access to care for individuals covered by CHIP is comparable to that of those with Medicaid and lower than that of the privately insured, particularly for dental care. MEPS questions ask about respondents’ health care visits, including office-based health provider, emergency room, and dental visits, in the year prior to the survey. Respondents with children in CHIP reported using services at rates generally comparable to those with Medicaid and lower—except for emergency room visits, which were higher—than those with private health insurance, particularly for oral health care. A higher proportion of CHIP respondents reported using health care services compared to those who were uninsured. For example, about 51 percent of those with private insurance reported visiting a dentist in the past 12 months compared to about 42 percent of CHIP respondents. Additionally, 69 percent of CHIP respondents reported having an office-based provider visit compared to about 50 percent of respondents who were uninsured. (See table 6.) Because factors other than insurance coverage may affect these observed differences in responses about obtaining care or utilization of health care services, we ran a logistic regression to determine whether differences between CHIP respondents and those with other sources of insurance coverage were significant after controlling for other factors, such as age, race, and income levels. (See app. I for more detailed information on our model and results.)
After controlling for these factors, we found that differences between CHIP and Medicaid responses were not statistically significant for any of the 12 questions we reviewed, and that the differences between CHIP and privately insured respondents were statistically significant for 4 questions, which related to respondents’ reported use of emergency rooms, dentist visits, orthodontist visits, and their reported ease in getting needed care, tests, or treatment. CHIP-covered individuals were more likely to report emergency room visits and visits to a general dentist, and less likely to report orthodontist visits and ease in getting needed care than the privately insured. More pronounced differences in reported access existed between CHIP enrollees and those who were uninsured. When comparing CHIP to the uninsured, differences in responses were statistically significant for 8 of the 12 questions we reviewed. Congress, HHS, and the states have important decisions to make regarding the future of CHIP. Congress will face decisions concerning CHIP funding, as current funding has been appropriated only through federal fiscal year 2015. The Secretary of HHS will face decisions around the parameters by which QHPs offered by exchanges can be considered comparable to CHIP plans. Beginning in October 2015, if CHIP funding is insufficient, states will need to have procedures in place to enroll CHIP-eligible children in Medicaid, if eligible, and, if not, in QHPs, as long as the Secretary of HHS has certified that the QHPs are comparable to CHIP in covered services and cost-sharing protections. Although state officials in the five states we reviewed expect the CHIP landscape to remain relatively stable over the next year, uncertainty remains regarding issuer decisions and the implementation of other PPACA provisions. This uncertainty complicates making a definitive determination of what CHIP enrollees would face if they were to obtain QHP coverage rather than be enrolled in CHIP.
To some extent, coverage and costs in QHPs will be determined by individual states, issuers, and families’ choices. For example, individual issuers of QHPs in many states will define the habilitative services they cover and the limits on services they cover, including ones that are required under PPACA but that they may not have previously covered. In many states, families seeking coverage through exchanges will be allowed to choose whether to obtain pediatric dental coverage by enrolling in a stand-alone dental plan, which will affect upfront and other costs they face. Yet some—or many—families may choose not to purchase dental coverage, which all CHIP plans must provide. PPACA provisions, which seek to standardize QHP costs and reduce cost-sharing for certain individuals, could narrow the cost gap we identified, but their effects will vary by consumers’ income level and plan selection. Assessing the comparability of CHIP and QHP plans will require ongoing monitoring of a complex array of factors. We provided a draft of this report for comment to HHS. HHS officials provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Katherine Iritani at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
To describe how access to care for CHIP children compares to other children, we analyzed data from the Medical Expenditure Panel Survey (MEPS), a nationally representative survey that collects data from a sample of non-institutionalized Americans on their health insurance status and service utilization, among other factors. MEPS is administered by the Department of Health and Human Services’ (HHS) Agency for Healthcare Research and Quality (AHRQ) and collects information from respondents on many topics, including demographic characteristics, insurance status, health conditions, and their use of specific health services. We analyzed results from the MEPS household component, which collects data from a sample of families and individuals in selected communities across the United States and is drawn from a nationally representative subsample of households that participated in the prior year’s National Health Interview Survey. The MEPS household component features five rounds of interviews, which occur over two full calendar years. MEPS collects information for each person in the household, and information is generally provided by an adult member of the household. We did not include questions that focused on the quality of care received. We used states’ income eligibility rules to identify which respondents were eligible for CHIP versus Medicaid. To ensure we had a large enough sample size for our analysis of CHIP-eligible respondents, we included respondents who were continuously enrolled in CHIP for at least 8 months, and we analyzed responses from respondents enrolled in CHIP, Medicaid, or private insurance for at least 8 months or who were uninsured at least 8 months out of the year. In addition, we pooled MEPS survey results from 2007 through 2010, the most recent, complete MEPS data available at the time of our analysis, and combined response choices for some of the MEPS questions.
For example, some questions had several response choices, such as “always,” “usually,” “sometimes,” or “never.” We combined the four response choices into two (e.g., “usually or always” and “sometimes or never”). Despite these efforts, eight questions that we originally selected for analysis were excluded because of an insufficient number of responses. Nine additional questions were excluded because we determined they were redundant of other questions. As a result, our analyses focused on 12 MEPS questions: 6 questions asked about respondents’ experiences obtaining care, and 6 questions asked about their utilization of specific services. (See table 7.) Because factors other than insurance coverage—such as income, parent education, and family composition—may affect access to care, we also ran a multivariate logistic regression analysis of responses to these 12 questions. Based on the literature and in consultation with experts at AHRQ and the Urban Institute, an organization that has conducted past research on access to care using MEPS data, we identified a number of factors in addition to insurance that could influence access to care and constructed logistic regression models to control for the effects of these factors on our results. The factors we included were age, race, income, total number of parents in the household, parent education, family size, health status, mental health status, children with special needs, total number of workers in the household, metropolitan statistical area, sex, whether the respondent was born in the United States, and English versus non-English speakers. We then tested whether there was a statistically significant difference in the effect of enrollment in CHIP versus other types of insurance coverage on responses to questions about access to care after controlling for these factors.
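A minimal sketch of these two steps, collapsing a four-point response into a binary outcome and then fitting a logistic model with a control variable, is shown below on synthetic data. It is not GAO's actual analysis: it omits survey weights and most of the controls listed above, and it uses a hand-rolled Newton-Raphson fit so the example is self-contained.

```python
# Illustrative sketch only (not GAO's actual analysis): collapse a
# four-point MEPS-style response into a binary outcome, then fit a
# logistic regression on synthetic data with one control variable.
import numpy as np

# Step 1: "usually"/"always" vs. "sometimes"/"never".
COLLAPSE = {"always": 1, "usually": 1, "sometimes": 0, "never": 0}

# Step 2: Newton-Raphson (IRLS) fit of a logistic model.
def fit_logit(X, y, iters=25):
    """Return coefficient estimates for P(y=1) = sigmoid(X @ beta)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                       # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic data: a CHIP indicator plus a single hypothetical control.
rng = np.random.default_rng(0)
n = 500
chip = rng.integers(0, 2, n).astype(float)  # 1 = CHIP, 0 = comparison
age = rng.uniform(0, 18, n)                 # hypothetical control
true_logit = 0.5 + 1.0 * chip - 0.05 * age
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
X = np.column_stack([np.ones(n), chip, age])
beta = fit_logit(X, y)  # beta[1] estimates the CHIP effect
```

With standard errors taken from the inverse of the final Hessian, one could then test whether the coefficient on the CHIP indicator differs significantly from zero after controlling for the other factors, which is the kind of comparison the report's models made.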
For 9 of the 12 questions in our analysis, there were statistically significant differences between CHIP and certain comparison groups after controlling for other factors. To determine the reliability of the MEPS data, we reviewed related documentation, identified other studies, including our prior reports, that used MEPS data to address similar research questions, and consulted researchers at AHRQ and the Urban Institute about our analysis. We determined that the MEPS data were sufficiently reliable for the purposes of our report. However, there were several limitations to our analysis. First, to separate CHIP and Medicaid respondents, we relied on state CHIP and Medicaid income eligibility and income disregard rules reported by Kaiser from 2007 through 2010, and did not independently verify these data. In addition, the information available from Kaiser on each state’s income disregard rules was limited and had not been uniformly updated since 2008. Therefore, to account for potential gaps in information, we applied the income disregard rules from the 2008 Kaiser report to MEPS results from 2007 and 2008, and applied unverified 2010 income disregard rules from Kaiser to MEPS results from 2009 and 2010. When discrepancies between the 2008 and 2010 Kaiser data existed, we contacted states for clarification. In the event we could not verify the change in income disregard rules, which was the case with two states, we applied the 2008 income disregard rules to MEPS survey results for all 4 years of our analysis, 2007 through 2010. In addition, our analysis did not account for earnings disregards related to child care expenses, child support paid, or child support received; therefore, the groups we identified as Medicaid- or CHIP-eligible may be understated. Further, our analysis also did not account for income-ineligible respondents.
Therefore, there may be some overlap between Medicaid and CHIP respondents, or under-reporting of CHIP respondents. Finally, because our analyses reflect an eight-month period of enrollment or uninsurance, the responses may not precisely align with the respondents’ current health insurance status, particularly because several MEPS questions refer to respondents’ experiences and utilization over the prior 12 months. Rehabilitation is provided to help a person regain, maintain, or prevent deterioration of a skill that has been acquired but then lost or impaired due to illness, injury, or a disabling condition. While PPACA and its implementing regulations do not define habilitative services, habilitation has been defined by several advocacy groups as a service that is provided in order for a person to attain, maintain, or prevent deterioration of a skill or function never learned or acquired due to a disabling condition. Tables 8 through 12 provide information on copayments, coinsurance, and annual coverage limits for selected services in the State Children’s Health Insurance Program (CHIP) and benchmark plans in each of the five states we reviewed: Colorado, Illinois, Kansas, New York, and Utah. States’ CHIP and benchmark plans may also include a deductible, which was the case for all five states’ benchmark plans and one state’s CHIP plan. For all five states, cost-sharing for individuals and families was also subject to an out-of-pocket maximum cost. For CHIP enrollees, the out-of-pocket maximum cost amount was applied by the plans as established by federal statute, limited to 5 percent of a family’s income, and included all consumer costs, including premiums. For the benchmark plans, the out-of-pocket maximum cost was established by each issuer, did not include premium costs, and was sometimes in addition to the deductible.
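The federal cost-sharing cap on CHIP described above works out to simple arithmetic. The sketch below uses hypothetical dollar figures and mirrors the distinction that CHIP counts premiums toward the cap while the benchmark-plan maximums did not.

```python
# Illustration, with hypothetical dollar figures, of the out-of-pocket
# rules described above: CHIP caps total consumer costs (including
# premiums) at 5 percent of family income by federal statute, while
# benchmark-plan maximums were issuer-set and excluded premiums.

CHIP_CAP_RATE = 0.05  # 5 percent of family income

def chip_out_of_pocket_max(family_income):
    """Annual ceiling on a CHIP family's total consumer costs."""
    return CHIP_CAP_RATE * family_income

def chip_costs_within_cap(family_income, premiums, cost_sharing):
    """CHIP counts premiums plus all other cost-sharing toward the cap."""
    return premiums + cost_sharing <= chip_out_of_pocket_max(family_income)

# A family earning $40,000 may owe at most $2,000 in total CHIP costs.
cap = chip_out_of_pocket_max(40_000)
```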
In addition to the contact named above, Susan T. Anthony, Assistant Director; Carolyn Fitzgerald; Toni Harrison; Laurie Pachter; Teresa Tam; and Hemi Tewarson made key contributions to this report. Children’s Mental Health: Concerns Remain about Appropriate Services for Children in Medicaid and Foster Care. GAO-13-15. Washington, D.C.: December 10, 2012. Medicaid: States Made Multiple Program Changes, and Beneficiaries Generally Reported Access Comparable to Private Insurance. GAO-13-55. Washington, D.C.: November 15, 2012. Children’s Health Insurance: Opportunities Exist for Improved Access to Affordable Insurance. GAO-12-648. Washington, D.C.: June 22, 2012. Medicaid and CHIP: Most Physicians Serve Covered Children but Have Difficulty Referring Them for Specialty Care. GAO-11-624. Washington, D.C.: June 30, 2011. Medicaid and CHIP: Given the Association between Parent and Child Insurance Status, New Expansions May Benefit Families. GAO-11-264. Washington, D.C.: February 4, 2011. Oral Health: Efforts Under Way to Improve Children’s Access to Dental Services, but Sustained Attention Needed to Address Ongoing Concerns. GAO-11-96. Washington, D.C.: November 30, 2010. Medicaid: State and Federal Actions Have Been Taken to Improve Children’s Access to Dental Services, but Gaps Remain. GAO-09-723. Washington, D.C.: September 30, 2009.
More than 8 million children were enrolled in CHIP--the federal and state children's health program that finances health care for certain low-income children--in 2012. PPACA appropriated funding for CHIP through federal fiscal year 2015. Beginning in October 2015, any state with insufficient CHIP funding must establish procedures to ensure that children who are not covered by CHIP are screened for Medicaid eligibility, and if ineligible, are enrolled into a QHP that has been certified by the Secretary of Health and Human Services (HHS) as comparable to CHIP. Exchanges are marketplaces for QHP coverage effective in 2014. GAO was asked to review issues related to CHIP. This report provides a baseline comparison of coverage and costs to consumers in separate CHIP plans and benchmark plans in select states; describes how coverage and costs might change in 2014; and describes how access to care by CHIP children compares to other children nationwide. For the coverage and cost comparison, GAO reviewed Evidences of Coverage from separate CHIP plans and benchmark plans (base and supplemental) from five states--Colorado, Illinois, Kansas, New York, and Utah--selected based on variation in location, program size, and design. GAO reviewed documents and spoke to officials from states' CHIP programs, exchanges, and benchmark plans, and from the Centers for Medicare & Medicaid Services. To describe access to care by children in CHIP compared to others with Medicaid, private insurance or without insurance, GAO analyzed nationwide data from HHS's MEPS from 2007 through 2010. In five selected states, GAO determined that the separate State Children's Health Insurance Program (CHIP) plans were generally comparable to the benchmark plans selected by states in 2012 as models for the benefits that will be offered through qualified health plans (QHP) in 2014. 
The plans were comparable in the services they covered and the services on which they imposed limits, although there was some variation. For example, in coverage of hearing and outpatient therapy services, the benchmark plan in one of the five states--Kansas--did not cover hearing aids or hearing tests, while the CHIP plans in all states covered at least one of these services. Similarly, two states' CHIP plans and three states' benchmark plans did not cover certain outpatient therapies--known as habilitative services--to help individuals attain or maintain skills they had not learned due to a disability. States' CHIP and benchmark plans were also similar in terms of the services on which they imposed day, visit, or dollar limits. Plans most commonly imposed limits on outpatient therapies and pediatric dental, vision, and hearing services. Officials in all five states expect that CHIP coverage, including limits on these services, will remain relatively unchanged in 2014, while QHPs offered in the exchanges will be subject to certain Patient Protection and Affordable Care Act (PPACA) requirements, such as the elimination of annual dollar limits on coverage for certain services. Consumers' costs for these services--defined as deductibles, copayments, coinsurance, and premiums--were almost always less in the five selected states' CHIP plans when compared to their respective benchmark plans. For example, the CHIP plans in the five states typically did not include deductibles, while all five states' benchmark plans did. Similarly, when cost-sharing applied, the amount was almost always less for CHIP plans, and the cost difference was particularly pronounced for physician visits, prescription drugs, and outpatient therapies. For example, an office visit to a specialist in Colorado would cost a CHIP enrollee $2 to $10 per visit, depending on income, compared to $50 per visit for benchmark plan enrollees.
GAO's review of premium data further suggests that CHIP premiums are also lower than benchmark plans' premiums. While CHIP officials in the five states expect consumer costs to remain largely unchanged in 2014, the cost of QHPs to consumers is less certain. These plans were not yet available at the time of GAO's review. However, PPACA includes provisions that seek to standardize QHP costs or reduce cost-sharing amounts for certain individuals. When asked about access to care in the national Medical Expenditure Panel Survey (MEPS), CHIP enrollees reported positive responses regarding their ability to obtain care, and the proportion of positive responses was generally comparable to that of children with Medicaid--the federal and state program for very low-income children and families--or with private insurance. Regarding use of services, the proportion of CHIP enrollees who reported using certain services was generally comparable to that of Medicaid enrollees but differed from that of privately insured children for certain services. Specifically, a higher proportion of CHIP enrollees reported using emergency room services, and a lower proportion of CHIP enrollees reported visiting dentists and orthodontists. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Microenterprise development and its primary component, microfinance, emerged in the 1980s. Microfinance—the supply of micro loans, savings, and other financial services to the poor—has operated on the premise that the poor will invest loans in microenterprises, repaying the loans out of profits, and the enterprises will grow, potentially lifting large numbers of people out of poverty. Microfinance practitioners have demonstrated that the poor will repay loans on time and that, despite high transaction costs, micro loans can support financially viable lending institutions. Through the 1990s, a wide variety of institutions operating in many countries have used this model, adapting it to local conditions. By USAID’s definition, a microenterprise consists of a poor owner- operator and nine or fewer workers. Microentrepreneurs—typically small shopkeepers, craftspeople, and vendors—face a range of impediments to improving their standard of living. Most rarely have sufficient collateral to meet loan requirements at traditional banks; according to one report, only 2.5 percent of potential microenterprise operators have access to financial services other than moneylenders. USAID’s microenterprise development program focuses on three key areas: helping establish and fund MFIs to provide loans and other financial services to the poor and very poor, funding business development services (BDS) to help improve the business skills of microentrepreneurs and develop markets for microenterprises, and advocating for host government policy reforms to enhance microenterprise development. USAID has been the leading bilateral donor of funds and technical assistance to microenterprise development projects since 1988, when it began formally tracking funding. USAID provides funding through nongovernmental organizations, contractors, and, occasionally, government organizations that implement the client-level activities. 
USAID reported that it committed $158.7 million to microenterprise development activities in fiscal year 2001, compared with $164.3 million in fiscal year 2000. Almost two-thirds of USAID’s microenterprise development obligations in fiscal year 2001 were directed to providing financial services, principally for creating and strengthening existing credit and savings institutions; the other third went to BDS and policy activities. Figure 1 shows an overview of USAID’s microenterprise program, highlighting its microfinance component. In fiscal year 2001, USAID-assisted institutions served more than 2.8 million loan clients and more than 3.5 million savings clients. USAID-assisted institutions also provided BDS to more than 800,000 microenterprise clients, including market research, new product development and testing, technology development, business counseling, and marketing assistance. USAID’s microenterprise development program, which is highly decentralized, funded projects in 52 countries in 2001. USAID’s missions design, implement, and monitor the microenterprise projects, obligating about 80 percent of the program’s funding. In Washington, D.C., the Microenterprise Development Division provides policy guidance and manages a number of grants and, along with other USAID offices, provides about 20 percent of the agency’s microenterprise funding. The division also provides technical support for the missions and conducts research on microenterprise issues. USAID takes a collaborative approach in its microenterprise development program. For example, USAID policy is to identify, support, and strengthen existing MFIs with established performance records to help meet its microenterprise objectives. It also funds studies of the MFIs it has supported, to assess impacts and to identify best practices for both USAID and the entire microenterprise industry. One major USAID research project, Assessing the Impact of Microenterprise Services (AIMS), was initiated in 1995. 
Most of the AIMS studies focused on a single country or activity. However, two recent AIMS studies, respectively, examined USAID- and non-USAID-funded microenterprise activities in seven countries and synthesized key findings from a number of other studies. USAID’s microenterprise development program has targeted the poor and the very poor (see fig. 2). The poor are those whose annual income is at or below the poverty line as defined by the host country. The very poor are those with an annual income 50 percent or more below the poverty line as established by the government of the country. In addition, the vulnerable nonpoor, who also receive microenterprise assistance, are those whose annual income is just above the poverty line. Since 1994, USAID’s policy has been to devote half of its microenterprise development resources to activities targeting the very poor. In 2000, this policy was established as law by the Microenterprise for Self-Reliance and International Anti-Corruption Act of 2000, which required that 50 percent of all microenterprise resources be targeted to very poor entrepreneurs. From 1994 until the implementation of the 2000 legislation, USAID defined loans to the very poor, or “poverty loans,” as those with an average balance of $300 or less per borrower (in 1994 dollars). The 2000 legislation established the loan level at $1,000 or less for Europe and Eurasia, $400 or less for Latin America, and $300 or less in the rest of the world. USAID annually collects data on its microenterprise program through the MRR. (See app. IV for a discussion of the MRR methodology.) Data are collected through surveys of USAID staff in headquarters, overseas missions, and institutions that receive USAID funding.
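The statutory loan-size test described above can be expressed as a small lookup. The region keys below are illustrative simplifications of the act's categories, not terms from the statute.

```python
# Sketch of the loan-size test in the Microenterprise for Self-Reliance
# and International Anti-Corruption Act of 2000, as described above.
# Region keys are illustrative simplifications of the act's categories.

POVERTY_LOAN_THRESHOLDS = {
    "europe_eurasia": 1_000,  # $1,000 or less
    "latin_america": 400,     # $400 or less
}
DEFAULT_THRESHOLD = 300       # $300 or less in the rest of the world

def is_poverty_loan(region, loan_amount):
    """True if the loan counts toward the 50 percent 'very poor' target
    under the act's regional size thresholds (in U.S. dollars)."""
    return loan_amount <= POVERTY_LOAN_THRESHOLDS.get(region, DEFAULT_THRESHOLD)
```

As the report notes, loan size is easy to administer, which is one reason this proxy persisted even though it is a weak indicator of borrower poverty.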
USAID staff provide the data on funding obligations, and implementing institutions provide programmatic data on number and gender of clients, number of loans, amount of lending to the very poor, MFI sustainability self-assessments, BDS services provided, and policy initiatives. In addition to the USAID-supported studies, a large body of published and unpublished work has been generated by what is commonly referred to as the microfinance industry. This research includes empirical studies, a broad array of theoretical analyses, and materials on best practices. However, because the industry is relatively new, large gaps exist in important research areas. For example, experts generally agree that reliable and valid impact assessments are lacking or limited in scope. (See app. III for a review of 22 microenterprise studies.) USAID’s microfinance activities have achieved some of the agency’s key program objectives but made limited progress toward others. First, microfinance can help to alleviate the effects of poverty among participants. However, there is less evidence that microfinance has lifted or kept large numbers above the poverty line as established by host countries. Second, USAID’s program has generally reached the poor but not the very poor. Third, the program appears to have succeeded in encouraging the participation of women through micro loans. Finally, USAID has emphasized the importance of developing sustainable MFIs and has made some progress toward this goal. Our fieldwork and review of USAID-funded studies and research literature showed that microfinance can alleviate some of the impacts of poverty among recipients. USAID officials, implementing partners, and borrowers also concurred that it can help to mitigate some effects of poverty. Most borrowers said that the microfinance institutions were their only formal source of credit or savings and stated that the program helped improve their lives, incomes, businesses, and sense of self-worth. (See fig. 3.)
The broader academic studies of microfinance we reviewed, as well as the experts we consulted, generally agreed that micro loans can help poor individuals, workers, and their families cope with personal and economic shocks, such as illness or death of family members, and manage risks associated with living in poverty. In addition, USAID officials noted that microfinance projects are demand driven, put money directly into the hands of the borrowers, and generally have loan repayment rates close to 100 percent. The experts with whom we spoke agreed that microfinance programs can have other benefits, such as helping to build a nation’s financial sector. The studies we reviewed and experts we consulted also posited that microfinance can help increase microentrepreneurs’ working capital, thereby enhancing their household income. However, positive impacts of specific microfinance projects have been limited in scope and have varied according to economic, social, and market conditions as well as the design and aims underlying particular programs. USAID-funded impact studies in two countries we visited, Peru and Egypt, found some positive effects from microfinance. For example, the study in Peru found that the program helped increase assets at the microenterprise level; it also noted some positive effects at the household and individual levels. According to the study, economic recession in Peru between 1997 and 1999 may have limited the effects of the small loans. A study in Egypt found that a microfinance program offering group lending to poor women helped its clients establish and expand their businesses. The loans also enabled them to improve their standard of living by contributing to the household budget, renovating their homes, and providing their children with a better education through private tutors. (See fig. 4.)
Although microfinance can help alleviate some of the impacts of poverty, the research literature does not show conclusively that it has lifted large numbers of people above the poverty line. In our review, we identified two studies on this issue. The 2002 AIMS study—examining microfinance projects in three countries, two of which were USAID supported—found that “there was no dramatic movement of client households out of poverty over the two-year span of the study.” The second study analyzed the three major microfinance programs in Bangladesh—two of which received USAID funding in fiscal year 1998—and found that about 5 percent of program participants per year rose above the poverty line during the period covered by the study. The experts we consulted generally agreed that although microfinance can help to alleviate some of poverty’s impacts, too few long-term studies have been conducted to determine whether microfinance can lift, and keep, significant numbers of clients above the poverty line. These experts also emphasized that because the poverty line is a problematic and somewhat artificial measure, most impact studies have not focused on estimating the number of borrowers who cross and remain above it. The experts and practitioners we interviewed and whose work we reviewed now generally conclude that microfinance alone is not sufficient to lift large numbers of people out of poverty. The challenges the poor face—limited education, few opportunities, legal and cultural barriers—are difficult to overcome with micro loans. Moving out of poverty usually requires a combination of strategies by different household members, and, according to a USAID program official, “backsliding is possible and even frequent.” Although the agency’s microfinance activities serve the poor, they generally appear not to reach the very poor, according to our review of USAID studies and the research literature. 
In addition, as mandated by the Microenterprise for Self-Reliance and International Anti-Corruption Act of 2000, the agency uses small loan size as an indicator of loans to the very poor; however, this is now generally considered an inadequate measure of success in reaching that population. Moreover, some evidence suggests that micro loans can have unintended negative consequences among very poor borrowers. Finally, meeting the requirement of targeting 50 percent of microenterprise resources to the very poor could hamper MFI sustainability. USAID studies and other research literature on microfinance show that microfinance activities serve those clustered just above and below the poverty line but generally do not reach the very poor. According to the 2002 USAID-funded AIMS study, based on work in three countries, both the vulnerable nonpoor and the poor participate in the program, with the very poor making limited use of USAID-supported microfinance services. The 2000 AIMS study reported that in the projects studied in four countries, the majority of clients were poor, followed by the vulnerable nonpoor. This study also found that approximately 40 percent of USAID microfinance clients in Bangladesh were very poor but that in Bolivia, the Philippines, and Uganda, the number of very poor ranged from “almost none” to “some,” although it did not quantify the precise numbers. In addition, the 2000 AIMS study noted that 20 other microfinance impact studies had found limited participation by the very poor. The broader literature on microfinance confirms that the microfinance industry has reached the poor and vulnerable nonpoor but relatively few of the very poor. For example, one widely cited study found that microfinance lenders in Bolivia tended to serve those near the poverty line, not the very poor. 
During our fieldwork, representatives from USAID and their implementing partners told us, based on their experience with the program, that few loans went to the very poor—a finding generally consistent with academic studies of projects in other countries. USAID officials in the countries we visited said that the very poor rarely take out loans because they may lack the economic opportunities to repay the loans and are reluctant to increase their debt levels. According to the 2000 AIMS study, not enough information is available to determine whether (1) the very poor choose not to borrow to avoid additional debt; (2) MFI staff disqualify the very poor because of concern over their ability to repay the loans; or (3) other types of loans and services, such as savings or insurance, would better meet the needs of the very poor. Although most MFIs use small loan size as an indicator of loans to the very poor (as mandated in the 2000 act), in practice this is an inadequate method. It is based on the assumption that small loans appeal only to the very poor, and it is widely used in part because it is easy to administer. However, many practitioners, including USAID, now generally consider loan size an inadequate indicator of clients’ level of poverty. In June 2003, legislation was enacted amending the Microenterprise for Self-Reliance and International Anti-Corruption Act of 2000 to ensure the development of more precise poverty measurement tools. The amendments required USAID to develop, test, and certify at least two low- cost methods for determining recipients’ poverty level by October 1, 2004, and begin using one of the methods by October 1, 2005. The amendments also expanded the definition of the very poor to specifically include those living on less than $1 per day. 
Although some evidence suggests that micro loans may help alleviate the impacts of poverty, evidence also suggests that in some cases these loans may affect very poor borrowers more negatively than positively and may be more effective in combination with complementary services. Within the microfinance industry, little consensus exists about the effectiveness of micro loans to the very poor. USAID officials in the countries we visited stated that economic and social impediments in those countries often make loan repayment difficult for the very poor. In Peru, a representative of a large U.S.-based implementing partner told us that her organization typically does not lend to the very poor, considering social services, not loans, more appropriate for that population. In Egypt, one of the largest USAID-supported MFIs said that it has started a separate grant program to reach the very poor. USAID officials we visited in Bulgaria said that the poor were more able than the very poor to expand their enterprises and, as a result, to hire the very poor. In addition, a USAID program official stated that microfinance might not always be an appropriate intervention for the very poor, since they often cannot use the loans productively. Some research also indicates that micro loans alone may not be an appropriate assistance mechanism for people below a certain level of poverty because such loans may increase borrowers’ debt to unmanageable levels. Other research has attempted to show that with a strong commitment to reaching the very poor, and with a well-targeted program attuned to the needs of very poor clients, microfinance can have positive impacts. At the same time, recent studies suggest that to reach the very poor, the microfinance industry needs to move beyond loans and offer the very poor other services, such as savings and insurance.
For example, a 2002 strategic evaluation of the Consultative Group to Assist the Poorest (CGAP) stated that savings may be the most important financial service for the very poor, since it provides a way to accumulate money without risking debilitating indebtedness. In addition, the 2002 AIMS report and other research indicated that, because of difficulties in reaching the very poor with micro loans and the potential for indebtedness, there is a need to expand the type of products or assistance targeted to this group. These products can include savings, insurance, and money transfer services; nonfinancial business development services; and reforms of key policies, programs, institutions, and regulations that can affect the very poor. Last, a 2003 CGAP publication states that donor funding for microfinance should complement, not substitute for, investments in core services, like health, education, and infrastructure, a view that also reflects USAID’s policy, according to agency officials. USAID officials stated that implementing the requirement that 50 percent of funds be targeted to the very poor, based on the loan sizes set by the 2000 act, could make individual MFI sustainability more difficult to achieve. Officials at the missions we visited said that their primary objective was to develop sustainable MFIs. In Bulgaria, officials with USAID and its implementing partners, Catholic Relief Services (CRS) and Opportunity International, said that imposing this requirement on individual MFIs could create unsustainable institutions, because managing a high percentage of small loans would increase costs associated with servicing these loans. The mission in Egypt, which began its microfinance program in 1988, did not offer poverty lending until 1999, when it judged that the institutions it supported were financially viable and stable enough to begin making such loans.
The research literature we reviewed also indicates that MFIs that are considered financially sustainable generally do not reach the very poor in large numbers. Our analysis of data in the MicroBanking Bulletin, a publication in which MFIs report financial and programmatic information, indicates a direct correlation between larger average loan size and increased financial sustainability (see fig. 5). Further, according to a CGAP assessment, donor confidence during the mid-to-late 1990s that most MFIs could both reach the very poor and become sustainable has since declined. A USAID program official said that the poverty lending requirement can work against the goal of developing sustainable MFIs, since it directs the agency to target half of its resources to those who may be least able to repay the loans. The official added that by focusing its resources on the poor and the vulnerable nonpoor, who can use loans more productively, the agency could increase the likelihood of developing sustainable institutions. USAID data indicate that the agency succeeded in reaching large numbers of women clients through its microcredit activities in fiscal years 1997 to 2001 (see fig. 6). The broader research literature we reviewed shows that microcredit activities have successfully targeted women. Generally, the literature suggests that female clients have had better loan repayment rates and lower default rates than male clients. Microcredit services are of considerable importance to poorer women, who tend to have more limited access to other financial services than men. Research also shows that micro loans have generally improved female clients’ participation in decision making at the household and business levels. Our fieldwork indicated that USAID-supported MFIs’ focus on women varied by country and project type.
In fiscal year 2001, more than two- thirds of the USAID-funded MFI micro loan clients in Peru were women, and in Egypt and Bulgaria, just under half of those clients were women. Within these countries, we found that project design affected women’s participation rates. Projects employing group lending or offering nonfinancial incentives, such as health care, tended to have a higher percentage of female clients. For example, as a result of group lending projects that began in 1999 for women from poor communities, the overall percentage of women clients across USAID-funded microfinance activities in Egypt increased from 17 percent in fiscal year 2000 to about 45 percent in fiscal year 2001. In Peru, MFIs such as ProMujer, whose clients are nearly all women, offer borrowers group loans, day care, health education, and medical referral services. (See fig. 7.) USAID has emphasized developing sustainable MFIs, and available data suggest that some progress has been made toward this goal. In fiscal year 2001, of the 294 USAID-funded MFIs that reported sustainability levels, 112 stated that they had achieved full sustainability. USAID does not collect sustainability data from MFIs that no longer receive funding. As a result, it lacks the long-term data needed to determine whether MFIs it has supported have continued to provide services in a sustainable manner and if so, for how long. The research literature we reviewed indicates that achieving full sustainability has been difficult for the broader microfinance industry. Further, the literature indicates that fully sustainable MFIs tend to reach larger numbers of borrowers. Finally, MFI sustainability can be transient, subject to factors such as mismanagement and economic shocks. Within the microfinance industry, USAID is a leader in promoting MFI sustainability. Before receiving USAID funding, an MFI must provide a plan outlining the major steps it will take to achieve sustainability. 
USAID expects MFIs to attain full sustainability within 7 years of receiving USAID assistance. USAID and other donors consider sustainability to be an important goal because it requires that MFIs manage operations efficiently and meet clients’ needs consistently. Further, achieving sustainability allows institutions to continue providing services after donor funding ceases. According to one CGAP official, “Aiming for sustainability is paramount.” USAID determines an MFI’s progress in achieving full sustainability by using an interim measure it calls operational sustainability. An MFI is considered operationally sustainable if revenues from interest and fees cover all of its operational expenses, including salaries and other administrative expenses. To be considered fully sustainable, the organization must cover both its operational and financial costs, such as the cost of borrowing funds at commercial interest rates, while taking into account inflation and any subsidies. Available data suggest that USAID-supported MFIs have made some progress toward achieving full sustainability. In fiscal year 2001, 294 USAID-supported MFIs reported sustainability levels; of these, 112, or 38 percent, said that they were fully sustainable, a percentage that had remained consistent since 1999. Because USAID does not monitor MFI sustainability once its funding stops, it lacks long-term data to determine whether the MFIs it has supported continue to be sustainable, and if so, for how long. To assess trends in MFI sustainability, we analyzed MRR data from 1995 to 2001; 45 MFIs reported sustainability data for both 1995 and 2001. Of these, 15 (33 percent) reported reaching full sustainability at the time of the 2001 survey. These percentages are similar to the percentage reported for a 2002 MicroBanking Bulletin survey of established MFIs but higher than those reported for the overall microfinance industry. 
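The two measures described above can be expressed as simple ratios. The sketch below is illustrative only: the dollar figures are hypothetical, and the way USAID actually adjusts for inflation and subsidies may differ from the simplified treatment shown here.

```python
# Illustrative sketch of the two sustainability measures described above.
# All figures are hypothetical; the exact adjustment formula USAID uses
# for inflation and subsidies may differ.

def operational_self_sufficiency(revenue, operating_expenses):
    """Revenues from interest and fees divided by operational expenses
    (salaries and other administrative costs). A ratio of 1.0 or more
    indicates operational sustainability."""
    return revenue / operating_expenses

def financial_self_sufficiency(revenue, operating_expenses,
                               financial_costs, inflation_adjustment,
                               subsidy_adjustment):
    """Revenues divided by all costs, including the cost of borrowing at
    commercial rates, adjusted for inflation and subsidies. A ratio of
    1.0 or more indicates full sustainability."""
    total_costs = (operating_expenses + financial_costs
                   + inflation_adjustment + subsidy_adjustment)
    return revenue / total_costs

# A hypothetical MFI that covers its operating costs but not its full costs:
oss = operational_self_sufficiency(revenue=1_200_000,
                                   operating_expenses=1_000_000)
fss = financial_self_sufficiency(revenue=1_200_000,
                                 operating_expenses=1_000_000,
                                 financial_costs=250_000,
                                 inflation_adjustment=50_000,
                                 subsidy_adjustment=100_000)
print(f"Operational self-sufficiency: {oss:.2f}")  # 1.20: operationally sustainable
print(f"Financial self-sufficiency:  {fss:.2f}")   # below 1.0: not fully sustainable
```

The example shows why operational sustainability is only an interim measure: an MFI can cover its day-to-day expenses while still falling short once commercial borrowing costs, inflation, and subsidies are counted.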
According to officials from CGAP, the Foundation for International Community Assistance, and Americans for Community Cooperation in Other Nations (ACCION), of the approximately 10,000 MFIs currently operating, the number that are sustainable or are expected to survive in the long term ranges from an estimated “few dozen” to 250. However, those MFIs that are currently reported as sustainable serve about 80 percent of the total microfinance clients worldwide, according to these officials. The experts we interviewed agreed that the majority of microfinance clients are served by a few large, sustainable MFIs. In some cases, USAID has continued to fund MFIs that reported achieving full sustainability and MFIs that did not achieve sustainability within 7 years of receiving USAID funding. USAID officials said that the primary reason for continuing to fund these MFIs is to expand microfinance services to new areas. For example, in Egypt, one of the institutions listed as financially sustainable has received USAID funding for 14 years to support expansion, according to mission officials. In Bulgaria, an institution that had not attained operational sustainability received USAID funding for fiscal years 1995 to 2003 and is expected to continue receiving support until fiscal year 2006, when USAID is expected to end its microenterprise activities in that country. USAID officials in Bulgaria said that the country’s macroeconomic and financial instability, along with regulatory and legal hurdles, has adversely affected MFIs. The research literature we reviewed indicates that a large majority of existing MFIs are not, and are not expected to become, fully sustainable. The literature further indicates that MFIs with a large number of clients have higher levels of financial self-sufficiency and profitability than smaller MFIs. 
For example, Bank Rakyat Indonesia’s Micro Division had about 2.8 million borrowers and 27 million savings depositors in fiscal year 2001 and has reported full financial sustainability since the early 1990s. In addition, data reported in the MicroBanking Bulletin, based on financial and portfolio data of leading microfinance institutions worldwide, indicate that institutions with a large loan portfolio and number of clients have higher levels of financial sustainability than smaller institutions (see fig. 8). MFIs are considered financially sustainable when they can cover 100 percent of their operating costs, as well as the cost of borrowing funds at commercial rates. In analyzing the status of 81 USAID-funded MFIs that reported on financial sustainability over a 5- to 7-year period, we found that over one-fifth (18) reported achieving full sustainability but later reported that they were no longer fully sustainable. MFI sustainability can change rapidly because of various factors, as the following examples from Peru illustrate. PRISMA, one of the largest USAID-funded MFIs, became unsustainable because of mismanagement and the theft of $2 million by employees. (USAID officials said that steps had been taken to recover the funds and that PRISMA will remain ineligible for future support until this situation is resolved.) CARITAS, an MFI affiliated with Catholic Relief Services, experienced declines in full sustainability at five of its eight branches during a 19-month period. Over the next 4 months, the sustainability of two branches improved, but during the subsequent 23-month period, lenient loan repayment practices at three branches resulted in a significant decline in sustainability and a consequent decline in portfolio quality at those branches. Two branches that had been fully sustainable experienced significant declines in sustainability (49 percent and 14 percent, respectively), but managed to remain sustainable, albeit at a lower level. 
Finally, exchange rate fluctuations reduced the sustainability of one MFI between fiscal year 2001 and fiscal year 2002, because loans were disbursed in dollars but collected in the local currency. Although the basic data collected for USAID’s MRR are generally reliable, certain methodological problems may impede accurate reporting on the agency’s progress in meeting key goals. Specifically, it may not be reporting accurately (1) the actual amounts obligated to microenterprise activities, (2) whether 50 percent of USAID’s resources went to the very poor, and (3) the sustainability of USAID-supported MFIs. Moreover, although the annual MRR reports on the overall activities of MFIs that receive any USAID monies, it does not provide sufficient data on USAID’s contribution to MFIs and other service providers. We assessed the reliability of basic MRR data in terms of accuracy, completeness, and consistency and found that they generally met these criteria (see app. I for our methodology and app. IV for a discussion of the MRR). These data include the number of clients, the percentage of women clients, and the dollar amounts of the institutions’ portfolios. USAID collects most of the data via surveys filled out by the institutions receiving USAID assistance. According to the contractor responsible for collecting and analyzing the MRR data, the survey questions for institutions were pretested and should be understood by respondents. (The survey is available in English, Spanish, and French.) The contractor and USAID officials stated that they review the data for completeness, accuracy, and consistency. MRR staff reported that they compared current and past year survey responses to identify inconsistent responses and investigated these responses as warranted. Although two of the three USAID missions we visited did not perform the checks recommended by USAID’s policy guidance, most of the data we examined were generally accurate. 
We observed several problems with the reporting of USAID’s obligations that may affect the reliability of the data in the MRR. First, the 2001 MRR publication includes obligations for many clients and institutions that do not meet the MRR’s definition of a microenterprise as “comprising 10 or fewer employees, including unpaid family workers, in which the owner/operator of the enterprise . . . is considered poor.” For example, of the roughly 120 institutions that reported BDS and policy development programs in the 2001 MRR publication, more than half reported serving some clients whose incomes were above the national poverty line or who owned businesses that were not microenterprises. In addition, at least one-third of all BDS clients included in the MRR in 2001 had estimated incomes above the poverty line. Furthermore, in Peru, an institution that had received an obligation of $1.2 million in 2001 reported all its clients to the MRR, even though only about one-third of its clients were microentrepreneurs. Finally, more than a quarter (12 of 42) of the USAID-supported MFIs in Eastern Europe reported to the MRR loans exceeding $10,000, despite the regional loan size limit of $10,000. Second, underreporting of microenterprise obligations in the MRR may occur. According to the USAID contractor, some of the missions that report may not list all obligations for microenterprise activities. For example, in 2000, the contractor who collects the MRR data found $7 million of underreported microenterprise obligations, which was subsequently included in the obligations totals. In addition, the MRR does not track expenditures for its microenterprise programs, and obligations reported to the MRR may not accurately reflect actual program expenditures. During our fieldwork, we found situations where obligations differed significantly from actual expenditures. 
For example, for fiscal years 1991 to 2000, the MRR reported obligations of about $160 million to microenterprise programs at the USAID mission in Egypt. However, mission officials told us that the program actually spent about 50 percent of this obligation. USAID reported that 53 percent of its obligated microenterprise funds went to the very poor in fiscal years 2000 and 2001. However, our analysis indicates that the MRR may not accurately estimate the percentage of microenterprise development funding that is targeted to the very poor. We found the following limitations: The MRR lacks information on poverty lending for a significant portion of total microenterprise obligations. In fiscal year 2000, it had data for only 32 percent of obligations, and for fiscal year 2001, it had data for only 41 percent of obligations (see fig. 9). USAID’s method for calculating overall poverty lending extrapolates from the available data and assumes that institutions that did not respond to the MFI survey provided the same amounts of poverty lending as those that did respond. However, unlike the respondents, many of the nonrespondents did not make loans or performed activities that were not directly involved with poverty lending. Nonrespondent activities, which totaled $94 million—roughly 60 percent of the total fiscal year 2001 obligations—included a range of services. For example, in 2001, USAID obligated about $5 million to support its microenterprise staff and about $2 million for research and other support activities. Many BDS programs that report on outreach to the very poor are likely to provide inaccurate data. While MFIs report the dollar value of poverty loans they have made, many BDS providers must estimate and report the number of their clients who have received poverty loans from any source—data that, according to a USAID program official, the BDS providers often lack. 
According to a USAID program official, the agency went to considerable effort to collect data from institutions that make poverty loans. USAID officials acknowledged that it is difficult to estimate future poverty lending for institutions that have not yet begun to make poverty loans and those that provide services rather than loans to clients. However, USAID’s annual MRR report does not inform the reader of the extent or impact of these limitations. Because USAID-supported MFIs use different definitions to calculate sustainability, the sustainability data reported in the MRR may not be reliable. USAID supplies differing definitions of sustainability, one for the MRR and one for its Implementation Grant Program awards. In addition, not all MFIs reporting to the MRR use the definition suggested there; for example, one MFI with affiliates in more than 20 countries requires its affiliates to report sustainability to the MRR using a more stringent definition. Further, MFIs can and do interpret the underlying MRR definition of sustainability differently; for example, some basic terms such as “financial costs” are not defined and are subject to various interpretations. As a result, the contractor responsible for collecting and analyzing these data stated that they should not be considered reliable. The MRR does not provide information on USAID’s level of contributions to MFIs and other service providers it supports, making it difficult to determine the scope of the agency’s microenterprise development funding. The MRR reports an MFI’s total number of clients and loans, regardless of the level of USAID’s contribution to that MFI. For example, in 2000, USAID obligated $400,000 to an MFI in Ecuador that reported loans of $80 million and $477,000 to an MFI in Senegal that reported loans of $336,000. 
As a result, the annual report lists a large number of clients, loans, and other activities that were not funded by USAID and in many cases were funded by other donors, foundations, and private individuals. In addition, USAID requires institutions that provide technical assistance to MFIs to complete the MFI survey. Because these technical assistance providers make no loans, reporting the number of loans and clients served by the MFIs they assist may provide an inaccurate impression of USAID’s micro lending activities. For example, we found that the institution listed in the MRR as the largest lender in Peru in 2001 did not make any loans or serve any clients directly. Instead, it provided technical assistance to more than 20 MFIs. However, the MRR reported that this institution had a $37 million loan portfolio and 20,000 clients. According to the contractor responsible for the MRR, USAID is relying increasingly on technical assistance providers that serve lending institutions. USAID has funded several studies and projects, such as the Microenterprise Best Practices project, to publish emerging best practices. In addition, the agency has provided information on best practices to missions and implementing partners through policy guidance, training, and technical assistance. USAID has also collaborated with implementing organizations, microenterprise networks, and donors in disseminating information on best practices. Several organizations have published such information, including USAID; the Committee of Donor Agencies for Small Enterprise Development, whose secretariat is hosted and staffed by the World Bank; the Donor’s Working Group on Financial Sector Development; Catholic Relief Services (CRS); and ACCION. (See app. II for a list of some key best practices.) 
According to officials from the World Bank and other organizations, USAID has recognized the importance of identifying and disseminating successful and unsuccessful attempts to design and implement microenterprise activities. USAID’s efforts include the following. Growth and Equity through Microenterprise Investments and Institutions (GEMINI). Through its GEMINI project, which ended in 1995, USAID supported more than 120 studies on microenterprise development to publicize the experiences of leading microenterprise practitioners and experts in analyzing and managing microenterprise activities. These studies focused on the growth and dynamics of microenterprise in general and new approaches to delivering financial and nonfinancial assistance. The studies included data collection strategies for surveys; strategies for MFIs in Indonesia to more profitably provide financial services to the poor; strategies for MFIs to help microenterprises grow into small enterprises; recommended options to improve support for microenterprise development in Ecuador; and analyses of the importance of providing needed equipment to MFIs. Microenterprise Best Practices project. USAID funded the Microenterprise Best Practices project, a research-oriented effort to develop and disseminate best practices. The project, completed in 2001, resulted in more than 100 reports, including concept papers, case studies, and technical tools and manuals providing guidance for designing and managing microenterprise activities. These reports included a model for Internet-based information for microenterprises in the Philippines, a description of Opportunity International’s experience in Bulgaria and Russia in managing a microfinance program during a period of high inflation, a guide for reporting financial performance, and case studies of the difficulties encountered in converting nongovernmental institutions to commercial banks in Bolivia and Panama. Guidance. 
USAID policy guidance for microenterprise development encourages missions to develop broad outreach activities to as many of the poor as possible, requires that MFIs charge unsubsidized interest rates to borrowers to cover the cost of operations, advises that missions consider and address host government policy constraints, and emphasizes the need for steady movement toward sustainability to achieve significant impact and institutional viability. Further, USAID policy states that missions providing assistance for microenterprise development should monitor and report on their outreach to the poor, including the distribution of their loan portfolios by loan amount. Training. USAID supports training courses for its own and implementing organizations’ staff in designing and executing microenterprise activities. In addition, USAID provides funding to the Microenterprise Development Institute at Southern New Hampshire University and to the Business Development Services Training Program at the Springfield Centre in Durham, United Kingdom, to support their microenterprise development training programs. USAID’s Accelerated Microenterprise Advancement Project provides scholarships to USAID staff for microenterprise development training and exchange programs. USAID also provides funding to the Small Enterprise Education and Promotion Network (SEEP), a network of nongovernmental organizations that implement microenterprise activities, for scholarships to enable USAID employees and practitioners to attend SEEP’s training courses. During our fieldwork, we found that several USAID officials working on microenterprise development had received training at these locations. Technical assistance. USAID’s Prime Grant project provides direct technical assistance to missions in planning microenterprise activities. The project provides information on advances in microenterprise development and lessons learned from the missions’ counterparts throughout the agency. 
USAID’s Accelerated Microenterprise Advancement Project provides missions technical assistance from microenterprise experts and information on ongoing research and learning in microenterprise development. Missions may also receive technical assistance in developing scopes of work, including sample scopes, and ongoing support throughout the procurement process. USAID has collaborated with implementing partners, networks of implementing organizations, and the World Bank in identifying and promoting best practices. These organizations have published handbooks, bulletins, and other documents on best practices; maintained Internet sites devoted to best practices; and sponsored seminars and workshops. For example: CRS and ACCION International, two USAID-supported nongovernmental organizations, have published handbooks to assist in designing and implementing microenterprise activities. CRS also established the Microfinance Alliance for Global Impact project to help its implementing institutions strengthen their activities. The Foundation for International Community Assistance produced an evaluation of current practices used by microfinance institutions in assessing client poverty levels. Implementing partners such as ACCION International and Opportunity International also sponsor regional conferences, workshops, and seminars on best practices. USAID has provided funding to SEEP to support its efforts to promote best practices. According to SEEP’s Executive Director, USAID has been one of its leading supporters, providing funding and technical assistance and participating in the network’s conferences and workshops. SEEP has published two studies on microenterprise best practices. The network also sponsors conferences and workshops on improving microenterprise activities. 
USAID is a member of the World Bank’s Committee of Donor Agencies for Small Enterprise Development and Donor’s Working Group on Financial Sector Development, which published guiding principles for designing and implementing microenterprise activities in 1995 and in 2001. The World Bank has also published reports to support the design and implementation of microenterprise activities, such as a handbook to assist in operational planning and internal audits of activities. The group also provides information through the MicroBanking Bulletin and the Microfinance Information Exchange. We found that microenterprise projects—including those funded by USAID—can help alleviate some of the impacts of poverty on individuals, households, and families. However, evidence suggests that microfinance alone has not lifted large numbers of the poor over the poverty line. In addition, despite USAID’s use of micro loans to target the very poor, as mandated, few loans appear to be reaching this group, in part because loan size is an inadequate targeting method. Other evidence suggests that loans to the very poor can place some borrowers at risk of unmanageable debt and may be more beneficial when offered with other financial services such as savings and insurance and with development assistance such as grants, health services, education, and housing. Efforts to reach the very poor that do not recognize and address these key concerns may not be fully effective. Despite the general reliability of its data, certain methodological weaknesses in USAID’s MRR system may prevent the agency from reporting with precision its program expenditures, the percentage of its funds going to the very poor, the percentage of MFIs that are sustainable, and the extent of USAID’s contributions to the institutions it supports. We recommend that the Administrator of USAID review the agency’s MRR system with the goal of ensuring that its annual reporting is complete and accurate. 
Specifically, the Administrator should review and reconsider the methodologies used for collection, analysis, and reporting of data on annual spending targets, outreach to the very poor, MFI sustainability, and the contribution of USAID funding to the institutions it supports. USAID provided written comments on a draft of this report (see app. V). USAID concurred with the report’s recommendation that it make improvements in its MRR. The agency cited three points with which it took issue, related to reaching the very poor, the sustainability of MFIs it supports, and its reporting of contributions to institutions. USAID stated that the number of small loans it had issued indicated that it was reaching the very poor. As discussed in our report and acknowledged in USAID’s comments, loan size is now recognized as an inaccurate indicator of the extent to which this program is reaching the very poor. Given this limitation, we reviewed detailed impact studies that collected information on borrowers’ economic status (see app. III for a summary of key studies on this topic); further verified this information through detailed discussions with international experts, USAID officials, and their implementing partners working with USAID-funded programs; and conducted detailed program reviews in three countries. The general consensus across the studies, experts, and program implementers is that microfinance projects serve those clustered around the poverty line but generally do not reach the very poor. USAID also stated that, contrary to our report, the agency uses a single definition of sustainability, and it inferred that the sustainability data reported in the MRR were accurate. We disagree with USAID on these points: We documented several definitions and interpretations that affect the reliability of the reported data, and we have added information to the report to clarify our concern regarding the agency’s method for measuring microfinance institutions’ sustainability. 
As noted in the report, 38 percent of MFIs that received USAID funding in fiscal year 2001 reported that they had achieved financial sustainability. The higher figure cited in USAID’s response combined data on operational and financial sustainability, despite the fact that operational sustainability is, by USAID’s definition, an interim measure toward the goal of achieving full financial sustainability. USAID stated that it would be difficult to allocate the microenterprise accomplishments reported in the MRR between USAID and other donors. However, it said that it plans to include more explicit language in the MRR to indicate that results are generally reported for entire institutions and that the resources of other donors and supporters contributed to the results. In its comments, USAID also agreed to (1) provide more explicit instructions on what activities to include in the MRR; (2) revise the formula for estimating the extent of funding that benefits the very poor and include in its annual report additional language concerning the formula; (3) improve the accuracy of data on obligations and poverty lending; and (4) adopt a new standardized definition of sustainability if one is adopted by the field. We believe that these improvements would be responsive to our recommendation and, if made, could improve the accuracy and balance of the MRR. We will send copies of this report to interested congressional committees as well as the USAID Administrator. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or at gootnickd@gao.gov. Other contacts and staff acknowledgments are listed in appendix VI. To determine whether the U.S. 
Agency for International Development’s (USAID) microenterprise development program is meeting its key objectives, we first identified those objectives by reviewing the agency’s policy guidance and the pertinent legislation. We also held discussions with USAID officials in Washington, D.C. To determine the results of the agency’s microenterprise assistance, we met with officials and reviewed documents from USAID and its implementing partners in Washington, Peru, Egypt, and Bulgaria, and we met with program beneficiaries in these three countries. We selected Peru and Egypt because they have mature programs that have existed since the late 1980s and received high levels of USAID obligations over the past 10 years. These countries also represent culturally different areas (e.g., the program in Peru serves a large indigenous population, and the primarily business-led program in Egypt serves a combination of urban and rural areas). We selected Bulgaria because the program was relatively new; per capita income and the gross domestic product were high; and participants were reported by USAID to have higher educational levels and to be operating different types of businesses than in Africa, Asia, or Latin America. In addition, we reviewed a broad range of program and academic studies on the issues and conducted interviews and round-table discussions with academics and practitioners who have expertise regarding the ability of microenterprise activities to meet USAID’s objectives. We also reviewed USAID studies that pertained to countries we visited, as well as studies that assessed project impact related to key program objectives. Because most available USAID data and most of the research literature focuses on microfinance, particularly micro loans, we concentrated our review primarily on this aspect of microenterprise development. 
To assess the reliability of the Microenterprise Results Reporting (MRR) data, we reviewed the survey questionnaires that are used to collect the data, noting strengths and weaknesses in the survey design. We also conducted a variety of analyses of the MRR database. Our analyses focused on the data on obligations supplied by the USAID missions and the data on microenterprise activities supplied by microfinance institutions (MFI), business development service (BDS) providers, and policy service providers from 1995 through 2002. We conducted interviews focused on data reliability with the contractor that manages the data collection and analysis and drafts the MRR reports. In these interviews, we asked how the survey data are collected, what quality checks are performed, and what other internal controls are in place. On our field trips to Peru, Egypt, and Bulgaria, we conducted data reliability interviews with officials at all three USAID missions and at six institutions that had received USAID funding. During our meetings with USAID missions and the institutions, we conducted spot checks of key MRR data to assess their reliability. We found that the reliability of the lending and BDS institutions’ data on the percentage of women clients sufficed for our analysis, provided we noted that some BDS providers could not directly estimate these percentages. The data on lending institutions’ sustainability were of uncertain reliability because of inconsistencies in the way respondents interpreted the MRR survey question; however, these data were consistent with the testimonial and documentary evidence that we gathered. To examine USAID’s role in identifying and disseminating best practices, we reviewed (1) USAID policy guidance, (2) USAID country strategies and annual reports for the three countries we visited, and (3) other relevant USAID documents. 
We also reviewed a wide body of literature on the subject, including World Bank publications and the MicroBanking Bulletin; analyses of best practices produced by donor groups; handbooks, analyses, and other documents produced by USAID implementing organizations such as Catholic Relief Services and Opportunity International; and studies and analyses by recognized microenterprise experts. We interviewed USAID officials in the Microenterprise Development Division, the regional bureaus that oversee mission activities, and the countries we visited, including officers responsible for economic growth and microenterprise activities. We also interviewed officials of the World Bank and from implementing organizations in Washington, D.C.; Baltimore, Maryland; and the countries we visited. Finally, we attended a roundtable on best practices whose members included recognized experts on microenterprise development from the World Bank, implementing organizations, and academia. The World Bank also provided informal comments on a draft of this report. We conducted our review from December 2002 through September 2003 in accordance with generally accepted government auditing standards. Best practices are processes, practices, and systems that organizations have used and that are widely recognized as improving performance in achieving program goals. Although the research literature and our fieldwork indicate that no standard manual of best practices exists for microenterprise development, a core of preferable strategies (best practices) has emerged within the microenterprise industry, which comprises USAID, other donors, and their implementing partners. Perform due diligence reviews. USAID officials require their implementing partners to carefully review all candidates and to pay particular attention to choosing institutions with strong management skills. 
Officials from Catholic Relief Services (CRS), a nongovernmental organization that manages a microenterprise activity in Bulgaria, chose their implementing partner based on the partner’s strong management experience. Develop broad outreach. At USAID missions in Peru, Bulgaria, and Egypt, microenterprise activities included provisions for small loans to poor microentrepreneurs with no other affordable credit alternatives. A USAID-supported MFI in Egypt recently initiated a lending program that specifically provides small loans to the poor and is instituting a grant program to help the very poor become eligible for micro loans. In Peru, to target the poor and very poor, USAID chose to implement microenterprise activities in several of the country’s poorest regions. USAID-supported institutions in Bulgaria and Egypt offered financial incentives to loan officers, based in part on the number of loans in their loan portfolio, to encourage them to attract clients. Increase access to services. At implementing organizations in Peru, Bulgaria, and Egypt, loan officers assist poor and very poor clients in filling out the loan applications and attempt to review and approve loan requests within a few days. In addition, because the poor usually lack the collateral needed to qualify for loans, USAID supports collateral substitution activities to attract the poor and very poor who would have no other access to credit. USAID missions in Bulgaria, Egypt, and Peru conduct microenterprise activities that use group lending as a collateral substitute. For individual loans, an implementing organization in Bulgaria requires that clients obtain written loan guarantees from acquaintances as a collateral substitute. Adopt an appropriate lending model. Some models, such as group lending or village banking, may be more appropriate than individual lending programs for certain activities or institutions. CRS adopted a group lending model to serve the needs of the poor in Bulgaria. 
(This model also supports the CRS goal of advancing social and economic justice by serving the poorest.) A study of group lending activities in Africa, Asia, and Latin America indicated that the more successful group lending models vary according to the local culture. For example, in South Africa, the South African Get Ahead Foundation adapted the traditional African rotating savings program to create similar group lending activities. Offer an array of services. In addition to credit, services such as savings options and insurance are valuable to clients and, by providing other revenue sources, can assist MFIs in reaching sustainability. In Indonesia, a CRS MFI established a savings program in a village bank for its microenterprise clients. ACCION International, using a group lending model, has each member contribute a minimum amount to a common pool of savings. Establish appropriate pricing policies for services. USAID requires financial institutions that receive microenterprise funding, even those that emphasize lending to the very poor, to charge unsubsidized interest rates. For example, CRS specifies in its microenterprise handbook that the MFIs it manages should charge unsubsidized interest rates. In Bulgaria, Egypt, and Peru, the annual interest rates can be as high as 40 percent, although the repayment period is often less than 1 year. Control loan delinquency rates. Loan delinquency rates greater than 10 percent have been found to seriously undermine MFI sustainability. Several MFIs offer financial incentives to their loan officers partially based on the repayment rates of their loan portfolio. MFIs in Peru, Bulgaria, and Egypt use different methods to determine financial incentives to reward their loan officers, but all base the amount of financial incentive on the loan repayment rate of the officer’s portfolio. 
At one implementing organization in Egypt, loan officers must maintain at least a 97 percent repayment rate on their loan portfolios to be eligible for financial incentives. An MFI in Bulgaria that provides individual loans requires that five people provide guarantees for each loan. The MFI also employs a loan collection officer and an attorney to file in court to collect on delinquent loans. MFIs in Egypt and Bulgaria that focus on poverty lending use a group lending model to provide for prompt loan repayment. Address potential policy constraints. USAID guidance advises missions to consider the local economic environment when designing and implementing microenterprise activities. For example, the guidance advises missions not to provide assistance to MFIs during periods of high inflation. USAID officials told us the agency suspended a microenterprise activity because the Government of Egypt’s tax policies were too restrictive. Grant agreements may also include a component focused on policy and regulatory reforms to facilitate microenterprise activity. Such reforms may include permitting financial institutions to offer savings to clients, streamlining business registration procedures, and assisting microentrepreneurs in registering and obtaining title to their businesses’ assets. A USAID grant agreement in Bulgaria required the implementing organization to coordinate legislative efforts on policy reform within 3 years. Require transparency and accountability in operations. USAID requires implementing partners and MFIs to report annually on financial and operational performance. In Peru, a USAID-funded technical assistance provider conducts audits of more than 40 MFIs to assess their implementation-related practices. Provide adequate resources to successfully manage a microenterprise activity. 
Effective management information systems and other assets are necessary for implementing organizations and financial institutions to make decisions, motivate performance, and provide accountability over funds. USAID provides resources to help implementing partners and MFIs improve their management capacity. For example, most USAID grant agreements provide funding to rent office space, to purchase management information systems, including the computers needed to track outstanding loan balances and due dates, and to purchase other equipment such as office furniture. Provide necessary training. Training for implementing organizations and clients in areas such as financial management and computers is often needed to ensure that MFIs manage operations effectively. USAID grant agreements may provide funding for training of implementing organizations’ staff. Implementing organizations, such as an Opportunity International-funded MFI in Colombia, include weekly training of clients in areas such as marketing and product presentation. A USAID-funded study of five nongovernmental organizations implementing microenterprise activities concluded that heavy investment in training was a factor in the success of village banks. Provide incentives to loan officers. USAID-supported MFIs in Bulgaria and Egypt provide loan officers with incentive-based salaries. Criteria for the incentives included the number of clients recruited and the clients’ loan repayment rates. Incentives can double loan officers’ monthly earnings. Require a manageable loan portfolio for loan officers. Implementing organizations in Bulgaria and Egypt, such as CRS and Opportunity International, limit the number of clients one loan officer can manage. The number can vary depending on the ability of the officer, but few manage more than 300 clients. Also, because incentives are based primarily on loan default rates, officers are motivated to limit their pool of clients to a size they can manage effectively. 
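The incentive rules described above combine a minimum portfolio repayment rate (97 percent at one Egyptian organization) with a manageable client count (rarely above 300). The following is a minimal illustrative sketch of such an eligibility check; the function names and the dollar amounts are hypothetical, not drawn from any USAID or MFI system.

```python
# Illustrative sketch of loan-officer incentive eligibility rules like those
# described in the report. Thresholds mirror figures cited there; everything
# else (names, amounts) is hypothetical.

REPAYMENT_THRESHOLD = 0.97   # minimum portfolio repayment rate for incentives
MAX_CLIENTS = 300            # few officers manage more clients than this

def repayment_rate(amount_due, amount_repaid):
    """Share of amounts due that were actually repaid."""
    if amount_due == 0:
        return 1.0
    return amount_repaid / amount_due

def incentive_eligible(amount_due, amount_repaid, client_count):
    """Eligible only with a high repayment rate and a manageable portfolio."""
    return (repayment_rate(amount_due, amount_repaid) >= REPAYMENT_THRESHOLD
            and client_count <= MAX_CLIENTS)

# Hypothetical officer: 98 percent repaid across 250 clients
print(incentive_eligible(100_000, 98_000, 250))  # True
print(incentive_eligible(100_000, 95_000, 250))  # False: repayment below 97%
```

Because eligibility depends on the repayment rate of the officer’s own portfolio, the rule gives officers a direct stake in limiting delinquency, which is the mechanism the report describes.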
Monitor and evaluate performance. Donors and implementing organizations should monitor the performance of the MFIs they support to ensure they are meeting program goals. These goals can include a focus on the poor and women, as well as financial indicators, such as repayment rates and progress toward sustainability. They should also perform audits on a regular basis to ensure the accuracy of information reported. USAID typically performs midterm and final evaluations of its grant agreements. A cross-country study of village banks in seven countries concluded that oversight of the banks’ operations was a critical factor in their success. Further, USAID requires annual financial reports of implementing organizations. Promote MFI sustainability. This goal supports (1) sound financial practices, (2) expanding and maintaining outreach, and (3) reducing dependency on donor support. USAID and other implementing organizations encourage MFIs to charge unsubsidized interest rates to cover the cost of operations. A USAID-funded study of successful microenterprise activities in Indonesia and Bangladesh concluded that MFIs must have a commitment to, and a plan for, reaching sustainability. We collected, reviewed, and analyzed a set of 22 studies, 20 of which provide an overview of existing research and practice in microenterprise or its components. The purpose of our review was to obtain general information on the primary findings, issues, and debates in microfinance and microenterprise and to complement other USAID-specific components of our data collection efforts. We selected these studies on the basis of three criteria: (1) each study was published in 1998 or later, (2) each was peer reviewed or published by journals or publishers respected in the field, and (3) each was recommended by 2 or more of 15 microfinance experts we consulted. We also included two case studies (published and peer reviewed), because 5 or more of the experts we consulted recommended them. 
We ensured that the studies selected covered a range of microenterprise subtopics and scientific journals relevant to economic and social development issues. A primary reviewer summarized each study using a data collection instrument developed specifically for the purposes of this review. A secondary reviewer then verified each study summary. To monitor and evaluate its microenterprise portfolios, USAID developed a data collection process and information management system known as the Microenterprise Results Reporting (MRR). The term also refers to an annual report that presents the agency’s financial data—primarily amounts it obligates for microenterprise development—and programmatic information. The MRR data are collected through annual surveys of USAID staff in headquarters and at overseas missions and the institutions that receive USAID funding. A USAID contractor is responsible for data collection and the management information system. Beginning early in each fiscal year, the contractor requests obligations data for microenterprise projects from USAID headquarters and missions. The mission staff report current year obligations and identify the recipient institutions, categorizing them as microfinance, business development services (BDS), or policy services providers. In addition, the mission staff identify institutions that received obligations in previous years for ongoing projects. Separate surveys have been designed for the microfinance institutions (MFI), BDS providers, and policy service providers. The survey for MFIs asks about outstanding loan balances, the number of loans to women, maximum loan sizes, loan loss, loans below the poverty lending threshold percentage, the number of rural clients, savings, and the financial sustainability of the institutions. 
The survey for BDS providers asks about the types of services provided, the number of clients overall, the number of women clients, the number of rural clients, the number of clients with “poverty loans,” data sources for clients, the clients’ industrial sector, the institutions’ competitors, the demand for BDS, and exit strategies. The policy service providers survey asks about the types of institutions and for descriptions of policy issues covered. The number of respondents to the annual surveys during 1998 to 2001 has remained fairly constant, ranging from 361 to 411. Most MRR respondents complete the MFI survey. In 2000, for example, 512 surveys were sent out; 282 of the 361 respondents completed the MFI survey, 99 completed the BDS survey, and 18 completed the policy survey. The reported response rates rose in recent years, from 56 percent (411 surveys) in 1998 to 84 percent (492 surveys) in 2001. USAID contractor staff analyze the data and, in some cases, apply methodologies the agency has developed to assess whether it has met particular program goals, such as its poverty-lending target. One such methodology is designed to weight the individual institutions’ obligations by the amounts of loans that are considered poverty loans. The data and the analyses are presented in the annual reports, which also provide examples of USAID-funded microenterprise projects. In addition to publishing the data in the MRR reports, the contractor also publishes selected data on a Web site accessible to the agency’s missions, institutions that receive USAID funding, and other interested parties. The following are GAO’s comments on USAID’s letter dated November 6, 2003. 1. USAID stated that our report does not address the full range and scope of its microenterprise strategy and program. Our report focuses primarily on microfinance, since this component of USAID’s microenterprise program has received, and continues to receive, the bulk of the agency’s funding. 
Microfinance has also been the principal focus of long-term studies funded by USAID and others. We found no long-term studies or evaluations that assessed the impact of USAID’s support for Business Development Services (BDS) or its policy work in the area of microenterprise development. Our discussions with USAID employees in Peru, Egypt, and Bulgaria regarding BDS and policy initiatives yielded some information on these efforts, but we found that no data on these efforts had been collected systematically. 2. USAID said that it has long used loan size as a proxy for services to the very poor, recognizing that it is imperfect but a statutory requirement. Because of the limitation of loan size as a proxy, we analyzed impact studies and evaluations funded by USAID and others that collected information on borrowers’ economic status to determine the extent to which microfinance has reached the very poor. These studies, based on in-depth research across multiple countries and settings, found that the very poor are rarely reached with micro loans, for reasons outlined in this report (see app. III for a summary of key studies on this topic). To complement information contained in these studies, we discussed this issue at two roundtable meetings with international experts; we also interviewed USAID officials and nongovernmental organization officials working with USAID-funded programs in the countries where we conducted fieldwork. The consensus across the literature and among the experts is that microfinance projects often have difficulty reaching the very poor. 3. USAID said that the MRR has used a single, clear definition of sustainability in questionnaires to implementing partners. We disagree with USAID on this point, and we have added information to this section to clarify our concern regarding the agency’s lack of a standardized method for measuring microfinance institutions’ (MFI) sustainability. 
As noted in the report, 38 percent of MFIs that received USAID funding in fiscal year 2001 reported that they had achieved financial sustainability. In addition, the figures cited in USAID’s response combine data on operational and financial sustainability, despite the fact that operational sustainability is defined in USAID’s policy guidance as an interim measure toward the goal of achieving full financial sustainability. 4. USAID stated that allocating the impact between USAID and other donors would be impractical and methodologically questionable. However, it also says that it plans to include language in its annual report indicating that many of USAID’s awardees receive support from other sources as well, and that these sources deserve a share of the credit for the awardees’ impacts. 5. See comment 2. USAID states that it made about 2 million loans in fiscal year 2001 that met the statutory standard for service to the very poor. The agency said that it also utilized other methods of reaching this group. As noted in the report, Congress recognized the limitations of loan size as an indicator for targeting and reaching the very poor and directed USAID to develop more accurate methods to ensure that this group is reached in the future. 6. USAID said that the report suggests that loans to the very poor can have negative consequences and may be a significant or widespread problem. As noted in the report, the very poor can benefit from credit, but some evidence suggests that microcredit should complement, not substitute for, investments in core services, such as health and education. 7. USAID states that its record of supporting MFIs and achieving sustainability is strong. 
With regard to the issue of MFIs’ achieving full financial or operational sustainability, we note in the report that USAID’s policy establishes full financial sustainability as its goal; that is, to develop fully financially sustainable MFIs, capable of providing services indefinitely without USAID or other donor support. We did not report data on operational sustainability because this measure is defined in USAID’s policy manual as a “useful interim standard of financial performance.” Accordingly, we focused on full sustainability, a standard that, if widely attained, could ensure that these institutions would be available to provide these services in the future. Also, see comment 3. 8. USAID said the report suggests that sustainability might not be consistent with serving very poor clients. Our report does not state or suggest that sustainability might not be consistent with serving very poor clients. We agree with USAID that attaining full financial sustainability may be more difficult for MFIs serving greater numbers of very poor borrowers. 9. USAID incorrectly attributed to us an audit conducted in Egypt; this audit was conducted by the USAID Inspector General. 10. USAID stated that its policy allows microenterprise funds to be obligated for activities that do not meet the definition of microenterprise development found in the MRR and that microenterprise awardees do not have to solely serve micro-scale enterprises. However, our report addresses the reporting of such activities, not the policy. According to the 2001 MRR, “Microenterprises are small, often informally organized businesses that are owned and operated by poor and very poor entrepreneurs. USAID defines a microenterprise as one that comprises 10 or fewer employees, including unpaid family workers, in which the owner/operator of the enterprise…is considered poor. 
By limiting its definition of microenterprises to those whose owners/operators are poor, USAID ensures that the focus of its efforts remains on the most vulnerable households in higher-risk environments.” Despite this definition, the annual MRR reports present data on a wide variety of activities that do not meet this definition. This includes its policy work, much of its BDS work, its obligations to small and medium businesses, and loans to those who are not poor. As a result, it is uncertain how much of USAID’s funding is going to poor microentrepreneurs. We believe that USAID should be more transparent in reporting these results. In addition, despite USAID’s statement that the MFIs and missions report only activities that meet the definitions of microenterprise as defined in the MRR, we found no evidence of this in our work in three countries or our analysis of MRR data. As noted in this report, we found numerous examples of the missions’ and implementing partners’ reporting activities to the MRR that did not meet the MRR definition. Based on USAID’s comments, we have modified this section of the report to further clarify our position and the basis for these observations. USAID also said it will include more explicit guidance on its Web site to address the issue. This could potentially improve this aspect of USAID’s reporting. 11. USAID stated that the report focused on a narrow definition of program impacts. This report does not take a narrow view of the impacts of USAID’s microenterprise program. 
In addition to our assessment of its impact on poverty alleviation and poverty reduction, there are sections focused on reaching the poor and very poor and other services these groups may need; outreach to women; the sustainability of MFIs; the reliability of the MRR; best practices identified by the microenterprise development industry; USAID’s efforts to identify and promote best practices; whether USAID incorporates best practices in its projects; and a synopsis of 22 key studies. In both the body of the report and appendix III, we include considerable discussion of the extent to which microfinance can help alleviate poverty by reducing risk and vulnerability. 12. USAID states that microenterprise development can be a successful intervention to shift from humanitarian to development assistance following conflicts and natural disasters. The USAID policy manual (section II.H.4.), titled “Avoiding Poor Prospects for Microfinance Development,” states that microfinance should not be viewed as a response to alleviate the large-scale human suffering created by wars and civil conflict. It notes that such assistance will inevitably conflict with the basic requirements of building sound financial institutions. Despite this guidance, we found that USAID/Bulgaria used emergency funds provided for the Danube River Initiative to respond to the economic hardship resulting from the Kosovo crisis, providing funding to MFIs that committed to work in this region. Officials of the implementing partner told us that this humanitarian initiative, while important from a social perspective, proved to be financially unsustainable in light of the many challenges refugees faced. Accordingly, the implementing partner terminated its programs in these regions, according to USAID officials in Bulgaria. 
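The distinction at issue in comments 3 and 7, between operational sustainability (the interim standard) and full financial sustainability, can be illustrated with the ratio measures commonly used in the microfinance industry. The formulas below follow that common industry usage, not USAID’s exact MRR definitions, and the figures are hypothetical.

```python
# Illustrative sketch of common industry sustainability measures (not
# USAID's exact MRR definitions). Operational self-sufficiency asks whether
# revenue covers day-to-day costs; financial self-sufficiency restates costs
# as if subsidized funding were priced at market rates.

def operational_self_sufficiency(operating_revenue, operating_expense):
    """Ratio above 1.0 means revenue covers operating costs (interim standard)."""
    return operating_revenue / operating_expense

def financial_self_sufficiency(operating_revenue, operating_expense,
                               subsidy_adjustment):
    """Ratio above 1.0 means revenue would cover costs without donor subsidies."""
    return operating_revenue / (operating_expense + subsidy_adjustment)

# A hypothetical MFI can clear the interim bar yet fall short of full
# sustainability once subsidies are taken into account.
oss = operational_self_sufficiency(1_200_000, 1_000_000)          # 1.2
fss = financial_self_sufficiency(1_200_000, 1_000_000, 300_000)   # ~0.92
print(round(oss, 2), round(fss, 2))
```

This gap is why combining the two measures in one figure, as the report notes USAID’s response did, can overstate how many institutions could operate indefinitely without donor support.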
In addition to the person listed above, Edward George, Jim Strus, Martin De Alteriis, Mona Sehgal, David Dornisch, Yesook Merrill, Valerie Caracelli, and Reid Lowe made key contributions to this report.
Microenterprises--small businesses owned and operated by poor entrepreneurs--have the potential to help the world's poorer populations. For this reason, the U.S. Agency for International Development (USAID) included microenterprise development in its programming. In 2001, the agency reported that it was conducting microenterprise projects in 52 countries and had obligated almost $2 billion since 1988 to support its program. The program supports micro loans, among other services, to assist poor entrepreneurs. Since 1996, USAID has annually reported the program's results. To help Congress oversee USAID's management of its microenterprise development program, GAO was asked to (1) determine the extent to which the agency's microfinance activities are meeting the program's key objectives, (2) assess the reliability of USAID's reporting on its overall microenterprise activities, and (3) examine the agency's role in identifying and disseminating microenterprise best practices. USAID's microfinance activities have met some, but not all, of the agency's microenterprise program objectives. These objectives are to (1) reduce poverty among participants; (2) target the poor and very poor; (3) encourage women's participation; and (4) develop sustainable microfinance institutions (MFI). First, regarding reducing poverty--defined as alleviating its impacts or lifting and keeping a large number of people above the poverty line--GAO found that microfinance can help alleviate some impacts of poverty, incrementally improving borrowers' income levels and quality of life and offering an important coping mechanism to poor workers and their families. However, there is little evidence that it can lift and keep many people above the poverty line. Second, microfinance generally has served the poor clustered around the poverty line but not the very poor. Third, USAID has successfully encouraged the participation of women, who have comprised about two-thirds of micro loan clients since 1997. 
Fourth, USAID has emphasized the importance of MFI sustainability. In fiscal 2001, of 294 USAID-supported MFIs that reported on sustainability, 38 percent reported achieving full sustainability--a percentage consistent since 1999. The basic data in USAID's Microenterprise Results Reporting (MRR) system are reliable, but certain methodological problems may affect the accuracy of some of the agency's reporting on key program objectives. Specifically, USAID may not be reporting accurately (1) the amounts it has obligated to microenterprise activities; (2) whether 50 percent of its resources went to the very poor, as required by Congress; and (3) the sustainability of USAID-supported institutions. Further, although the agency reports annually on the activities of institutions it supports, it does not show the percentage of those institutions' total funding that its contribution represents. USAID has identified and disseminated microenterprise best practices, providing information to its missions and implementing partners through policy guidance, training, and technical assistance. In addition, USAID has collaborated with microenterprise development provider networks and others to publish information about these practices.
FSA was established in 1994 during the reorganization of the Department of Agriculture and operates through a network of field offices located across the United States. The agency provides a variety of services, including financial assistance to new or disadvantaged farmers and ranchers who are unable to obtain commercial credit at reasonable rates and terms. FSA loans available to farmers and ranchers include direct or guaranteed ownership loans and direct or guaranteed operating loans. Direct ownership loans are for buying farm real estate and making capital improvements. Direct operating loans, which are made to beginning farmers and ranchers who are unable to qualify for guaranteed operating loans, are for the purchase of items to help daily farm operations. Guaranteed farm loan program loans are for the same purposes as direct farm loan program loans, but they are made by private third-party lenders and are guaranteed by FSA for up to 95 percent of the principal loan amount. Our objectives were to determine whether (1) FSA was promptly referring eligible farm loan program loans to FMS for collection action, (2) any obstacles were hampering FSA from referring farm loan program loans to FMS, and (3) FSA was appropriately using exclusions from referral requirements. To address these objectives, we interviewed officials from FSA to obtain an understanding of the FSA referral process and any obstacles that were hampering the referral of eligible debts. We reviewed FSA’s policies and procedures on debt referrals and examined the agency’s current and planned efforts to refer eligible delinquent debts. We obtained and analyzed the TROR for the fourth quarter of fiscal year 2000, which was the most recent year-end report available at the completion of our fieldwork, and other financial reports prepared by FSA, and held discussions with FSA officials to determine whether the agency was appropriately using exclusions from referral requirements. 
In addition, we reviewed responses to questions about FSA’s debt collection practices that you submitted to the deputy secretary of agriculture in October 2001 and used information from the responses to clarify or augment our report, where appropriate. To determine whether FSA’s use of exclusions from referral requirements was appropriate, we used statistical sampling techniques to select 15 FSA field offices from the four states with the highest dollar amounts of reported debt excluded from TOP as of September 30, 2000. Using electronic and hard-copy files obtained from Agriculture, we reviewed all 263 loans from the 15 selected offices that were more than 180 days delinquent and had been reported as excluded from referral to FMS as of September 30, 2000, for bankruptcy, forbearance/appeals, foreclosure, and DOJ litigation. (Appendix I contains additional information on the sampling method and the results.) Based on the results of our review, we estimated the percentage of loans inappropriately excluded as of September 30, 2000, in the four states from which the sample offices were drawn. Because we found numerous errors in the exclusion categories we tested, we did not test other reported exclusions from referral to FMS for cross-servicing, such as internal offset. We did not review FSA’s process for identifying and referring debts to Treasury for cross-servicing because the agency had suspended all such referrals in April 2000 pending development of guidelines to implement a new referral policy. FSA issued the new guidelines in July 2001 and, according to an Agriculture official, the first referral to FMS under this new policy was made in September 2001. We did not review implementation of FSA’s new guidelines, since the procedures were implemented near the completion of our fieldwork. We conducted our review from November 2000 through October 2001 in accordance with U.S. generally accepted government auditing standards. 
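The review described above examined all 263 excluded loans from 15 sampled field offices and then projected an error rate to the four states sampled. A simplified version of that kind of estimate is a sample proportion with a normal-approximation confidence interval; GAO’s actual estimate reflected its office-level (cluster) sample design, and the counts below are purely illustrative.

```python
# Simplified sketch of estimating the share of loans inappropriately
# excluded from referral, as a sample proportion with a 95 percent
# normal-approximation confidence interval. Ignores the clustering of
# loans within offices; the counts used are hypothetical.
import math

def proportion_estimate(errors, reviewed, z=1.96):
    """Return (point estimate, CI lower bound, CI upper bound)."""
    p = errors / reviewed
    half_width = z * math.sqrt(p * (1 - p) / reviewed)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical: 60 of 263 reviewed exclusions judged inappropriate
p, lo, hi = proportion_estimate(60, 263)
print(f"{p:.1%} inappropriately excluded (95% CI {lo:.1%} to {hi:.1%})")
```

A cluster design would widen the interval relative to this simple-random-sample formula, since loans reviewed within the same office tend to share the same local recordkeeping practices.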
We did not independently verify the reliability of certain information that FSA provided to us, such as debts more than 180 days delinquent and debts classified as currently not collectible (CNC), and information in FSA’s loan-accounting and loan-servicing systems. We requested written comments on a draft of this report from the secretary of agriculture or her designated representative. The written response from the administrator of FSA is reprinted in appendix II. As of September 30, 2000, FSA reported having about $8.7 billion in direct farm loan program loans. As shown in table 1, the agency reported about $1.7 billion of direct farm loan program loans more than 180 days delinquent, including debts classified as CNC, as of September 30, 2000. Of this amount, FSA reported referring about $934 million to TOP and excluding about $732 million from referral to TOP. FSA reported that it had referred only $38 million of loans to FMS for cross-servicing as of September 30, 2000. It is FSA’s policy to refer delinquent loans for cross-servicing only if collateral has been liquidated and a deficiency remains. In addition, as discussed in more detail later in this report, FSA suspended cross-servicing referrals from April 2000 until September 2001 while it developed and implemented a new cross-servicing referral policy. Since DCIA’s enactment, several obstacles have impeded FSA’s implementation of the act’s referral requirements. Loan system limitations have resulted in the automatic exclusion of certain types of debts without any review for eligibility and the inability to pursue collection from codebtors through TOP. FSA’s failure to ensure that field offices routinely updated the status of delinquent loans has led to inappropriate exclusions from referral and inaccurate reporting of delinquent and eligible debt amounts to Treasury. A change in referral policy led to a suspension of all delinquent loan referrals to FMS for cross-servicing. 
FSA's policy of referring delinquent debt to FMS only once a year resulted in delayed referrals and may have reduced collections. Finally, FSA did not take action until recently to recognize losses on guaranteed farm loan program loans as nontax federal debt. According to FSA, until certain steps, such as software implementation, are completed, FSA cannot use the collection tools provided under DCIA to pursue collection directly from debtors on guaranteed farm loan program loans. Of the $694 million of debt reported by FSA as excluded from referral for bankruptcy, forbearance/appeals, foreclosure, and DOJ litigation, about $295 million consists of judgment debts, including deficiency judgments, which are court judgments requiring payment of a sum certain to the United States. According to FSA officials, deficiency judgments—unlike some other types of judgment debts—are eligible for TOP and should be referred to FMS. However, FSA's Finance Office in St. Louis automatically excluded all judgment debts for direct farm loan program loans from referral to FMS because of automated system limitations. Although the system does contain information indicating which debts are judgment debts, it cannot currently accommodate information on subcategories of judgment debts. Therefore, FSA staff cannot use the agency's automated system to identify deficiency judgments for referral. As a result of our inquiries, FSA officials initiated a special project in May 2001 to manually identify all deficiency judgment debts for direct farm loan program loans so that such debts could be referred to FMS. Even though FSA reported having referred $934 million of direct farm loan program loans to FMS for TOP as of September 30, 2000, the agency has lost and continues to lose opportunities to maximize collections on these loans because it does not report information on codebtors to FMS. 
According to FSA officials, the vast majority of direct farm loan program loans have codebtors, who are also liable for loan repayment, but FSA’s automated loan system cannot record more than one taxpayer identification number for each loan. Because taxpayer identification numbers are required for referrals to FMS for TOP, FSA cannot refer codebtors on farm loan program loans to FMS. An FSA official said that the agency first recognized the need to have codebtor information in the system in 1986 to facilitate debt collection but that higher-priority systems projects have precluded FSA from completing the necessary enhancements to allow the system to accept more than one taxpayer identification number per debt. FSA was planning to incorporate this modification in the new Farm Loan Program Information System scheduled for implementation in fiscal year 2005, but during the December 5, 2001, testimony before your subcommittee, the agency committed to make the change by December 2002. FSA field offices across the country make determinations as to whether direct farm loan program loans are in bankruptcy, forbearance/appeals, or foreclosure and therefore should be excluded from referral to FMS. The status of these loans changes over time, and information on the loans must be updated as changes occur if exclusion determinations are to be continuously accurate. Our review of selected excluded loans indicated that personnel in the FSA field offices we visited did not routinely update the eligibility status of farm loan program loans in FSA’s Program Loan Accounting System. Without up-to-date information on loan status, the system cannot accurately identify which loans are eligible for referral. One of the most frequently identified inappropriate exclusions pertained to amounts that had been discharged in bankruptcy, which should not have been included in delinquent debt. 
Farm loan managers in some of the FSA field offices we visited said they had not closed out many direct farm loan program loans discharged in bankruptcy because making new loans has been a higher-priority use of their resources. In addition, FSA did not provide sufficient oversight to help ensure that field office personnel adequately tracked the status of discharged bankruptcies and updated the loan files and debt records in the Program Loan Accounting System. Delays in promptly closing out discharged bankruptcy debts not only distort the TROR for debt management and credit policy purposes, but also distort key financial indicators such as receivables, total delinquencies, and loan loss data. The information is therefore misleading for budget and management decisions and oversight. Aside from erroneously inflating reported loans receivable and delinquent loan amounts, failure to process closed-out debts delays the agency’s reporting of those amounts to the Internal Revenue Service as income to the debtor. FSA suspended cross-servicing referrals in April 2000 pending development of guidelines implementing a new policy to refer only debts less than 6 years delinquent to FMS for cross-servicing. According to agency officials, FSA adopted the new policy in response to discussions they had with Agriculture’s Office of the General Counsel that addressed a conflict between Farm Loan Program regulations and FMS policy. These officials stated that the Office of the General Counsel decided that FSA must adhere to Farm Loan Program regulations, which specify a 6-year delinquency limit for cross-servicing referrals, despite the fact that, according to FMS officials, FMS accepts debts for cross-servicing that are more than 6 years delinquent. In July 2001, FSA issued revised guidelines to implement the new policy and is now reviewing loans at more than a thousand FSA field offices to determine the loans’ eligibility for referral under the new policy. 
According to an Agriculture official, FSA made the first referral under the new policy in September 2001. Agency officials told us they eventually plan to make cross-servicing referrals quarterly but will refer delinquent loans more frequently until the backlog resulting from the referral suspension is cleared. According to data provided by FSA officials, about $400 million of new delinquent debt became eligible for TOP during calendar year 2000. FSA officials stated that the debts became eligible relatively evenly throughout the year, but the agency refers debts eligible for TOP only once annually, during December. Consequently, a large portion of the $400 million of debt likely was not promptly referred when it became eligible. As we have previously testified, industry statistics have shown that the likelihood of recovering amounts owed on delinquent debt decreases dramatically as the age of the debt increases. Thus, the old adage that “time is money” is very relevant for referrals of debts to FMS for collection action. FSA officials told us that the agency agrees that quarterly referrals could enhance collection of delinquent debts and is working on automated system modifications to refer debts quarterly to TOP. FSA plans to have a quarterly referral process ready for implementation in August 2002. Guaranteed farm loan program loans—as well as related losses—have been significant since the enactment of DCIA in 1996. The outstanding principal due on guaranteed farm loan program loans was about $8 billion as of September 30, 2000; as of that date, FSA had paid out about $293 million in losses on guaranteed farm loan program loans since fiscal year 1996. Since DCIA’s enactment, FSA has referred none of its losses on guaranteed farm loan program loans to FMS for collection action. 
According to FSA officials, the agency could not pursue recovery from guaranteed farm loan program debtors or use DCIA debt collection tools because under the guaranteed farm loan program, no contract existed between these debtors and FSA. As a result, the agency did not recognize the losses that it paid to guaranteed lenders as federal debt and did not apply DCIA debt collection remedies to them. In June 2000, Agriculture’s Office of Inspector General reported that FSA was not referring its losses on guaranteed farm loan program loans to FMS for collection and identified the need for FSA to recognize the losses as federal debts and begin referring them to FMS for collection action. However, as of September 30, 2000, FSA still had no policies and procedures to recognize losses on guaranteed farm loan program loans as federal debts and to refer such debts to FMS for TOP and cross-servicing. As a result, FSA has missed opportunities to collect millions of dollars that the agency has paid to lenders to cover guaranteed losses. FSA officials told us that the agency has revised the loan application forms applicable to guaranteed loans made after July 20, 2001, to include a section specifying that amounts FSA pays to a lender as a result of a loss on a guaranteed loan constitute a federal debt. FSA expects that software needed to implement the revisions to the Guaranteed Loan Accounting System should be completed around mid-2002 and in place before any loss claims are paid on guaranteed loans made after July 20, 2001. As of September 30, 2000, FSA had excluded $732 million of delinquent loans from referral to FMS for TOP. FSA cited bankruptcy, forbearance/appeals, foreclosure, and DOJ litigation as the reasons for about $694 million, or 95 percent, of these exclusions. About $295 million of the exclusions were judgment debts. 
As we noted earlier, FSA excluded all judgment debts from referral because of automated system limitations, despite the fact that deficiency judgment debts are eligible for referral. We also noted that we found exclusion errors caused by FSA’s failure to ensure that loan status was routinely updated. As a result of inappropriate exclusions and exclusion errors, FSA failed to maximize its collection of delinquent loans and provided inaccurate TROR data to federal agencies that rely on such information for policy and oversight purposes. Using statistical sampling, we selected 15 FSA field offices in California, Louisiana, Oklahoma, and Texas—the four states with the highest dollar amounts of debt excluded from TOP. We reviewed supporting documents for all 263 loans from these offices that were more than 180 days delinquent and had been excluded from referral to FMS as of September 30, 2000, to determine the extent to which exclusions in the four states were consistent with established criteria for excluding loans in bankruptcy, forbearance/appeals, foreclosure, and DOJ litigation. Based on the results of our review, we estimate that as of September 30, 2000, FSA had inappropriately placed about 575 loans, or approximately half the excluded loans in the four selected states, in exclusion categories. As part of our sample, we reviewed supporting documents for 52 bankruptcies that had been discharged before September 30, 2000. In fact, many had been discharged several years before that date. For example, one loan with a balance due of about $325,000 was reported as more than 180 days delinquent and had been excluded from referral because of bankruptcy. Our review of the loan file at the FSA field office showed that a bankruptcy court had discharged the debt in 1986. Therefore, the debt should not have been included in either the delinquent debt amount or exclusion amount reported to Treasury as of September 30, 2000. 
Because of the large number of errors we found in the bankruptcy, forbearance/appeals, foreclosure, and DOJ litigation exclusion categories, we did not test other reported exclusions from referral to FMS for cross-servicing, such as loans being internally offset. Although DCIA was enacted in 1996, FSA continues to face major obstacles to complying fully with the act. FSA lacks sufficient processes and controls to adequately identify and promptly refer all direct farm loan program loans eligible for referral to FMS. Automated system limitations, which have existed for years and have delayed FSA's compliance with the act, have still not been corrected, even though they have prevented referral and potential collection of substantial amounts of eligible delinquent debt. The failure of FSA field offices to routinely update delinquent loan information has led to erroneous exclusions from referral and inaccurate reporting of debt to Treasury. FSA's policy of referring debts to TOP only once a year has allowed debts to age unnecessarily and has likely reduced their collectibility. FSA has only recently taken action to establish procedures to refer losses on guaranteed loans to FMS; therefore, opportunities to collect on losses of about $300 million since DCIA was enacted may have already been lost. If FSA is to make significant progress in collecting on millions of dollars of delinquent farm loan program loans, the agency must give higher priority to fully complying with the debt collection provisions of DCIA. To improve FSA's compliance with DCIA, we recommend that the secretary of agriculture direct the administrator of FSA to take the following actions: Develop and implement automated system enhancements to make the Program Loan Accounting System capable of identifying all judgment debts eligible for referral to FMS for collection action. In the interim, continue with the manual project to identify judgment debts eligible for referral to FMS. 
Monitor planned system enhancements to the Program Loan Accounting System to ensure that capacity to record and use codebtor information is available and implemented by December 2002. Develop and implement oversight procedures to ensure that FSA field offices timely and routinely update the Program Loan Accounting System to accurately reflect the status of delinquent debts. Aside from requirements for database integrity, this is critical to determining allowable collection action, including whether debts are eligible for referral to FMS for collection action. Develop and implement oversight procedures to ensure that all debts discharged through bankruptcy are promptly closed out and reported to the Internal Revenue Service as income to the debtor in accordance with the Federal Claims Collection Standards and Office of Management and Budget Circular A-129. Monitor effective completion of the planned automated system modifications to refer eligible debt to TOP on a quarterly, rather than annual, basis by August 2002. Monitor planned system enhancements to the Guaranteed Loan Accounting System to ensure that the software is completed that is needed to implement the revisions to the loan application forms to establish guaranteed loan losses as federal debt. Once guaranteed loan losses are established as federal debt and are deemed eligible for referral to FMS, timely refer such debt to FMS for collection action in accordance with DCIA. In written comments on a draft of this report, the administrator of FSA generally agreed with our findings and recommendations. The administrator stated that FSA has developed an aggressive action plan to implement the remaining DCIA provisions mentioned in our report by December 31, 2002. FSA’s letter is reprinted in appendix II. 
While FSA agreed with our finding that it had inappropriately placed several loans in various exclusion categories allowed by DCIA, it disagreed with our estimated error rate of about 50 percent in the sample population of 1,187 loans. FSA stated that its own internal review of 967 loans in the four states that were included in our review resulted in an error rate of 35.7 percent. Our sample was statistically selected and resulted in a valid projected error rate of about 50 percent for the states covered by our test work. To substantiate our work for each error identified during our testing, we asked FSA farm loan managers to sign a statement as to whether they agreed with the GAO sample results and conclusion that the exclusion was inappropriate. In all but 3 of the 113 errors we identified, the managers agreed with our conclusions and, as a result, said they planned to take action to correct the errors. Since the FSA review was performed subsequent to our tests, we cannot comment on the validity of FSA’s internal assessment of the reported results. In addition, since many of the loans in our sample had been inappropriately excluded for years, corrections made subsequent to our testing but prior to FSA’s review would likely have resulted in a lower error rate at the time of FSA’s work. In any case, it is important to note that the 35.7 percent error rate cited by FSA from its internal assessment is still unacceptable, and we remain firm in our recommendation that FSA develop and implement oversight procedures to ensure that FSA field offices timely and routinely update the Program Loan Accounting System to accurately reflect the status of delinquent debts. FSA also took issue with our report’s reference to possible missed collection opportunities. It stated we had not given FSA sufficient credit for collections totaling millions of dollars of delinquent debt using various collection tools. 
Our point is that FSA’s mentioned successes could have been much greater had it made DCIA a higher priority and thus implemented certain key provisions much sooner. Our position remains unchanged. The details in the body of our report demonstrate lack of adequate progress. Most important, 5 years after the passage of DCIA, FSA had not yet established an adequate framework or systems capacity to effectively carry out its responsibilities for collecting large sums of delinquent debt. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the chairmen and ranking minority members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and to the ranking minority member of your subcommittee. We will also provide copies to the secretary of agriculture, the inspector general of the Department of Agriculture, the administrator of the Farm Service Agency, and the secretary of the treasury. We will then make copies available to others upon request. If you have any questions about this report, please contact me at (202) 512-3406 or Kenneth R. Rupar, assistant director, at (214) 777-5714. Key contributors to this report are listed in appendix III. We first identified the four states (Texas, California, Louisiana, and Oklahoma) with the highest dollar amounts of debt excluded from TOP. From the four states, we drew a multistage cluster sample of 15 field offices (population 123) using probability proportionate to size, a sampling method in which larger clusters (in this case, offices) have a higher probability of being selected than smaller clusters. Our debt population consisted of all FSA debt more than 180 days delinquent that had been excluded from referral to Treasury as of September 30, 2000. We reviewed all excluded debt (263) at the 15 sample offices. 
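The two-stage estimate described in this appendix can be sketched in a few lines of Python. This is only an illustrative sketch: the per-office debt counts below are invented, and the only figures taken from the report are the population total of 1,187 excluded debts, the 48.5 percent point estimate, and the 15.7 percent margin of error.

```python
import random

random.seed(0)

# Stage 1: draw 15 of the 123 field offices with probability
# proportionate to size (PPS), where "size" is each office's count of
# excluded debts. These sizes are made up purely for illustration.
office_sizes = {f"office_{i}": random.randint(1, 40) for i in range(123)}

def pps_sample(sizes, n):
    """Draw n clusters with replacement, probability proportional to size."""
    names = list(sizes)
    return random.choices(names, weights=[sizes[k] for k in names], k=n)

selected = pps_sample(office_sizes, 15)  # Stage 2 reviews every debt at each

# Projecting the sample error rate to the population of 1,187 excluded
# debts, using the report's estimate of 48.5 percent +/- 15.7 percent.
population = 1187
rate, margin = 0.485, 0.157
low, high = rate - margin, rate + margin
print(round(rate * population))                            # about 576 debts
print(round(low * population), round(high * population))   # roughly 389 to 762
```

The rounded interval here comes out to roughly 389 to 762 debts; any small difference from the report's 389-to-761 range is likely a rounding effect.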
Table 2 identifies the four states selected, the number of offices selected in each state, the number of excluded debts at the selected offices in each state, and the number of errors found at the selected offices in each state. Based on our review, we estimate that 48.5 percent ± 15.7 percent of the population were inappropriately excluded from Treasury referral. When projecting these errors to the population of 1,187, we are 95 percent confident that the errors in the population are from 389 to 761 debts. Table 3 shows the two-stage probability proportionate to size cluster sample results. Other key contributors to this report were Arthur W. Brouk, Sharon O. Byrd, Richard T. Cambosos, Michael D. Chambless, Michael S. LaForge, and Gladys E. Toro. 
The Debt Collection Improvement Act of 1996 seeks to maximize the collection of billions of dollars of nontax delinquent debt owed to the federal government. The act requires agencies to refer eligible debts delinquent more than 180 days to the Department of the Treasury for payment offset and to Treasury or a Treasury-designated debt collection center for cross-servicing. The Treasury Offset Program includes the offset of benefit payments, vendor payments, and tax refunds. Cross-servicing involves locating debtors, issuing demand letters, and referring debts to private collection agencies. The Farm Service Agency (FSA) has initiatives to ensure the timely referral of all delinquent debt. However, the agency's failure to make the act a priority has left key provisions of the legislation unimplemented and has severely reduced opportunities for collection. FSA lacks effective procedures and controls to identify and promptly refer eligible delinquent debts to Treasury for collection action. GAO identified several obstacles to FSA's establishment and implementation of an effective and complete debt-referral process. In the four states with the highest dollar amounts of federal debt excluded from the Treasury Offset Program, GAO reviewed FSA's use of exclusions from referral requirements because of bankruptcy, forbearance/appeals, foreclosure and Department of Justice litigation. GAO found that about half of the exclusions in these states were inconsistent with established criteria.
Under the U.S. export control system, agencies expect companies to be responsible for determining whether the items or information they intend to export are controlled by the government's export control regulations and for implementing procedures to protect and control their transfer. The corresponding regulations are designed to keep specific military and dual-use items and technologies from being diverted to improper end users. These export control regulations, initially established more than 30 years ago, aim to balance national security, foreign policy, and economic interests. In today's global economy, U.S. companies' exchanges of technology and information occur with ease and include the transfer of export-controlled technologies to foreign nationals through routine business practices, such as the transmission of a data file via an e-mail sent from a laptop computer, cell phone, or personal digital assistant; the use of company electronic networks to make intra-company transfers of information to overseas subsidiaries or affiliates; visual inspection of U.S. equipment and facilities during company site visits; e-commerce transactions, such as sales of software over the Internet; and oral exchanges of information when foreign nationals work side-by-side with U.S. citizens. See figure 1 for an illustration of various types of exchanges of export-controlled information in relation to the export of goods. While an export often involves the actual shipment of goods or technology out of the U.S., under Commerce's and State's export control regulations, transfers of U.S. export-controlled information to foreign nationals within the U.S. are also considered to be an export to the home country of the foreign national and thus may require an export license. For export control purposes, the term "foreign national" includes any person who is not a U.S. citizen or lawful permanent resident. The U.S. 
government's controls on the export of defense-related items are primarily divided between the departments of Commerce and State, with the assistance of the Department of Defense (DOD). Department of Commerce: Commerce, through its Bureau of Industry and Security (BIS), controls the export of dual-use items and information primarily through implementation of the Export Administration Act. Commerce's Export Administration Regulations (EAR) establish the Commerce Control List, which generally contains detailed controls for dual-use items. BIS has two branches: Export Administration and Export Enforcement. Export Administration is responsible for processing export license applications and for outreach and counseling efforts to help ensure exporters' compliance with the EAR, as well as monitoring certain license conditions to determine exporters' compliance with those conditions. Export Enforcement investigates alleged dual-use export control violations and coordinates its enforcement activities with other federal agencies, such as the Department of Justice's Federal Bureau of Investigation (FBI) and the Department of Homeland Security's Customs and Border Protection (CBP). Department of State: State, through its Directorate of Defense Trade Controls (DDTC), regulates exports of defense items and information under the authority of the Arms Export Control Act. State's International Traffic in Arms Regulations (ITAR) provides controls over defense articles and services, which are identified in broad categories on the U.S. Munitions List (USML). DDTC works to implement and enforce these laws and regulations using three key offices: Licensing, Compliance, and Policy. The Office of Licensing is responsible for reviewing license applications and addressing correspondence from exporters, such as providing advice on questions to businesses, known as advisory opinions. 
The Office of Compliance checks for company violations of the export regulations and, to achieve this goal, conducts end-use checks on exports and makes company visits. The Policy Office provides training, through a third-party organization, and outreach to companies on the export regulations. DOD: The Defense Technology Security Administration (DTSA) represents DOD on export control issues and administers development and implementation of technology security policies for the international transfers of defense-related goods, services, and technologies that DOD oversees. DTSA serves an advisory role in State's and Commerce's export license review processes and offers technical reviews of licenses for national security concerns. DTSA may also provide guidance regarding commodity jurisdiction requests from State, and DTSA often issues advice regarding advisory opinions submitted to both State and Commerce. The agency is responsible for maintaining contact with industry regarding changes in technologies and licensing initiatives. DTSA plays a significant role in coordinating any proposed changes to the ITAR or EAR, with DTSA's opinion serving as the final DOD position regarding such matters. Recent congressional hearings and intelligence reports have highlighted threats to U.S. companies' sensitive information—such as intellectual property, trade secrets, and financial data—from foreign economic and military surveillance and the associated challenges of balancing U.S. security and economic interests. These threats may weaken U.S. military capability and hinder U.S. industry's competitive position in the world marketplace. According to a recent counterintelligence estimate, factors that have contributed to U.S. economic and technological success have also facilitated foreign entities' technology acquisition efforts. 
For example, the openness of the United States has provided foreign entities easy access to sophisticated technologies; new electronic devices have vastly simplified the potential for illegal retrieval, storage, and transportation of massive amounts of information, including trade secrets and proprietary data; and information systems that create, store, process, and transmit sensitive information have become increasingly vulnerable to hacking attempts. The challenges to the government in protecting export-controlled information at companies are interrelated with the challenges we previously reported that the departments of Commerce, State, and Defense face in overseeing the export of controlled technologies in today's rapidly evolving international security and business environments. For example, in June 2006, we reported that Commerce had not systematically evaluated the overall effectiveness and efficiency of its dual-use export control processes to determine whether it was meeting its goal of protecting U.S. national security and economic interests in the wake of the September 2001 terror attacks. In 2005, we reported that State had not made significant changes to its arms export control regulations in response to the terror attacks. U.S. government export control agencies have less oversight on exports of controlled information than they do on exports of controlled goods. Commerce's and State's export control requirements and processes—such as export documentation, reporting requirements, and monitoring—provide physical checkpoints on the means and methods companies use to export controlled goods to help them ensure such exports are made under their license terms, but the agencies cannot easily apply these same requirements and processes to exports of controlled information. Consequently, U.S. export control agencies rely on individual companies to develop practices for the protection of export-controlled information. 
Officials from one third of the companies we interviewed told us they do not have internal control plans to protect their export-controlled information. Government export control processes provide physical checkpoints for the export of goods, but the same checkpoints are not easily applied to electronic and other intangible transfers of export-controlled information. Both Commerce and State oversee exports of goods and information—regardless of their form or method of transfer—through their licensing and compliance programs. Both agencies' programs require companies to apply for export licenses under their respective regulations and to keep records on such exports for possible agency monitoring and inspection. However, certain export documentation, agency reporting requirements, and agency monitoring processes for exports of controlled goods are not easy or practical to apply to the oversight of exports of information, which limits the agencies' ability to monitor exports of licensed controlled information. Means of Transportation or Transfer Reported on Export Documentation: When shipping a controlled good overseas, a company is generally required to file a Census Bureau Shippers' Export Declaration (SED) form with CBP, within the Department of Homeland Security. Companies generally are required to file the SED form for every export made under a specific license, which requires companies to specify the method of transportation for the exported goods, such as vessel or air. However, exports of controlled information transmitted electronically or in an otherwise intangible form are specifically exempted from SED filing. Commerce and State export license applications require exporting companies to report the name of the freight forwarder or other agents to be used for the shipment of goods, which provides the agencies with some oversight on how companies intend to conduct such exports. 
However, agency export license applications do not require companies to report information on the means of transmission they intend to use to transfer export-controlled information. In the absence of information on the means of transmission used to export controlled information, Commerce and State lack information that could help provide some level of oversight, as they have for physical shipments of goods. Agency Reporting Requirements: Certain agency reporting requirements for goods do not apply to export-controlled information. Companies are generally required to present the SED form before any export. As previously described, the SED form is not required for electronically transmitted export-controlled information. Further, companies are not otherwise required to notify Commerce when exports of licensed controlled information take place. While in certain circumstances State requires companies to notify it when they transmit licensed export-controlled information, this requirement only applies to the first instance of transfer. Beyond these notifications, Commerce and State cannot be sure that all exports of controlled information under the license are made to the designated end-user and are within the terms of the license approval. Agency Monitoring: Commerce and State monitor exports to help ensure company compliance with license requirements and to assess industry areas where export licenses may be required. However, the two agencies' efforts focus on export-controlled goods, and not information, due in part to the nature of transfers of export-controlled information, which makes elements of agency monitoring processes inapplicable. For goods, the SED can be used to aid the government in tracking exported goods and determining whether they reach the specified end-user. The SED also provides a feedback mechanism, which the lead export control agencies may use to measure the effectiveness of their activities and processes.
A similar feedback mechanism does not exist for export-controlled information transmitted electronically and by other intangible methods. Since the agencies cannot completely monitor these exports, their reliance on companies to implement control mechanisms becomes increasingly important for protecting export-controlled information. For example, Commerce and State do not systematically monitor whether companies abide by the conditions of their “deemed” export licenses, which permit the transfer of export-controlled information to specific foreign nationals. Consequently, agencies have no way of knowing if all licensed export-controlled information was exported according to the terms of the license—for example, if it was sent within the permitted time period, if the information exported was appropriate, and if the export reached its intended end-user. In 2002, we recommended that Commerce—in consultation with the Secretaries of Defense, State, and Energy—establish a risk-based program to monitor compliance with deemed export license conditions. Commerce officials told us they recently completed a limited pilot program to monitor company compliance with deemed exports and did not find any compliance issues in the sample of deemed export licenses they reviewed. However, Commerce officials told us that this pilot did not address the issue of export-controlled information transferred by electronic means, such as e-mail, and that they have not decided whether they will perform similar monitoring efforts on an annual basis. Table 1 provides an overview of the key agency checkpoints generally related to export-controlled goods and information. Under the U.S. export control system, companies are responsible for implementing procedures to protect export-controlled information regardless of how it is exported. 
From our discussions with officials from 46 companies, we found a range of company practices for protecting export-controlled information, including the use of internal control plans, limits on employee access, and computer security technologies. Almost two thirds of the company officials we interviewed told us their companies use internal control plans, which establish procedures to protect proprietary and export-controlled information and also set requirements for access to such material by foreign employees and visitors. However, other companies we interviewed exported controlled information or employed foreign nationals but had not yet developed internal control plans for such transactions. While Commerce and State generally do not require companies that export controlled information to use such plans, an industry report on export control best practices includes internal control plans as a best practice to safeguard export-controlled products and technologies against improper access by foreign nationals—employees, customers, and visitors. For example, companies can use internal control plans to provide specific procedures and processes addressing physical and computer access to export-controlled information, such as employee badging; record-keeping procedures for all relevant export-related documents; internal audits of export transactions; and the use of electronic surveillance, such as hidden cameras, where appropriate, for physical security. Almost half of the company officials we interviewed told us they encounter uncertainties when determining what measures should be included in their internal control plans to help ensure the proper protection of export-controlled information. Officials from larger companies who expressed such concerns added that these uncertainties may be magnified in smaller companies due to their inexperience with export regulations, a point confirmed by officials from five small companies we interviewed.
In addition to the companies' stated use of internal control plans, we found companies also had practices governing employee and foreign national access to export-controlled information. Examples include the following: Two thirds of the companies indicated that all employees—including foreign nationals—wear identification badges that contain information such as a picture, a color code indicating the employee's security clearance, and encoded data that allows access only to those areas authorized for the employee. About three fifths of the companies we interviewed indicated that they protect export-controlled information by storing it within restricted components of the company's computer server and requiring employees to gain permission through a network administrator before obtaining access to such information. Some companies also use information security protections for their electronic transfers of export-controlled information. More than two fifths of the companies we interviewed use encryption, an information technology process that encodes data files, making them inaccessible without the appropriate key to decipher them. Neither Commerce's nor State's regulations require companies to use encryption when transferring export-controlled information. According to the International Organization for Standardization, a nongovernmental organization that provides technical standards to the public and private sectors, organizations should consider using some form of encryption when transferring sensitive information. Commerce and State export control officials told us they do not specifically recommend that companies use encryption for various reasons, such as the agencies' inability to keep current on rapid developments in this field and possible liability issues surrounding their recommendation of a particular encryption product for e-mail security.
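The basic mechanics that make encrypted transfers resistant to interception and tampering can be illustrated with a deliberately simplified, standard-library-only Python sketch. The function names, the one-time-pad scheme, and the sample message below are ours for illustration only; a company would use a vetted, standards-based encryption product, which is precisely the kind of endorsement the agencies decline to make.

```python
import hashlib
import hmac
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy one-time-pad encryption with an HMAC integrity tag.

    Illustrative only: the key must be at least as long as the message
    and must never be reused."""
    if len(key) < len(plaintext):
        raise ValueError("key must be at least as long as the message")
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return tag + ciphertext

def decrypt(blob: bytes, key: bytes) -> bytes:
    """Verify the integrity tag, then reverse the XOR to recover the message."""
    tag, ciphertext = blob[:32], blob[32:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: message altered in transit")
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# Example: protect a controlled technical-data file before e-mailing it.
key = secrets.token_bytes(64)            # shared with the recipient out of band
message = b"controlled technical data"
protected = encrypt(message, key)
assert decrypt(protected, key) == message
```

The HMAC tag also lets the recipient detect whether the transmission was altered in transit, a concern given the hacking attempts companies described.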
Our review of selected companies' internal control practices highlights how uneven practices can contribute to vulnerabilities in the protection of export-controlled information. For example, officials from three of the companies we interviewed told us that they exported controlled information—through electronic transmissions or interpersonal interactions with foreign nationals—but that they did not have technology control plans providing company-wide policies and procedures to limit their foreign national employees' access to export-controlled information. Moreover, companies that manufacture or research sensitive export-controlled technologies are required to register with the government even if they are not planning to export. In situations such as these, the extent of a company's internal control practices could affect its vulnerability. For example, an official at a nanotechnology company that intended to export technology in the immediate future told us a former Chinese foreign national employee had full electronic access to the same sensitive company information as its U.S. employees. The official also told us this foreign employee was not physically segregated from any portions of the company facilities or lab where more sensitive technology functions were performed. Under these circumstances, we believe the company official could not have determined whether the employee improperly accessed company information that potentially could be export-controlled. The lead government agencies have not fully assessed the risks of protecting export-controlled information to help identify the minimal level of protection for such exports.
Commerce and State do not strategically use existing resources, such as export license data, to identify potential risks when such information is exported and are not fully aware of the consequences of companies using a variety of measures for protecting export-controlled information. Such analysis is critical because government export control processes provide less oversight for export-controlled information than for exports of goods. Improved knowledge of the risks associated with such exports could improve agency outreach and training efforts, which now offer limited assistance to companies to mitigate risks when protecting such information. Commerce and State have not strategically used existing information resources, such as export license data, to identify possible vulnerabilities and risks related to company protection of export-controlled information for use in oversight of such exports. GAO has identified managing risk both as an emerging area of high risk for the government and as a part of the governance challenges for the 21st century. Commerce and State do collect a range of basic information on company exports, some of which could prove valuable in understanding exports of controlled information, such as the technologies exported and their end-users. However, neither Commerce nor State has implemented systematic risk-assessment practices for its oversight of export-controlled information. Applying systematic risk-based strategies to export-controlled information could enable Commerce and State officials to focus their resources on information exports that may pose a higher risk to national security. As shown in figure 2, risk management aims to integrate systematic concern for risk into the usual cycle of agency decision-making and implementation. Threat, vulnerability, and criticality are frequently used aspects of risk assessment. Our internal control standards state that once risks have been identified, they should be analyzed for their possible effects.
Our standards also state that because economic and industry conditions continually change, entities should provide mechanisms to identify and deal with any special risks prompted by such changes. Risk analysis generally includes estimating the risk's significance, assessing the likelihood of its occurrence, and deciding how to manage the risk and what actions should be taken. The threats to the protection and transfer of export-controlled information include the inadvertent exposure of such information to unauthorized foreign parties as well as foreign economic espionage. For example, several of the larger defense and commercial companies we interviewed told us their computer networks are routinely subject to hacking attempts by individuals seeking to steal or corrupt information, which officials said can number in the hundreds daily. Currently, Commerce and State rely on companies to identify and protect export-controlled information whether it is transferred orally, electronically, or visually—or through traditional physical shipment methods used for goods, such as a courier transporting a compact disk containing export-controlled information to a customer. The vulnerability of export-controlled information may be increased by companies not using computer or physical security mechanisms that help protect against physical and electronic diversions during transmission. The consequences of such risks to export-controlled information may include the loss of sensitive information to foreign entities with interests contrary to our own as well as significant and costly civil and criminal penalties for violations of the export control regulations. At present, both agencies' approaches to conducting company compliance visits generally target specific industries and industry practices, but are not based on thorough knowledge of possible weaknesses and vulnerabilities in company protection of export-controlled information.
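The interplay of threat, vulnerability, and criticality described above is often combined into a single score that can be used to rank exports for oversight attention. The sketch below is our illustration of one common scoring model; the company names, technology labels, and 1-5 scales are hypothetical, not agency data or an agency method.

```python
from dataclasses import dataclass

@dataclass
class InfoExport:
    """A hypothetical record of a licensed export of controlled information."""
    licensee: str
    technology: str
    threat: int         # 1-5: known foreign-espionage interest in this technology
    vulnerability: int  # 1-5: weakness of the company's protection practices
    criticality: int    # 1-5: consequence to national security if the information is lost

    @property
    def risk_score(self) -> int:
        # A common multiplicative model: risk = threat x vulnerability x criticality.
        return self.threat * self.vulnerability * self.criticality

exports = [
    InfoExport("Company A", "nanotechnology", threat=5, vulnerability=4, criticality=5),
    InfoExport("Company B", "avionics software", threat=3, vulnerability=2, criticality=4),
]

# Rank exports so compliance visits go to the highest-risk transfers first.
for e in sorted(exports, key=lambda e: e.risk_score, reverse=True):
    print(e.licensee, e.technology, e.risk_score)
```

Under this model, a weakly protected technology under active foreign-espionage interest scores far higher than a well-protected, lower-consequence export, which is the kind of prioritization a systematic risk-based strategy would enable.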
Commerce officials told us the agency primarily conducts company visits based on company size and technology produced. Commerce officials also told us they target companies and industry associations based on a variety of other factors, including their analysis of license data and publicized company export control developments, such as announcements in local business newsletters reviewed by Commerce export officials. Through its company visit plan, State performs its company compliance visits based on general knowledge of topic areas its staff believe may be vulnerable to compliance problems and on discrete compliance issues, such as companies that employ foreign nationals. However, Commerce and State do not use available licensing data to strategically target both established and emerging business sectors to aid in their monitoring and oversight of exports of controlled information. For example, agency license databases and company records provide a pool of information that Commerce and State could analyze to help them discern trends in export-controlled information, such as identifying which companies are involved in cutting-edge commercial and military technology developments. Increased agency knowledge of the technology fields that involve transfers of export-controlled information and are known to be subject to foreign espionage would help strengthen agency oversight and may reduce such vulnerabilities. State and Commerce told us they perform company outreach and training visits as part of their oversight of company export control activities, but neither agency considers export-controlled information in determining which companies to visit. For example, State officials told us they conduct these visits only when requested by companies. Consequently, companies without knowledge of the export regulations would not know to request this additional assistance.
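The kind of license-data trend analysis described above can be sketched briefly. The record fields and values below are hypothetical stand-ins; the agencies' actual databases (such as ECASS and DETRA) hold far richer data.

```python
from collections import Counter

# Hypothetical license records of the sort an agency database might hold.
licenses = [
    {"company": "Company A", "product_group": "E", "technology": "nanotech"},
    {"company": "Company B", "product_group": "D", "technology": "encryption software"},
    {"company": "Company C", "product_group": "E", "technology": "nanotech"},
]

# Count licenses by technology to surface sectors with concentrated activity,
# which could help target outreach and compliance visits.
by_technology = Counter(rec["technology"] for rec in licenses)
for technology, count in by_technology.most_common():
    print(technology, count)
```

Even a simple aggregation like this, run against real licensing data, could flag emerging sectors for monitoring before compliance problems surface.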
Commerce officials told us the agency conducts over 100 company training seminars nationwide annually, on topics ranging from exporting primers to product classifications and deemed exports, for both novice and experienced exporters. These seminars are held in conjunction with local business cosponsors, and Commerce develops specific training topics to reflect the interests of local industry. Commerce officials told us they conduct a limited number of visits to specific companies as part of their company outreach, which are usually prompted by information and intelligence obtained through their compliance efforts. Such training and outreach is particularly important because we found during our company interviews that newly formed smaller businesses working in advanced technology areas were not as aware of the extent of their responsibilities to protect export-controlled information, and their officials suggested that their protection measures did not follow the best practices experienced exporters use to safeguard such information. Furthermore, in our prior work we recommended that Commerce and State better coordinate their efforts on analysis and export oversight. Government export control agencies use a variety of means—including Internet Web sites, advisory opinions, and company training—to communicate information on export controls to industry. However, we found that because these agency outreach and training efforts are not developed based on a thorough knowledge of the risks associated with such exports, they do not specifically address the protection of export-controlled information. Agency Internet Web sites: Commerce and State have Internet Web sites that provide the public information about the agencies' export control roles and responsibilities.
However, these Web sites do not communicate information such as industry best practices or identify specific protection measures companies could use to securely transfer export-controlled information electronically. For example, we found that while Commerce's Web site provides information to businesses on the Export Administration Regulations, such as frequently asked questions and guidance for deemed exports, it does not provide information on measures companies could use to protect the transmission of export-controlled information, such as encrypting e-mails used to transmit export-controlled information to a company's foreign subsidiary. State's Web site does not provide information or guidance to exporters on accepted practices for protecting export-controlled information and managing deemed exports, such as suggested security measures to implement when foreign employees work in close proximity to export-controlled information. Almost one fourth of the company officials we interviewed told us they would like additional guidance on export-controlled information posted on Commerce's and State's Web sites, such as agency-accepted employee training on export-controlled information. Commerce and State export control officials told us they have not provided such guidance on their Internet Web sites for reasons such as their inability to keep current on developments in these areas (for example, particular encryption standards) and possible liability issues related to recommending a particular protection measure. In 2004, the Office of Management and Budget (OMB) endorsed recommendations from the Interagency Committee on Government Information on guidelines to help make federal agency Web sites more user-friendly and to better enable companies to understand agencies' regulatory requirements.
These standards for agency Web sites include providing a list of frequently asked questions to users and Web links to other federal agencies that can provide additional information on a particular issue. State's Web site does not provide users with answers to frequently asked questions, such as common questions companies have on the export process. The State Web site also does not link to the Commerce Web site or provide information on best practices companies use to comply with the regulations. By providing this type of information on its Web site, State could enhance its communication with companies and alleviate company confusion surrounding the protection of export-controlled information. Advisory Opinions: As part of their export control activities, Commerce and State provide nonbinding advice to companies, called advisory opinions, on specific questions companies submit regarding the export regulations. Officials from about two fifths of the companies we interviewed told us they had submitted questions to the agencies regarding export-controlled information. However, under the Commerce and State advisory opinion programs, the agencies do not publicly share all agency responses to these requests for guidance and information, due to concerns about inadvertently releasing a company's proprietary information to the public as well as agency officials' judgment that such opinions do not have broad utility to the export community. From our review of Commerce's and State's export control activities, we found that while Commerce provides a few public examples of advisory opinions on its Web site that address deemed exports and the employment of foreign nationals, none specifically addresses the electronic transfer of export-controlled information. State officials told us State does not provide any advisory opinions to the public.
By publicizing their advisory opinions, Commerce and State could leverage their limited outreach resources and help a greater number of companies obtain clarifying information on agency policies regarding export-controlled information. Other federal agencies, such as the Department of Labor (DOL), share advisory opinions with the public on their Web sites but redact company proprietary information to protect identifying information. This allows other companies with similar questions to benefit from the additional agency guidance. One company export control official we interviewed suggested companies could submit two letters simultaneously to either Commerce or State to request advisory opinions on export control issues. In the first letter the company would include all necessary information to distinguish the export, so the agency could make an appropriate decision on the specific export control matter. In the second letter the company would redact all proprietary and company-identifying information, which the agency would be allowed to publicize to other companies. DOL uses this approach to relieve itself of the burden of identifying and redacting proprietary information from the advisory opinions it shares publicly. Agency Training on Export-Controlled Information: While Commerce and State provide export control training to companies, we found the agencies do not strategically target companies and industry sectors where the greatest risk of violations of the export regulations on export-controlled information may exist. While Commerce and State have significantly different approaches toward company training, neither offers specific training opportunities focusing exclusively on export-controlled information. Furthermore, officials from approximately 20 percent of the companies we interviewed told us agency training on export controls does not provide specific guidance to companies on the adequate protection of export-controlled information.
For example, these officials said agency training does not provide information protection options to companies, such as using dedicated communication lines for e-mail transmissions or limiting employee access to servers that contain export-controlled information. Company officials told us government-sponsored training does not target smaller companies new to the exporting process, which may not be familiar with the measures necessary to securely transfer export-controlled information. Furthermore, we found agency training, in particular State's training, is limited to specific geographic regions of the United States, which company officials stated hinders smaller companies with limited budgets from attending. Although State and Commerce have separate export control jurisdictions, the 2004 Interagency Offices of Inspector General report stated that Commerce and State could improve their outreach by providing joint training that explains the differences between the two agencies' licensing requirements and procedures—a recommendation that, according to the report, was shared by company officials. The globalization of the U.S. economy and its economic interdependence with the rest of the world has many dimensions. While the export of controlled information from U.S. companies to foreign business partners is a key component of maintaining a strong and developing economy, the improper export of such technology can be detrimental to U.S. security and economic interests. Developing effective oversight to help ensure the protection of export-controlled information poses a challenge to the federal agencies responsible for export control. These risks may increase as the electronic communications and information-transfer capabilities used by companies that export controlled information continue to grow.
Moreover, the lack of coordination between Commerce and State on outreach, analysis, and oversight could hamper their ability to determine whether export-controlled information may be at risk when foreign nationals are in U.S. company settings. Without properly leveraging available export license data, these agencies will not be able to fully understand and assess the potential risks associated with the export of controlled information and develop the proper protections and outreach to help mitigate those risks. Further, in the absence of guidance from the government, some U.S. companies may not fully understand these risks and the need for applying corresponding measures of protection. To improve the Department of Commerce's oversight of export-controlled information at companies, we recommend that the Secretary of Commerce direct the Administrator of the Bureau of Industry and Security to take the following actions: Strategically assess potential vulnerabilities in the protection of export-controlled information using available resources, such as licensing data, and evaluate company practices for protecting such information. Based on such a strategic assessment, improve interagency coordination with the Department of State in the following areas: (1) provide specific guidance, outreach, and training on how to protect export-controlled information and (2) better target compliance activities on company protection of export-controlled information. To improve the Department of State's oversight of export-controlled information at companies, we recommend that the Secretary of State direct the Director of the Directorate of Defense Trade Controls to take the following actions: Strategically assess potential vulnerabilities in the protection of export-controlled information using available resources, such as licensing data, and evaluate company practices for protecting such information.
Based on such a strategic assessment, improve interagency coordination with the Department of Commerce in the following areas: (1) provide specific guidance, outreach, and training on how to protect export-controlled information and (2) better target compliance activities on company protection of export-controlled information. We provided a draft of this report to the departments of Commerce, Defense, and State for their review and comment. Commerce and State provided written comments, which are reprinted in appendixes II and III, respectively. Defense did not have any comments on our draft report. Commerce generally agreed with our recommendations to assess potential vulnerabilities related to export-controlled information and to conduct more targeted outreach and compliance activities. Commerce, in its response, described planned and recent activities related to its oversight and outreach efforts on deemed exports, such as the Deemed Export Advisory Committee and increased export outreach and compliance activities. While these activities address some unique cases in which companies are required to have a Technology Control Plan (TCP) in place when employing foreign nationals, they do not fully address how to protect export-controlled information when it is transferred electronically and by other intangible means. As noted in our report, almost half of the company officials we interviewed told us they have difficulty determining the proper measures to protect export-controlled information. Commerce also cited a September 2006 American Society for Industrial Security trade association meeting where it addressed the protection of export-controlled information. Actions such as this, if conducted on a regular basis, could improve companies' understanding of how to protect export-controlled information in today's commonplace business transactions, such as e-mail, e-commerce exchanges, and intracompany transfers.
State agreed with our recommendation to improve guidance for exports of controlled information and disagreed with our report’s finding that it does not assess the potential vulnerabilities associated with export-controlled information. State responded that it recently tasked its Defense Trade Advisory Group to develop a best practice guide for industry on how to comply with the regulations. Such guidance, particularly if it addresses export-controlled information and is shared on State’s Web site, can help to improve companies’ understanding of accepted practices for protecting such information. Regarding its assessment of potential vulnerabilities associated with export-controlled information, State responded that its individual licensing and compliance activities strategically target its concerns related to exports of controlled technical data. State added that its assessments of the vulnerabilities and risks associated with export- controlled information form the basis for topics addressed at training events and industry conferences, as well as many regulatory changes. While State’s activities may help inform its individual licensing decisions and identification of specific companies for possible compliance visits, we found that State is not proactively using available information to strategically assess the vulnerabilities associated with the transfer of export-controlled information. For example, we found State does not use available data from its licensing activities to strategically target established and emerging business sectors to aid in its monitoring and oversight of exports of controlled information. These license data and company records provide a pool of information, which State could analyze to help discern trends in export-controlled information. 
Furthermore, State told us its outreach visits do not consider export-controlled information in determining which companies to visit, and we found that State's training does not provide specific guidance on export-controlled information. Broader assessments of the risks and vulnerabilities associated with export-controlled information will help the department identify ways to improve its oversight of these exports and its guidance to companies. We are sending copies of this report to appropriate congressional committees and to the Secretary of Commerce, the Secretary of Defense, and the Secretary of State. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or John Neumann, Assistant Director. Other major contributors to this report were Marie Ahearn, Patrick Baetjer, Jessica Berkholtz, Amanda Seese, Karen Sloan, Najeema Washington, and Anthony Wysocki. To assess how the government's export control processes apply to the protection of export-controlled information by U.S. companies, we analyzed the export control regulations, policies, and compliance practices of the Department of State and the Department of Commerce. Our analyses of the regulations included the review, comparison, and contrast of the Department of State's International Traffic in Arms Regulations (ITAR) and the Department of Commerce's Export Administration Regulations (EAR), identifying information pertinent to the export of controlled information via electronic means and other intangible transfers, or through foreign national access.
We also reviewed export control policies and practices within the Department of Defense, including proposed changes to the Defense Federal Acquisition Regulation Supplement (DFARS), to identify requirements related to export controls and foreign national access to sensitive information. We interviewed officials from the Defense Technology Security Administration (DTSA) to gain more information regarding the agency's activities as they relate to the export control practices and policies of Commerce and State. We interviewed agency officials from the Commerce Department's Bureau of Industry and Security (BIS) who perform export control-related functions, such as enforcement and administration. Within the State Department's Directorate of Defense Trade Controls (DDTC), we interviewed officials from the areas of licensing, compliance, and policy to obtain information on agency efforts to protect export-controlled information. We also analyzed information on existing data the lead agencies have at their disposal regarding the export of controlled information. To assess steps the government has taken to identify and mitigate risks in protecting export-controlled information, we analyzed Commerce's and State's use of existing resources, such as licensing data, to identify trends and vulnerable areas within company transfers of controlled information, and we assessed each agency's export control training and outreach programs. We examined the extent to which agency resources are leveraged to mitigate risks associated with the export of controlled information by reviewing other government-accepted forms of risk assessment. We reviewed our prior work on risk assessment, including the Federal Information System Controls Audit Manual and the Internal Control Management and Evaluation Tool. To assess Commerce's and State's export control training and outreach programs, we reviewed each agency's Web site and training materials issued by the agencies.
We assessed training seminars sponsored by the Departments of State and Commerce. Specifically, we reviewed information and practices used at Society for International Affairs (SIA) conferences, which State sponsors, and at BIS training seminars. We also reviewed the agencies’ methodologies for conducting company outreach visits. As part of our work, we attended several agency-sponsored export control training events aimed at increasing company knowledge of the export control regulations. To further address our objectives, we interviewed officials from 46 U.S. companies. We asked them how they protect export-controlled information through the use of internal controls. We reviewed, and in some instances obtained, various company export control-related documents, including internal control plans, technology control plans, training manuals related to export controls, and policies regarding the transfer of electronic controlled information, including when accessed by foreign national employees. We also asked company officials to share their views and experiences regarding government training and outreach pertinent to the area of export-controlled information. Company officials responded to our targeted questions regarding export-controlled information, including views on the effectiveness of government training seminars, the extent of content provided on agency Web sites, and the quality of advice provided on agency customer service telephone lines. We selected our sample of 46 companies from a universe of companies we developed to represent a wide variety of companies, industry types, and exporting experiences by analyzing the following sources and databases:

- Commerce Department’s Export Control Automated Support System (ECASS) export license database, looking specifically for companies that held licenses in the D (Software) and E (Technology) product groups, which are more likely to involve export-controlled information, for fiscal years 2000-2004.
- State Department’s Defense Trade Application (DETRA) licensing database, looking specifically for companies that held a permanent license for the export of technical data, which is more likely to involve export-controlled information, over fiscal years 2000-2004.
- DOD’s Contracting Action Report database (DD 350), for Research Development Test and Evaluation (RDT&E) contracts with small businesses, which are more likely to involve export-controlled information, for fiscal years 2000-2004.
- Commerce’s and State’s industry outreach, training, and advisory committee membership lists.
- Industry-specific company directories and our work with agency and industry experts.

To select companies from the universe that represented a range of company experiences, we applied selection criteria; specifically, companies had to meet at least one of the following:

- Held a Commerce Department ECASS export license in the D (Software) and E (Technology) product groups.
- Held a State Department DETRA permanent license for technical data.
- Held both Commerce and State export licenses. Specifically, the company held both the aforementioned Commerce Department ECASS export licenses and the State Department DETRA licenses.
- Exporter frequency. We classified a company as a high-, medium-, or low-frequency exporter based upon the number of export license applications it submitted to Commerce (for the ECASS D and E product group licenses) and to State (for DETRA permanent technical data licenses), using the following categories: high (800 or more licenses), medium (100-799 licenses), and low (1-99 licenses).
- Had a foreign employee presence. The company held Commerce and/or State export licenses for the export of controlled information to its foreign national employees, or conducted business with foreign subsidiaries or partners.
- Was a small business recipient of a DOD RDT&E contract for fiscal years 2000-2004.
- Were new exporters or potential exporters in the process of applying for an export license to either Commerce or State.

We did not generalize the information and findings we developed from our work with these 46 companies to the broader universe of all U.S. companies that export. We conducted this review from January through November 2006 in accordance with generally accepted government auditing standards.
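The exporter-frequency categories in the sampling methodology above amount to a simple bucketing rule. The sketch below is illustrative only; the function name and sample counts are hypothetical, with thresholds taken from the criteria described:

```python
def classify_exporter(license_count):
    """Bucket a company by the number of export license applications
    it submitted (thresholds from the sampling criteria described
    above: high = 800 or more, medium = 100-799, low = 1-99)."""
    if license_count >= 800:
        return "high"
    if license_count >= 100:
        return "medium"
    if license_count >= 1:
        return "low"
    return "non-exporter"  # no applications on record

# Hypothetical application counts for three companies
print([classify_exporter(n) for n in [1200, 250, 12]])  # → ['high', 'medium', 'low']
```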
The U.S. government controls exports of defense-related goods and services by companies, and the export of information associated with their design, production, and use, to ensure they meet U.S. interests. Globalization and communication technologies facilitate exports of controlled information, providing benefits to U.S. companies, and increase interactions between U.S. and foreign companies, making it challenging to protect such exports. GAO assessed (1) how the government's export control processes apply to the protection of export-controlled information, and (2) steps the government has taken to identify and help mitigate the risks in protecting export-controlled information. To do this, GAO analyzed agency regulations and practices and interviewed officials from 46 companies with a wide range of exporting experiences. U.S. government export control agencies, primarily the departments of Commerce and State, exercise less oversight over exports of controlled information than they do over exports of controlled goods. Commerce's and State's export control requirements and processes provide physical checkpoints on the means and methods companies use to export controlled goods, helping the agencies ensure such exports are made under their license terms, but the agencies cannot easily apply these same requirements and processes to exports of controlled information. For example, companies are generally required to file reports of their shipments of export-controlled goods overseas with Customs and Border Protection for exports made under a license, but such reporting is not applicable to the export of controlled information. Commerce and State expect individual companies to be responsible for implementing practices to protect export-controlled information. One-third of the companies GAO interviewed did not have internal control plans, which set requirements for access to such material by foreign employees and visitors, to protect export-controlled information.
Commerce and State have not fully assessed the risks of companies using a variety of means to protect export-controlled information. The agencies have not used existing resources, such as license data, to help identify the minimal protections for such exports. As companies use a variety of measures for protecting export-controlled information, increased knowledge of the risks associated with protecting such information could improve agency outreach and training efforts, which now offer limited assistance to companies to mitigate those risks. GAO's internal control standards highlight the identification and management of risk as a key element of an organization's management control program. GAO also found that Commerce's and State's communications with companies do not focus on export-controlled information. For example, Commerce's and State's Internet Web sites do not provide specific guidance on how to protect electronic transfers of export-controlled information, a point raised by almost one fourth of the company officials GAO interviewed.
According to the FAR, a wide selection of contract types is available to the government and contractors to allow flexibility in acquiring a variety of products and services. Contract types vary according to the degree and timing of the contractor’s responsibility for the costs of performance and by the amount and nature of the profit incentive offered to the contractor for meeting or exceeding specified goals. Contract types are grouped into two broad categories: fixed-price or cost-reimbursement contracts. Within these categories, the specific types range from firm-fixed-price (FFP), in which the contractor has full responsibility for the costs of performance and the resulting profit or loss, to cost-plus-fixed-fee (CPFF), in which the contractor has minimal responsibility for the costs of performance and the fee is a fixed dollar amount, established as a percentage of the estimated target cost at the start of the contract. In between these are incentive contracts, in which the contractor’s responsibility for costs and the profit or fee incentives offered are tailored to performance uncertainties. The FAR also notes that incentive contracts are appropriate when a firm-fixed-price contract is not, and the required items can be acquired at lower costs and possibly with improved delivery or technical performance by tying fee or profit to the contractor’s performance. Incentive and award fee provisions can be used together in the same contract, but each uses a different approach with respect to how performance is assessed and how fees or profits are determined. Incentive fees—For contracts with incentive fees or profits, the amount of fee or profit payable is related to the contractor’s performance. Incentive fees or profits generally focus on cost control, though they may be used to motivate performance toward specific delivery (e.g., schedule) targets or technical goals.
Incentive fees or profits involve an objective evaluation by the government through a process that is generally less administratively burdensome than award fee evaluations. The government usually applies a fee- or profit-determination formula that is specified in the contract to evaluate performance at the end of the contract or at program milestones. The formula may include a target cost, a target profit or fee, a ceiling price, and a profit- or fee-adjustment formula, sometimes referred to as a share ratio for fixed-price incentive (FPI) contracts (see figure 1). In the hypothetical example above, if the contractor incurred $100 in costs, the contractor would receive $10 in profit, and the government would pay the target price of $110. If the contractor kept costs under the $100 target cost, it would split cost savings equally with the government and receive a larger profit; if the contractor exceeded the cost target, it would share the additional costs with the government up to the ceiling price but earn a smaller profit. At ceiling, the contractor earns no profit, and the contractor is responsible for any costs incurred above the ceiling price. A cost-plus-incentive-fee (CPIF) contract reimburses the contractor for its allowable costs, but still uses a formula of total allowable costs to target costs to determine fee and includes a target fee instead of a target profit. A CPIF contract also has a minimum fee—the lowest fee the contractor may receive when total allowable costs exceed target costs—and a maximum fee—the highest fee the contractor may earn when total allowable costs are less than target costs—and, unlike an FPI contract, there is no ceiling price. Award fees—Award fees typically emphasize multiple aspects of contractor performance in areas that are more subjectively assessed, such as the contractor’s responsiveness, technical ingenuity, or cost management.
From the government’s perspective, development and administration of award fee contracts often involve substantially more effort over the life of a contract than incentive fee contracts, requiring government officials, through an award fee evaluation board, to conduct periodic evaluations of the contractor’s performance against specified criteria and to make recommendations on the amount of fee to be paid. Criteria are specified in an award fee plan, which contracting officials may revise from one evaluation period to another to redirect contractor emphasis. Following the award fee evaluation, a fee-determining official makes the final decision about the amount of fee paid to the contractor. Table 2 identifies the range of incentive contract types and their appropriate use based on acquisition regulations. We have previously identified issues with the use of incentive and award fees that called into question whether they were used effectively to achieve their intended purpose. In our December 2005 review of incentive and award fee contracts, we found that award fees were generally not linked to acquisition outcomes, and that DOD had paid an estimated $8 billion in award fees regardless of outcomes. In addition, we estimated that in 52 percent of the award fee contracts, DOD moved unearned award fees from one evaluation period to a subsequent period—a practice referred to as “rollover”—which provides contractors at least a second chance to earn fees after failing to perform well enough to earn them initially. We also found that DOD had not compiled data, conducted analyses, or developed performance measures to evaluate the effectiveness of incentive and award fees. 
We recommended that DOD apply more outcome-based award fee criteria, pay award fees only for above satisfactory performance, issue guidance on the appropriate use of rollover, develop a mechanism for capturing incentive and award fee data within existing data systems, and develop performance measures to evaluate the effectiveness of incentive and award fees at improving contractor performance and achieving desired outcomes. DOD concurred with two of these recommendations, and partially concurred with our recommendations to only pay award fees for above satisfactory performance, collect incentive and award fee data, and develop performance measures to evaluate the effectiveness of incentive and award fees. DOD implemented all but one of these recommendations—paying award fees only for above satisfactory performance—though a provision was later added to the FAR prohibiting payment of award fees for below satisfactory performance. In our May 2009 review of 50 DOD contracts containing award fees, we found that DOD had made progress toward minimizing payments of award fees for unsatisfactory performance, limiting overpayment for satisfactory performance, and reducing the number of programs that used rollover. DOD, however, still struggled to use data collected on award fee contracts to evaluate their effectiveness. We did not make new recommendations in these areas. Most recently, in a March 2017 report on selected FPI contracts awarded by the Navy for new ship construction, we found the Navy often structured the contracts such that it absorbed more cost risk than DOD’s regulation suggests, indicating it may not achieve the expected benefits of using the FPI contract type. For example, we found that 8 of 11 ships delivered under the contracts reviewed experienced cost growth. We recommended that DOD conduct a portfolio-wide assessment of the Navy’s use of additional incentives on FPI contracts across shipbuilding programs. DOD concurred with our recommendation.
In 2007, OMB issued government-wide guidance highlighting preferred practices for incentive contracting and directing agencies to review and update their acquisition policies. In 2009, the FAR was revised to implement legislative provisions and OMB’s guidance on the appropriate use of incentive contracts. These changes addressed some of the issues that we identified in 2005 and 2009 relative to the use of award fees. The FAR now prohibits rollover of unearned award fees from one evaluation period to the next; requires award fees to be linked to cost, schedule, and technical performance acquisition objectives; restricts payment of award fees in instances of unsatisfactory contractor performance; and requires agencies to collect relevant data on incentive and award fee payments and evaluate the effectiveness of these contract types in achieving desired outcomes. Since 2010, DOD has taken steps to improve its use of incentive contracts—often beyond what is required by the FAR—by revising the Defense Federal Acquisition Regulation Supplement (DFARS), instituting its Better Buying Power initiative, and developing new guidance and training courses. In particular, DOD has emphasized the use of objective incentives through FPI and CPIF contracts rather than award fees whenever possible, in part to better motivate contractors to control costs. These efforts are reflected in DOD’s reported use of incentive contracts since 2010, which indicates substantial growth in obligations for incentive fee contracts and a corresponding decrease in obligations for award fee contracts. DOD made changes to the DFARS and its accompanying Procedures, Guidance, and Information in recent years intended to support appropriate use of incentive contracts. For example, DOD made the following updates to the DFARS in 2011:

- Directed contracting officers to utilize objective criteria—associated with incentive fee contracts—to the maximum extent possible for measuring contract performance.
DOD noted concerns that award fee contracts have a limited ability to motivate contractors to control costs, and that there had been instances in which award fee payments were not consistent with outcomes.
- Directed contracting officers to give particular consideration to FPI contracts, especially for acquisitions moving from development to production or in contracts for which previous FFP contract costs had varied by more than 4 percent from negotiated costs. By looking at historical pricing and contract performance data, DOD officials stated they determined that in some FFP contracts, actual costs have come in noticeably lower than negotiated costs (e.g., by 4 percent or more), indicating that costs are not stable or that the government may not have negotiated a good deal when it awarded the contract.
- Directed contracting officers to include a contract clause prohibiting the payment of award fees when a contractor’s performance is rated below satisfactory, as required by the FAR, which emphasizes that the amount of award fee paid should correspond with the contractor’s performance.

DOD released memoranda between 2010 and 2015 through its Better Buying Power initiative, which focused, in part, on the use of incentive contracts. The Better Buying Power memoranda established a preference for FPI contracts and advised contracting officers to increase the use of this contract type, when appropriate, such as early in production and in single-source production where year-over-year price improvement can be rewarded. DOD acknowledged that some officials interpreted the first memorandum to mean that FPI contracts should be used to the exclusion of other contract types. As a result, subsequent memoranda advised officials to consider the full range of contract types and employ the appropriate type, while giving particular consideration to FPI and CPIF contracts.
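The 4-percent variance screen in the DFARS update described above amounts to a simple check on a prior FFP contract's costs. The sketch below is illustrative only; the function name and sample figures are hypothetical, with the threshold taken from the text:

```python
def ffp_variance_flag(negotiated_cost, actual_cost, threshold_pct=4.0):
    """Flag a prior FFP contract for FPI consideration when actual
    costs varied from negotiated costs by more than the threshold
    (4 percent in the 2011 DFARS update described above).

    Returns (variance in percent, whether the threshold was exceeded).
    """
    variance_pct = abs(actual_cost - negotiated_cost) / negotiated_cost * 100
    return variance_pct, variance_pct > threshold_pct

# Hypothetical prior contract: negotiated $50M, actual $46M (an 8% underrun)
print(ffp_variance_flag(50_000_000, 46_000_000))  # → (8.0, True)
```

Note the check uses the absolute variance, since either a large underrun or a large overrun suggests costs were not stable enough for a sound FFP price.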
In addition, Better Buying Power called for limiting the use of award fee contracts for services, noting that services acquisitions should be predisposed to FFP, CPFF, or CPIF contract types. It also instructed the military departments to provide a justification of contract type for proposed contracts over $100 million for major programs, and called for new DOD guidance on selecting contract types and employing incentive contracts. Subsequently, in April 2016, Defense Procurement and Acquisition Policy (DPAP) released Guidance on Using Incentive and Other Contract Types. This guidance provides direction on selecting contract types, structuring appropriate incentive arrangements, and negotiating target costs and share ratios with contractors. It explains that, when appropriately structured, incentive contracts can allow the government to share in cost savings, focus the contractor on the areas that are important to the government, and provide the government with valuable data on actual costs incurred. It also reinforces some key updates to the DFARS, such as emphasizing that objective criteria must be used whenever possible to measure contract performance. DOD’s Defense Acquisition University (DAU) is developing two new continuous learning courses to reinforce concepts reflected in the April 2016 guidance. According to DAU, all contracting personnel involved in using incentive arrangements will be encouraged to take these courses, though they are not required for Defense Acquisition Workforce Improvement Act certification, a process DOD uses to determine that acquisition officials meet certain standards. One course, Understanding Incentive and Other Contract Types, is currently available and provides training on how to align contract types and incentives with acquisition outcomes.
The second course is expected to be available in August 2017 to provide training on appropriate use of advanced incentive concepts, such as quantifying cost, schedule, and performance risks and incorporating that information into decisions on contract incentives. DAU also provides other courses with elements addressing aspects of incentive contracting. Program officials—who can be involved in selecting contract types and structuring incentives, according to a senior DPAP official—also undergo some training on contract types and incentives through DAU training courses. Further, to inform selection of contract type, contract negotiations, and projections of program and contract costs, DPAP and the Director of Defense Pricing have encouraged collaboration among contracting officials, program officials, and cost analysts to collect and share cost information with one another. DOD has also required contracting officers to share information through the Contract Business Analysis Repository (CBAR), which captures information to assist contracting officers in preparing for negotiations with contractors, such as contractor business systems status and compliance with cost accounting standards. Contracting officials use CBAR to upload and share contract negotiation documents, which, according to DOD, can help contracting officials benefit from others’ experiences, particularly when negotiating with the same contractor. Finally, DOD has used independent management reviews, or peer reviews, to advise contracting officers on selecting the appropriate contract type and structuring and negotiating contract incentives, among other topics. The Director of Defense Pricing and DPAP lead peer reviews to ensure that certain high-dollar acquisitions are carried out in accordance with applicable laws, regulations, and policies. According to senior DOD and military department officials, peer reviews provide an opportunity for DPAP leadership to share knowledge about key contracting decisions.
For example, one peer review advised the contracting officer to consider whether increasing the available fee under a CPIF contract would reduce costs to the government by incentivizing greater cost control from the contractor. DOD’s focus on incentive contracts is evident in many major defense acquisition program contracts. Based on information provided by DOD, as of January 2017 the department is using incentive contracts—either FPI or CPIF—on 65 of 78 major defense acquisition programs. The Army’s Patriot Advanced Capability-3 (PAC-3) program—which provides mobile defense against short-range ballistic missiles and other threats—offers an example of DOD’s use of FPI contracts in particular. According to DOD, recent contracts for PAC-3 reflect the department’s consideration of an FPI contract type for programs that previously used FFP contracts in which actual costs varied significantly from negotiated costs. Through an analysis of actual costs on prior production contracts for the PAC-3 program, DOD determined that the prime contractor was underrunning negotiated costs—that is, actual costs were lower—in these contracts in amounts ranging from 8 to 15 percent, triggering consideration of FPI type based on defense regulations. Consequently, after reviewing historical pricing data and applying lessons learned, Army contracting officials stated they were able to negotiate FPI contracts for missile production over fiscal years 2014 through 2016 that were a total of $860 million lower than the contractor’s initial proposals. Army officials stated that they plan to use cost information gathered through these contracts to determine the appropriate contract type and inform negotiations for future production contracts. Because these contracts are ongoing, however, actual costs and other outcomes have not yet been determined, though current costs for the fiscal year 2014 contract indicate it may result in a cost overrun, in which actual costs exceed target costs. 
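The underruns and potential overrun described above are settled through the FPI share-ratio mechanics introduced in the background: in the hypothetical there, target cost is $100, target profit is $10, savings and overruns are split equally, and profit reaches zero at the ceiling price. The sketch below is a minimal illustration of those mechanics, not DOD's pricing method; the $120 ceiling price is an assumed value chosen to be consistent with that hypothetical:

```python
def fpi_settlement(actual_cost, target_cost=100.0, target_profit=10.0,
                   contractor_share=0.5, ceiling_price=120.0):
    """Apply a 50/50 FPI profit-adjustment (share-ratio) formula.

    Final profit moves opposite to cost: the contractor gives up its
    share of every dollar of overrun (and keeps that share of every
    dollar of underrun), and the government never pays more than the
    ceiling price. Returns (price paid by the government, contractor profit).
    """
    profit = target_profit - contractor_share * (actual_cost - target_cost)
    price = min(actual_cost + profit, ceiling_price)
    # Above the ceiling, the contractor absorbs all further costs.
    return price, price - actual_cost

print(fpi_settlement(100))  # on target: price $110, profit $10
print(fpi_settlement(90))   # $10 underrun split equally: price $105, profit $15
print(fpi_settlement(120))  # at ceiling: price $120, profit $0
```

Under these assumptions, a cost of $130 yields a price of $120 and a $10 loss for the contractor, matching the description that the contractor is responsible for any costs incurred above the ceiling price.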
Our analysis of data from FPDS-NG found that obligations for incentive contracts ranged from about 13 to 22 percent of DOD’s total annual contract obligations from fiscal year 2005 through fiscal year 2015. In fiscal year 2015, incentive contracts accounted for nearly 18 percent of DOD’s total annual contract obligations. Consistent with DOD’s emphasis on using incentive fee contracts and decreasing the use of award fee contracts, our analysis of DOD’s reported contract obligations from fiscal years 2005 through 2015 shows a shift toward using incentive fee contracts (see figure 2). More specifically, our analysis found that the changes were largely driven by increased obligations for FPI contracts and decreased obligations for CPAF contracts (see figure 3). Among the specific products and services for which incentive contracts were used, we found the mix varied by contract type. Specifically:

- FPI contracts were mostly used to purchase products. In fiscal year 2015, three product categories accounted for almost 90 percent of FPI obligations: aircraft, ships/submarines, and land vehicles; weapons and ammunition; and sustainment supplies and equipment.
- CPIF contracts were used for a mix of products and services. Five categories—aircraft, ships/submarines, and land vehicles; research and development; equipment-related services; weapons and ammunition; and knowledge-based services, such as engineering, program management, and education and training—accounted for roughly three quarters of CPIF obligations in fiscal year 2015.
- Award fee contracts were mostly used for services, including facility-related, transportation, and equipment-related services.

DOD expects to achieve positive cost outcomes—with contractors’ estimated costs coming in lower than target costs—for most of the 21 selected incentive fee contract actions we were able to measure.
Overall, the estimated costs for the incentivized portions of these selected contract actions were about $30 million—or about 5 percent—below target costs. Among the contract actions we reviewed, schedule and technical performance incentives were included in multiple-incentive contracts. Officials reported good outcomes overall for the contracts with multiple incentives that we reviewed, but we could not isolate the effects of any particular schedule or technical performance incentive. The nine award fee actions in our sample, which were mainly used to procure services, did not allow for rollover and payments for unsatisfactory performance—both of which were issues we found in our prior work. DOD collects some information on incentive contracts, but it generally has not assessed the extent to which particular contract types or incentive arrangements have achieved cost, schedule, or technical performance goals. DOD expects contractors to underrun cost targets for incentive fee or profit provisions in 15 of the 21 cases for which we could compare target and estimated costs. Overall, these contract actions were expected to underrun target costs on the incentivized portions of the contracts by about 5 percent, amounting to $30 million in expected savings shared between the government and contractors on the 21 contract actions. These 21 contracts had a total value of about $957 million. The contract actions that were expected to underrun target costs represented procurements of both goods and services, and were a mixture of FPI and CPIF actions (see table 3). In two cases, contracting officials raised potential benefits of using an FPI contract that extended beyond underrunning target costs in a single contract by using knowledge of cost efficiencies to decrease prices from one production lot to the next. 
For example, contracting officials for one Air Force missile program determined that an engine produced by a subcontractor was a cost driver, but had little insight into the actual costs for this component. The Air Force created a separately priced FPI line item with unit pricing for the engines. Using an FPI contract type required the contractor to provide cost data, a requirement that was in turn incorporated into the prime contractor’s FPI agreement with its subcontractor. Officials reported that through this arrangement, they obtained insight into subcontractor costs and were able to obtain cost savings in subsequent lots of more than $104,000 per engine. We were unable to compare actual or estimated costs and target costs for 5 of the 26 FPI or CPIF actions in our sample due to various factors. Specifically:

- In three cases, contracts were terminated before performance was complete. Upon termination, the government and contractor negotiated a final settlement that accounted for factors in addition to cost, and the original target costs—which were based on the assumption of completed performance—were no longer relevant.
- In another case, officials explained that requests for adjustments and other actions had not been finally settled for this order, so the actual costs will likely change further. Because these adjustments and actions had yet to be settled, we did not compare the target costs and current estimates for actual costs.
- In one additional case, contract records did not differentiate between target costs and actual costs, and so we were unable to compare the two.

Contracting officials reported generally positive cost, schedule, technical performance, and overall outcomes for those contract actions containing schedule and technical performance incentives. Fifteen of the 26 FPI or CPIF contracts in our review used schedule or technical performance incentives along with cost incentives, and therefore were multiple-incentive contracts.
Three of the 15 contracts used a combination of incentive fees or profits and award fees. For 13 of the 15 contracts we reviewed with schedule or technical performance incentives, contracting officials reported that overall outcomes were positive. In two cases, although the contractor met specific schedule and technical performance goals, overall outcomes were either unsatisfactory or were yet to be determined (see table 4). DOD’s April 2016 guidance, issued after the contracts we reviewed were awarded, advises contracting officials to carefully consider the use of multiple incentives that may compete with one another. In most cases, acquisition planning documents we reviewed indicated that contracting officials used multiple incentives in contracts with the goal of encouraging the contractor to achieve specific cost, schedule, or technical performance targets, while ensuring that achieving one outcome did not come at the expense of others. The DOD guidance notes that the contractor will aim to maximize the profits or fees it earns, and consequently make trade-offs that may not be consistent with how the government views the relative importance of the various incentives. For contracts with multiple incentives that we reviewed, it was unclear how these incentives interacted, and we could not isolate the effects of any particular schedule or technical performance incentive. Representatives of the two contractors with whom we spoke indicated that they did not have a precise method for making trade-offs among incentives, and could not tell us how the presence of any particular incentive may have interacted with other incentives. Representatives of one contractor told us that they viewed the technical performance incentives as a chance to “make up” what they “lost” on the cost incentives. In other words, if they did not receive all possible profits or fees from cost incentives, they could still aim to earn profits or fees from technical performance incentives. 
Representatives from the other contractor told us that the schedule incentive fee was not a primary factor motivating them to maintain schedule. The amount of potential schedule incentive was less than half of the potential cost incentive and technical performance award fees, so the schedule incentive was likely less effective than these other incentives. Nine of the 35 contract actions we reviewed contained only award fee provisions. All nine of these contract actions were to procure services, such as providing support for testing, data collection, and experimentation; weapons testing; and operation of military base childcare facilities. Of the nine contract actions, three were FPAF actions, and six were CPAF actions (see table 5). These contract actions contained a mix of incentives targeting cost, schedule, and technical performance. For the eight actions we could assess, contractors earned 90 percent of potential award fees overall. Award fees earned on the eight actions for which we had data totaled $16.1 million between fiscal years 2011 and 2015. In cases where multiple types of outcomes (e.g., cost, schedule, and technical performance) were targeted, it was sometimes not possible to determine the amount of fee awarded based on a particular outcome. For example, schedule was a subpart of one of several categories for the award fee, while award fee memoranda reported only a single overall category score. Therefore, in those cases, it was not possible to determine from the award fee documentation what portion of the fee award was meant to correspond specifically to schedule performance. The contract actions we reviewed generally addressed some issues we had previously identified in relation to award fees, and reflected changes made to federal acquisition regulations in 2009. For example, we found that all of the nine contract actions contained provisions prohibiting rollover. 
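The award-fee figure above is a simple ratio of fees earned to fees available, aggregated across contract actions. The sketch below illustrates the arithmetic; the per-action split is an assumption for illustration, since only the aggregate totals ($16.1 million earned, about 90 percent of fees available) come from the text above.

```python
# Arithmetic behind the award-fee figure: percent of potential award
# fees actually earned, aggregated across contract actions. The split
# below is assumed; only the aggregate ($16.1 million earned, about
# 90 percent of fees available) comes from the report text.

def award_fee_summary(actions):
    """actions: list of (fee_earned, fee_available) pairs, in dollars."""
    earned = sum(e for e, _ in actions)
    available = sum(a for _, a in actions)
    return earned, 100.0 * earned / available

actions = [(9_000_000, 10_000_000), (7_100_000, 7_900_000)]  # assumed split
earned, pct = award_fee_summary(actions)
print(f"${earned:,} earned, {pct:.0f}% of fees available")
```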
Most of the award fee plans for these actions specifically prohibited paying award fees for unsatisfactory performance and we found no evidence of fees earned for unsatisfactory performance. DOD guidance says award fees should be tied to acquisition outcomes to the maximum extent possible. We reviewed the criteria contained in the award fee plans for our sample contract actions and found them to contain a mix of outcome-based criteria and process-based criteria. For example, three award fee plans covering five contract actions contained criteria related to meeting schedule deadlines, which we consider to be related to outcomes. One award fee plan contained criteria related to management, such as proactivity and responsiveness to problems, which we consider to be related to process. In the nine contracts we reviewed, we also found examples of award fees based on both outcome- and process-based criteria. For example, one contract, valued at $78 million for management and operation of transportation services on an Army installation, specified that 25 percent of the award fee would be based on technical performance criteria. Under these criteria, points were assigned on the basis of quality and timeliness assessments, which we consider to be outcome-based, as well as reporting, which we consider to be process-based. Similarly, a $16 million Army contract for installation support services based 75 percent of its award fee on a “Performance of Work” criterion, which contained several sub-criteria including “Quality” and “Efficiency/Timeliness,” both of which contained a mix of outcome- and process- based criteria. Even where award fee plan criteria were outcome-based, we could not assess whether the outcomes cited were identified by DOD as positive outcomes for the acquisition of services. 
As we have previously reported, DOD has struggled to define and track desired outcomes for services contracts, which differ from products in several aspects and can pose challenges to establishing measurable and performance-based outcomes. DOD has not consistently assessed how the selection of a particular contract type or incentive arrangement has promoted the achievement of cost, schedule, or technical performance goals. A fiscal year 2007 legislative provision directed DOD to collect and evaluate relevant data on incentive and award fee payments on a regular basis to determine the effectiveness of incentives for improving contractor performance and achieving desired program outcomes, and in 2009, the FAR was updated to require this of all agencies. As a result, DOD previously required military departments and defense agencies to collect data on incentive and award fees—such as the amount of fees available on each contract and the amount paid to contractors—twice a year for contracts with incentive provisions greater than $50 million. According to a senior DPAP official, the information collected was not being used at the time, as DOD saw more value in focusing on efforts prior to contract award than in analyzing incentive trends. Additionally, DPAP officials noted that the effort amounted to a manual data collection exercise during a time of reduced staffing levels. DOD rescinded this requirement in April 2015, explaining in the Federal Register that it could obtain relevant data through other sources, including CBAR and peer reviews. Our review found, however, that DOD is not using these sources to assess the effectiveness of incentive contracts. Further, we found that CBAR, peer reviews, and other potential sources DOD identified have limited utility in providing information to assess the effectiveness of incentives in improving contractor performance and achieving desired program outcomes (see table 6).
These systems have distinct purposes and are not specifically intended to provide information for DOD to analyze how well it is achieving incentive outcomes, though some have the potential to provide insight into outcomes. For example, DOD officials said they used Cost and Software Data Reporting and earned value management data—which are maintained in the CADE system— to evaluate the effectiveness of factors that motivate contractor performance in its 2014 annual report on performance of the defense acquisition system. Among the contracts that DOD reviewed, it found that those with incentive fees or profits typically experienced lower cost growth than other contract types. This analysis, however, used total contract costs (including non-incentivized portions), according to senior DOD officials. DOD has not conducted this analysis in subsequent annual reports. DOD has also aggregated feedback from past peer reviews—including reviews focused on incentives—and identified lessons learned and best practices for structuring incentives. DOD’s efforts to better manage its acquisition of services, which accounted for more than half of the $274 billion in total DOD contract obligations in fiscal year 2015, could also help assess the merits of using incentives for various portfolios of services. As part of these efforts, DOD has identified senior officials within DOD to serve as functional domain experts responsible for specific portfolios of services. 
DOD’s January 2016 services acquisition instruction tasked these experts, among other responsibilities, to:

identify and share portfolio group best practices and employ lessons learned to improve the acquisition and management of services across their respective categories; and

develop appropriate metrics to track cost and performance of contracted services within the portfolio group to leverage best practices, reduce redundant business arrangements, identify trends, and develop year-to-year comparisons to improve the efficiency and effectiveness of contracted services.

We have ongoing work to assess how DOD and the military departments are implementing aspects of DOD’s services acquisition instruction. As we noted in our February 2017 high-risk update, however, DOD does not have an action plan that would enable it to assess progress toward achieving its goals for improving service acquisitions, and its efforts to develop goals and associated metrics unique to each category of service it acquires are in the early stages of development. As previously noted, the FAR requires agencies to collect and evaluate relevant data on incentive and award fee payments on a regular basis to determine the effectiveness of incentives for improving contractor performance and achieving desired program outcomes. Since DOD removed the requirement from its own regulations in April 2015, it has not identified a new approach to collect the required information beyond using CBAR and peer reviews, which we found do not allow DOD to assess incentive outcomes as currently used. It may not be necessary to collect data on each individual contract; rather, DOD could identify what best meets the department’s needs for assessing performance outcomes and collect data accordingly. Without assessing how incentives have contributed to intended outcomes, contracting officers may be at a disadvantage in establishing appropriate incentives for the requirements they work to fulfill through contracts.
The government should be vigilant about getting the best value for its dollar, and incentive and award fees can be effective tools for motivating contractors and achieving desired outcomes for DOD acquisitions, if appropriately applied. Over the past decade, DOD’s guidance and training have emphasized the use of objective incentives, which is reflected in a noticeable shift toward incentive fee contracts and away from the more subjective award fee contracts. As some of our prior work has found, however, incentives do not always lead to better outcomes. Given the emphasis on cost incentives, it is important that DOD determine whether and under what circumstances the use of these incentives is achieving intended cost objectives. Federal acquisition regulations require DOD to collect and analyze information on the use of incentives. However, it may not be necessary for DOD to embark on a broad, manual data collection effort similar to what has been tried in the past. Rather, DOD could focus its effort on specific areas of interest or risk to the department. For example, given the widespread use of incentive contracts on major weapon systems, DOD could continue to focus its analyses on the factors that facilitate or hinder the achievement of cost objectives or consider expanding the collection of information on lower dollar weapon systems. Alternatively, DOD could focus more on identifying what types of incentives prove useful for the different services DOD acquires as part of its efforts to manage portfolios of services acquisitions. Such efforts should not preclude DOD from continuing to assess other approaches to motivate contractor performance, such as the use of technical and schedule incentives. Prioritizing the area or areas DOD intends to focus on would enable the department to determine how best to collect that information and, in turn, use it to identify opportunities to improve the use of incentive contracts.
We recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics identify the specific types of information that would best meet the department’s needs and, based on that determination, collect and analyze relevant data after contract performance is sufficiently complete to determine the extent to which contracts with incentives achieved their desired outcomes. We provided a draft of this product to DOD for comment. In its comments, reproduced in appendix III, DOD concurred and indicated it will establish a process for identifying specific types of information to collect and assess the data after the completion of contract closeout to determine the extent to which incentives achieved their desired outcomes. DOD stated it will complete this process by the second quarter of fiscal year 2018. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretaries of the Air Force, Army, and Navy; the President, Defense Acquisition University; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or DiNapoliT@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objectives of this review were to (1) identify steps the Department of Defense (DOD) has taken to improve its use of incentive contracts since 2010, and (2) assess the extent to which selected DOD incentive contracts achieved desired acquisition outcomes.
To identify the steps DOD has taken to improve its use of incentive contracts since 2010, we reviewed relevant legislation and provisions within the Federal Acquisition Regulation (FAR); memoranda issued by the Office of Management and Budget (OMB); and identified and reviewed changes to DOD regulations, policies, and guidance regarding the use of incentive contracts. To identify changes in DOD’s use of these contract types over time that may have corresponded with regulatory and policy changes, we analyzed data from the Federal Procurement Data System-Next Generation (FPDS-NG) on obligations by contract type for fiscal years 2005 through 2015. We reported our findings in constant fiscal year 2015 dollars, adjusted for inflation using the fiscal year gross domestic product price index. To assess the reliability of the FPDS-NG data, we conducted electronic testing of the data. We also reviewed a selection of 53 contracts in FPDS-NG and, in reviewing contract documents, found that 11 percent were incorrectly coded as incentive contracts. We assessed these miscoded entries and found that they represented a range of contract award dates and products and services, and generally aligned with the distribution across military departments of contract actions in our sample. We determined that the miscoded contracts had minimal potential for impacting our analysis and that the data were sufficiently reliable to report general trends in DOD’s obligations for incentive contract actions. In addition, for a sub-selection of the 53 contracts and orders, we traced selected data fields to contract file documents to verify their accuracy. We also interviewed DOD, military department, and DAU officials about efforts to improve the use of incentive contracts.
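The constant-dollar adjustment described above amounts to scaling nominal obligations by the ratio of the base-year price index to the index for the year in question. A minimal sketch, using assumed index values rather than actual GDP price index figures:

```python
# Sketch of the constant-dollar adjustment: nominal obligations are
# scaled by the ratio of the base-year (fiscal year 2015) price index
# to the index for the year in question. Index values below are
# assumptions for illustration, not actual GDP price index figures.

def to_fy2015_dollars(nominal_dollars, fy_index, fy2015_index):
    return nominal_dollars * (fy2015_index / fy_index)

# Assumed indexes: FY 2005 = 90.0, FY 2015 = 109.8 (illustrative only).
print(f"${to_fy2015_dollars(50_000_000_000, 90.0, 109.8):,.0f}")
```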
To determine the extent to which incentive provisions in selected DOD contracts achieved desired outcomes in cost, schedule, and technical performance, we assessed a nongeneralizable sample, drawn from FPDS-NG data, of incentive contracts and orders that DOD awarded between fiscal years 2011 and 2015 and that were reported as completed by the end of fiscal year 2015. We chose these time frames to select awards made after the regulatory and policy changes that began in 2010 and that had completed performance by the time of our review. Because of these parameters, certain types of contract actions were inherently excluded from our analysis, including those that exceeded a performance period of 5 years, such as longer term satellite and shipbuilding contracts. For our contract selection, we excluded contracts with values below the $150,000 simplified acquisition threshold, indefinite delivery contracts, and blanket purchase agreements. We initially selected all 38 contracts that met these criteria. Out of more than 4,200 orders, we selected a subset of 15 that (1) included different incentive types under the same base contract, (2) were used to purchase similar products and services, and (3) reflected a range of dollar values. These parameters yielded an initial sample of 53 contract actions (contracts and orders). Eighteen of the 53 actions were dropped from our sample because, based on our review of contract documents, they were miscoded in FPDS-NG and were not incentive contracts (6); were terminated before performance began (1); or were not yet complete (11). Consequently, we reviewed a total of 35 contracts and orders representing fixed-price incentive (FPI), fixed-price-award-fee (FPAF), cost-plus-incentive-fee (CPIF), and cost-plus-award-fee (CPAF) contract types (see table 7).
Because we used a nongeneralizable sample of contracts and orders, results from this sample cannot be used to make inferences about all incentive contracts and orders that DOD awarded. In total, we reviewed 5 contract actions from the Air Force, 7 from the Army, and 23 from the Navy. For each selected contract and order, we collected relevant documentation, such as the initial contract or order, modifications, statements of work, determination and findings memoranda, award fee plans, and performance evaluations. We interviewed contracting officials and obtained their written input to clarify and collect additional information as needed. To determine cost performance on CPIF and FPI contracts and orders, we compared actual or estimated costs to target costs listed in the conformed contracts or applicable contract modifications specifically for incentivized line items. For contracts where actual costs were not yet finally settled, we obtained the best available current estimate from contractor cost reports and similar sources. For contracts with multiple line items with incentives, we compared the total target costs to total actual or estimated costs. Within these types of contracts, if the contractor performed one line item at a cost overrun but another at a cost underrun, only the net results would be reflected in our findings. For contracts and orders with schedule and technical performance incentives, we identified schedule and technical performance goals and outcomes using the contracting office responses and contract documents. We obtained total action values from the conformed contracts. For contracts and orders containing award fees, we calculated the amount of award fees earned based on data provided by the contracting offices, including total action values and total potential award fee amounts, and corresponding contract documentation. 
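The net cost comparison described above, in which total target costs for incentivized line items are compared to total actual or estimated costs so that an overrun on one line item can be offset by an underrun on another, can be sketched as follows. All dollar figures are assumed for illustration and do not come from any contract in the sample.

```python
# Illustrative sketch of the net cost comparison: for a contract with
# multiple incentivized line items, total target costs are compared to
# total actual/estimated costs, so an overrun on one line item can be
# offset by an underrun on another. All figures below are assumed.

def net_cost_outcome(line_items):
    """line_items: list of (target_cost, actual_cost) pairs, in dollars."""
    total_target = sum(target for target, _ in line_items)
    total_actual = sum(actual for _, actual in line_items)
    variance = total_actual - total_target          # positive = net overrun
    pct = 100.0 * variance / total_target
    return total_target, total_actual, pct

# One line item overruns, another underruns; only the net is reflected.
items = [(10_000_000, 11_000_000),   # $1.0M overrun
         (20_000_000, 18_500_000)]   # $1.5M underrun
target, actual, pct = net_cost_outcome(items)
print(f"target ${target:,}, actual ${actual:,}, net variance {pct:+.1f}%")
```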
We examined award fee plans to identify provisions related to rollover and criteria for award, and we examined award fee board memos and other documents to understand how award fee amounts were determined. To supplement our understanding of how incentive provisions helped achieve acquisition outcomes, we interviewed two contractors about the effectiveness of incentive provisions in motivating performance toward desired outcomes. To identify contractors to interview, we focused on a sub-selection of contracts containing multiple incentives to obtain perspectives on how the incentives interacted with respect to the contractors’ performance. The information obtained from interviews with the two contractors cannot be generalized to all contractors; however, the interviews provide important insights on the experiences of contractors. We also reviewed documents and interviewed DOD officials about relevant data systems and processes used to collect and analyze contract and program data, including the Contract Business Analysis Repository, the Cost Assessment Data Enterprise, the Contractor Performance Assessment Reporting System (CPARS), and peer reviews, to understand how these systems may provide some insight into DOD’s use of incentive contracts. Specifically, we reviewed 13 CPARS reports from contracts in our sample, selected for a distribution among the military departments and various contract types. We also reviewed a nongeneralizable selection of 30 peer review summaries from fiscal years 2014 and 2015 to develop an understanding of how peer reviews were used to provide guidance on key contracting decisions. The performance audit upon which this report is based was conducted from January 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOD deemed that certain information in the draft report related to contract costs and performance was sensitive and must be protected from public disclosure. We subsequently worked with DOD from May 2017 to July 2017 to revise our presentation of this information and prepare this report for public release. We analyzed data from the Federal Procurement Data System-Next Generation on incentive contract obligations by Department of Defense (DOD) component: Air Force, Army, Navy, and all other DOD. We found that, similar to DOD, the three military departments increased their obligations for incentive fee contracts overall from fiscal years 2005 through 2015. The only notable anomaly we found was that other DOD components obligated fewer dollars for incentive fee contracts over the period we reviewed (see figure 4). We also found that each of the military departments as well as other DOD components had decreased their obligations for award fee contracts from fiscal years 2005 through 2015 (see figure 5). In addition to the contact name above, the following staff members made key contributions to this report: Janet McKelvey (Assistant Director), Pete Anderson, MacKenzie Cooper, Brenna Derritt, Alexandra Dew Silva, Lorraine Ettaro, Kurt Gurka, Julia Kennon, Liam O’Laughlin, Carol Petersen, Raffaele (Ralph) Roffo, Ann Marie Udale, and Robin Wilson.
In fiscal year 2015, DOD obligated $274 billion on contracts for products and services, a portion of which was for contracts that used incentive and award fee provisions—or incentive contracts—intended to improve cost, schedule, and technical performance outcomes. Work by GAO and others has shown that such contracts, when not well managed, can lead to unnecessary costs shouldered by the American taxpayer. Beginning in 2010, DOD made regulatory and policy changes related to incentives. GAO was asked to review DOD's use of incentives. This report (1) identifies steps DOD has taken to improve its use of incentive contracts since 2010, and (2) assesses the extent to which selected DOD incentive contracts achieved desired acquisition outcomes. To conduct this work, GAO reviewed relevant federal and DOD guidance; analyzed DOD obligations and new contract award data for fiscal years 2005 through 2015, before and after regulatory and policy changes; and analyzed a nongeneralizable sample of 26 contracts and task orders that contained incentives and 9 contract actions providing for award fees that were awarded between fiscal years 2011 and 2015 and reported as completed by the end of fiscal year 2015 to assess contract outcomes. Since 2010, the Department of Defense (DOD) has made changes to its regulations, policies, and guidance and taken other steps to improve its use of incentive contracts. DOD has promoted greater use of objective incentives—which measure contractor performance toward predetermined targets using a formula—through incentive fee contracts, partly to better motivate cost control. These changes are reflected in DOD's increased use of incentive fee contracts and decreased use of award fee contracts, which involve fees paid based on a more subjective evaluation of contractor performance and have not always been linked to acquisition outcomes (see figure). 
DOD expects to achieve cost objectives on 15 of the 21 incentive fee contract actions that GAO reviewed and for which costs could be assessed. GAO could not assess cost performance on five additional selected incentive fee contracts because comparable cost estimates were not available. Across the 21 incentive fee contract actions, estimated costs for the incentivized portions were about 5 percent below target costs. Schedule and technical performance incentives mostly resulted in good outcomes. In two cases, however, although the contractor met specific schedule and technical performance goals, overall outcomes were either unsatisfactory or not yet determined. In the nine award fee contracts GAO reviewed, consistent with prior GAO recommendations, DOD did not allow unearned fees to be earned in a subsequent period, and GAO did not find evidence of award fees paid for unsatisfactory performance. Federal regulations require DOD to collect and evaluate information on incentives. In 2015, DOD stopped its previous effort to manually collect data twice a year on incentives valued at more than $50 million, which was burdensome and collected information that DOD did not use, according to DOD officials. GAO's review of current DOD systems found that they provide some useful data but do not allow DOD to determine how well incentives are achieving desired cost, schedule, and performance outcomes. Without such information, DOD may be disadvantaged in establishing incentive arrangements that achieve intended results. DOD should identify the type of information on incentives needed and collect and analyze relevant data to assess outcomes. DOD agreed to do so and stated it will take actions in fiscal year 2018 to address GAO's recommendation.
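The formula-based (objective) incentives referenced above can be illustrated with the standard fixed-price-incentive (firm target) structure, in which profit is adjusted by a share ratio as costs come in under or over target and total price is capped at a ceiling. This is a hedged sketch: the 70/30 share and all dollar figures are assumed illustrative values, not terms from any contract GAO reviewed.

```python
# Hedged sketch of a formula-based (objective) incentive: the standard
# fixed-price-incentive (firm target) arrangement. Profit moves with a
# share ratio as costs come in under or over target, and total price is
# capped at the ceiling. The 70/30 share and all dollar figures are
# assumed illustrative values, not terms from any reviewed contract.

def fpi_final_price(actual_cost, target_cost, target_profit,
                    ceiling_price, contractor_share=0.30):
    # Contractor keeps its share of an underrun and absorbs its share
    # of an overrun; the ceiling shifts all further risk to the contractor.
    profit = target_profit + contractor_share * (target_cost - actual_cost)
    return min(actual_cost + profit, ceiling_price)

# $1M underrun against a $10M target: profit grows from $0.8M to $1.1M.
print(f"${fpi_final_price(9_000_000, 10_000_000, 800_000, 11_500_000):,.0f}")
# An overrun large enough to hit the ceiling caps the price at $11.5M.
print(f"${fpi_final_price(12_000_000, 10_000_000, 800_000, 11_500_000):,.0f}")
```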
The overall purpose of FFMIA is to ensure that agency financial management systems comply with federal financial management systems requirements, applicable accounting standards, and the SGL in order to provide uniform, reliable, and thus more useful financial information. With such information, government leaders will be better positioned to invest scarce resources, reduce costs, oversee programs, and hold agency managers accountable for the way they run government programs. The 1990 CFO Act laid the legislative foundation for the federal government to provide taxpayers, the nation’s leaders, and agency program managers with reliable financial information through audited financial statements. Under the CFO Act, as expanded by the Government Management Reform Act of 1994, 24 major agencies, which account for 99 percent of federal outlays, are required to annually prepare organizationwide audited financial statements beginning with those for fiscal year 1996. Table 1 lists the 24 CFO agencies and their reported fiscal year 1996 outlays. Financial audits address the reliability of information contained in financial statements, provide information on the adequacy of systems and controls used to ensure accurate financial reports and safeguard assets, and report on agencies’ compliance with laws and regulations. Building on the CFO Act audits, FFMIA requires, beginning with the fiscal year ended September 30, 1997, that each of the 24 CFO agencies’ financial statement auditors report on whether the agency’s financial management systems substantially comply with federal financial management systems requirements, applicable accounting standards, and the SGL. The financial management systems policies and standards prescribed for executive agencies to follow in developing, operating, evaluating, and reporting on financial management systems are defined in OMB Circular A-127, “Financial Management Systems,” which was revised in July 1993.
Circular A-127 references the series of publications entitled Federal Financial Management Systems Requirements, issued by the Joint Financial Management Improvement Program (JFMIP), as the primary source of governmentwide requirements for financial management systems. JFMIP initially issued Core Financial System Requirements, the first document in its Federal Financial Management Systems Requirements series, in January 1988. An updated version reflecting changes in legislation and policies was released in September 1995. This document establishes the standard requirements for a core financial system to support the fundamental financial functions of an agency. Framework for Federal Financial Management Systems was published in January 1995 and describes the basic elements of a model for an integrated financial management system in the federal government, how these elements should relate to each other, and specific considerations in developing and implementing such an integrated system. In this regard, FFMIA defines financial management systems as “financial systems” and the financial portions of “mixed systems” necessary to support financial management, including automated and manual processes, procedures, controls, data, hardware, software, and support personnel dedicated to the operation and maintenance of the system. Other documents in the JFMIP series provide requirements for specific types of systems covering personnel/payroll, travel, seized/forfeited asset, direct loan, guaranteed loan, and inventory systems. Table 2 lists the publications in the Federal Financial Management System Requirements Series and their issue dates. In addition to these eight documents, JFMIP is developing additional systems requirements for managerial cost accounting. This document was issued as an exposure draft in April 1997. 
Federal accounting standards, which agency CFOs use in preparing financial statements and in developing financial management systems, are recommended by FASAB. In October 1990, the Secretary of the Treasury, the Director of OMB, and the Comptroller General established FASAB to recommend a set of generally accepted accounting standards for the federal government. FASAB’s mission is to recommend reporting concepts and accounting standards that provide federal agencies’ financial reports with understandable, relevant, and reliable information about the financial position, activities, and results of operations of the U.S. government and its components. FASAB recommends accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, other users of federal financial information, and comments from the public. The Secretary of the Treasury, the Director of OMB, and the Comptroller General then decide whether to adopt the recommended standards. If they do, the standards are published by OMB and GAO and become effective. As discussed further in the section “Status of Federal Accounting Standards,” this process has resulted in issuance of two statements of accounting concepts and eight statements of accounting standards. GAO published these concepts and standards in FASAB Volume 1, Original Statements, Statements of Federal Financial Accounting Concepts and Standards, in March 1997. In 1984, OMB tasked an interagency group to develop a standard general ledger chart of accounts for governmentwide use. The resulting SGL was established and mandated for use by the Department of the Treasury in 1986. Further, OMB Circular A-127, Financial Management Systems, requires agencies to record financial events using the SGL at the transaction level. 
The SGL provides a uniform chart of accounts and pro forma transactions used to standardize federal agencies’ financial information accumulation and processing, enhance financial control, and support budget and external reporting, including financial statement preparation. Use of the SGL improves data stewardship throughout the government, enabling consistent analysis and reporting at all levels within the agencies and at the governmentwide level. It is published in the Treasury Financial Manual. The Department of the Treasury’s Financial Management Service is responsible for maintaining the SGL. As part of a CFO agency’s annual audit, the auditor is to report whether the agency’s financial management systems substantially comply with federal financial management systems requirements, applicable accounting standards, and the SGL. If the auditor determines that an agency’s financial management systems do not substantially comply with these requirements, the act requires that the audit report (1) identify the entity or organization responsible for management and oversight of the noncompliant financial management systems, (2) disclose all facts pertaining to the failure to comply, including the nature and extent of the noncompliance, the primary reason or cause of the noncompliance, the entity or organization responsible for the noncompliance, and any relevant comments from responsible officers or employees, and (3) include recommended corrective actions and proposed time frames for implementing such actions. The act assigns to the head of an agency responsibility for determining, based on a review of the auditor’s report and any other relevant information, whether the agency’s financial management systems comply with the act’s requirements. This determination is to be made no later than 120 days after the receipt of the auditor’s report, or the last day of the fiscal year following the year covered by the audit, whichever comes first. 
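The determination deadline described above ("whichever comes first") can be sketched as a simple date computation. This is an illustration of the rule as stated, not an official calculation method; the example dates are assumed.

```python
# Illustrative sketch of the FFMIA compliance-determination deadline:
# 120 days after receipt of the auditor's report, or the last day of
# the fiscal year following the year covered by the audit, whichever
# comes first. The federal fiscal year ends September 30.
from datetime import date, timedelta

def determination_deadline(report_received: date, audited_fy: int) -> date:
    by_120_days = report_received + timedelta(days=120)
    # Fiscal year N+1 ends on September 30 of calendar year N+1.
    following_fy_end = date(audited_fy + 1, 9, 30)
    return min(by_120_days, following_fy_end)

# Report on the FY 1997 audit received March 1, 1998 (assumed example):
print(determination_deadline(date(1998, 3, 1), 1997))  # → 1998-06-29
```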
If the head of an agency determines that the agency does not comply with the act’s requirements, the agency head, in consultation with the Director of OMB, shall establish a remediation plan that will identify, develop, and implement solutions for noncompliant systems. The remediation plan is to include corrective actions, time frames, and resources necessary to achieve substantial compliance with the act’s requirements within 3 years of the date the noncompliance determination is made. If, in consultation with the Director of OMB, the agency head determines that the agency’s financial management systems are so deficient that substantial compliance cannot be reached within 3 years, the remediation plan must specify the most feasible date by which the agency will achieve compliance and designate an official responsible for effecting the necessary corrective actions.

Under the FFMIA process, the auditor’s and the agency head’s determinations of compliance may differ. In such situations, the Director of OMB will review the differing determinations and report on the findings to the appropriate congressional committees. The act also contains additional reporting requirements. OMB is required to report each year on the act’s implementation. In addition, each inspector general (IG) of the 24 CFO agencies is required to report to the Congress, as part of its semiannual report, instances in which an agency has not met the intermediate target dates established in its remediation plan and the reasons why.

Efforts are under way to implement FFMIA and improve the quality of financial management systems. OMB recently issued implementation guidance in a memorandum dated September 9, 1997, for agencies and auditors to use in assessing compliance with FFMIA. This is interim guidance to be used in connection with audits of federal financial statements for fiscal year 1997.
OMB’s guidance emphasizes implementation of federal financial management improvements by fully describing in separate sections each of the requirements under the act, which are (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the SGL at the transaction level. Each section begins by identifying and discussing the executive branch policy documents that previously established the requirement. Information is also provided on the meaning of substantial compliance and the types of indicators that should be used in assessing whether an agency is in substantial compliance. For example, one indicator of substantial compliance with financial management systems requirements would include financial management systems that meet the requirements of OMB Circular A-127. Likewise, an indicator of substantial compliance with financial accounting standards would include an agency that has no material weaknesses in internal controls that affect its ability to prepare auditable financial statements and related disclosures in accordance with federal accounting standards. Information is also provided for the auditor to consider in evaluating and reporting audit results, as well as other reporting requirements. The guidance states that the auditor shall use professional judgment in determining substantial compliance with FFMIA. Further, substantial noncompliance in any one or more of the three requirements of FFMIA would result in substantial noncompliance with FFMIA. For example, an agency could have an unqualified opinion on its financial statements indicating that the financial statements are prepared in accordance with applicable federal accounting standards, yet have financial management systems that are not in substantial compliance with financial management systems requirements. This situation would preclude the agency from being in substantial compliance with FFMIA. 
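The all-or-nothing test described above reduces to a simple conjunction of the three requirements. The function below is a hypothetical sketch of that decision rule; the indicator names are illustrative, not OMB’s actual checklist:

```python
# Sketch of the substantial-compliance test under FFMIA: noncompliance with
# any one of the three requirements makes the agency noncompliant overall.
# The parameter names are illustrative, not OMB's actual criteria.

def substantially_compliant(systems_requirements: bool,
                            accounting_standards: bool,
                            sgl_transaction_level: bool) -> bool:
    return systems_requirements and accounting_standards and sgl_transaction_level

# An unqualified audit opinion (accounting standards met) is not enough if
# the underlying systems fall short of the systems requirements:
print(substantially_compliant(False, True, True))  # False
```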
Finally, the guidance also directs auditors to follow the reporting guidance, with respect to compliance, contained in OMB Bulletin 93-06. We have been discussing with OMB some refinements to this bulletin, with particular focus on four areas: (1) clarifying, based on information provided in OMB’s implementation guidance, that the auditor should perform tests of the reporting entity’s compliance with the requirements of FFMIA, (2) including in the reporting entity’s management representation letter a representation about whether the reporting entity’s financial management systems are in substantial compliance with FFMIA requirements, (3) clarifying that the auditor’s report on the reporting entity’s compliance with applicable laws and regulations should state that the auditor performed sufficient compliance tests of FFMIA requirements to report whether the entity’s financial management systems comply substantially with FFMIA requirements, and (4) separately stating in the auditor’s report whether such tests disclosed any instances in which the reporting entity’s financial management systems did not comply substantially with FFMIA requirements. In addition, we have discussed with OMB the requirement in the act that if the reporting entity does not comply substantially with FFMIA requirements, the auditor’s report needs to (1) identify the entity or organization responsible for the financial management systems that have been found not to comply with FFMIA requirements; (2) disclose all facts pertaining to the noncompliance, including the nature and extent of the noncompliance (such as the areas in which there is substantial but not full compliance), the primary reason or cause of the noncompliance, the entity or organization responsible for the noncompliance, and any relevant comments from reporting entity management or employees responsible for the noncompliance; and (3) state recommended remedial actions and the time frames to implement such actions.
We are also exploring other tools to assist the CFO and IG communities in implementing OMB’s interim guidelines. OMB plans to review its interim guidelines and replace them during 1998 with revisions to appropriate OMB policy documents.

Agencies are also taking steps to improve the quality of their financial management systems. According to the CFO Council’s and OMB’s Status Report on Financial Management Systems, dated June 1997, agencies are reporting plans to replace or upgrade operational applications within the next 5 years. For applications that are now under development or in the process of a phased implementation, reported plans are also in place to fully implement the SGL at the transaction level and comply with federal financial management system requirements. This report indicates that many agencies are also considering greater use of commercial off-the-shelf software, cross-servicing, and outsourcing as they seek more effective ways to improve their financial management systems. Successful implementation of these efforts will be instrumental in achieving future compliance with FFMIA requirements.

Agencies face significant challenges in achieving substantial compliance with the act’s requirements in the near future. The majority of agencies did not receive an unqualified opinion on their fiscal year 1996 financial statements. In addition, fiscal year 1996 financial management systems inventory data, self-reported by agencies and summarized in the CFO Council’s and OMB’s June 1997 Status Report on Federal Financial Management Systems, reveal that the majority of agencies’ financial systems did not comply with federal financial management systems requirements or the SGL at the transaction level prior to FFMIA’s effective date. An inability to prepare timely and accurate financial statements suggests that agencies find it difficult to effectively implement applicable federal accounting standards.
A financial statement audit provides a meaningful measure of compliance with applicable federal accounting standards. An unqualified opinion is one of several indications that the agency’s financial management systems support the preparation of accurate and reliable financial statements with minimal manual intervention. However, for fiscal year 1996, only 6 of the 24 CFO agencies received unqualified opinions on their organizationwide financial statements. Further, according to OMB’s Federal Financial Management Status Report & Five-Year Plan, only 13 CFO agencies anticipate being able to obtain unqualified opinions on their fiscal year 1997 financial statements. Our past audit experience has indicated that numerous agencies’ financial management systems do not maintain and generate original data to readily prepare financial statements. Consequently, many agencies have relied on ad hoc efforts and manual adjustments to prepare financial statements. Such procedures can be time-consuming, produce inaccurate results, and delay the issuance of audited statements. In addition, agencies’ lack of reliable and consistent financial information on a regular, ongoing basis undermines federal managers’ ability to effectively evaluate the cost and performance of government programs and activities.

Also, the current status of federal financial management systems portends problems for agencies in complying fully with federal financial management systems requirements and the SGL as mandated by the act. When FFMIA was enacted, federal agencies lacked many of the basic systems needed to provide uniform and reliable financial information. Agencies are still struggling to comply with governmentwide standards and requirements, although they have recently exhibited some progress in implementing and maintaining financial management systems that comply with federal financial system requirements and the SGL.
For instance, according to the CFO Council’s and OMB’s FY 1995 Status Report on Federal Financial Management Systems, issued in June 1996, only 29 percent of agencies’ financial management systems were reported to be in compliance with JFMIP federal financial management system requirements. In addition, agencies had fully implemented the SGL in only 40 percent of the operational applications to which they reported it applied. The fiscal year 1996 status report, issued in June 1997, showed some improvement, with 36 percent of agencies’ financial management systems reported as complying with federal financial management system requirements and full SGL implementation reported in 45 percent of the applications to which agencies reported it applied. However, these statistics indicate that the majority of agencies’ financial management systems still lacked compliance with financial management systems requirements and full SGL implementation in fiscal year 1996.

Using a due-process, consensus-building approach, FASAB has successfully provided the federal government with an initial set of accounting standards. To date, FASAB has recommended, and OMB and GAO have issued, two statements of accounting concepts and eight statements of accounting standards with various effective dates ranging from fiscal year 1994 through fiscal year 1998. These concepts and standards, which are listed in table 3, underpin OMB’s guidance to agencies on the form and content of their financial statements. In addition to the two concepts and eight standards, FASAB is working on standards relating to management’s discussion and analysis of federal financial statements, social insurance, the cost of capital, natural resources, and computer software costs. The objectives of federal financial reporting are to provide users with information about budgetary integrity, operating performance, stewardship, and systems and controls.
With these as the objectives of federal financial reporting, the federal government can better develop new reporting models that bring together program performance information with audited financial information and provide congressional and other decisionmakers with a more complete picture of the results, operational performance, and the costs of agencies’ operations.

FFMIA is intended to improve federal accounting practices and increase the government’s ability to provide credible and reliable financial information. Such information is important in providing a foundation for formulating budgets, managing government program operations, and making difficult policy choices. Efforts are under way both to assist agencies in implementing the act’s requirements and to assist auditors in measuring compliance with them. However, long-standing problems with agencies’ financial management systems suggest that agencies will have difficulty, at least in the short term, achieving compliance with the act’s requirements. Successful implementation of the act and resulting financial management improvements depend on the united effort of all organizations involved, including agency CFOs, IGs, OMB, the Department of the Treasury, and GAO.

In performing our work, we evaluated OMB’s implementation guidance for FFMIA. In addition, we reviewed the CFO Council’s and OMB’s June 1996 and June 1997 Status Reports on Federal Financial Management Systems and OMB’s June 1997 Federal Financial Management Status Report & Five-Year Plan. We did not verify or test the reliability of the data in these reports. Further, we reviewed fiscal year 1996 audit results for the 24 CFO agencies and applicable federal accounting standards. We conducted our work from July through September 1997 at GAO headquarters in Washington, D.C., in accordance with generally accepted government auditing standards.
We provided a draft of this report to OMB and Treasury and they generally concurred with its contents. We have incorporated their comments as appropriate.

We are sending copies of this letter to the Chairmen and Ranking Minority Members of the Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; the Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight; other interested congressional committees; the Director, Office of Management and Budget; the Secretary of the Treasury; heads of the 24 CFO agencies; agency CFOs and IGs; and other interested parties. We will also make copies available to others upon request. This letter was prepared under the direction of Gloria L. Jarmon, Director, Civil Audits/Health and Human Services, who may be reached at (202) 512-4476 if you or your staffs have any questions. Major contributors to this letter are listed in appendix I.

Deborah A. Taylor, Assistant Director
Maria Cruz, Senior Audit Manager
Anastasia Kaluzienski, Audit Manager
Pursuant to a legislative requirement, GAO provided information on: (1) the requirements of the Federal Financial Management Improvement Act (FFMIA) of 1996; (2) efforts under way to implement the act; (3) challenges that agencies face in achieving full compliance with those requirements; and (4) the status of federal accounting standards. GAO noted that: (1) it is too early to tell the extent to which the 24 agencies named in the Chief Financial Officers (CFO) Act will be in compliance with FFMIA requirements for fiscal year 1997 because auditor reports discussing the results of the fiscal year 1997 financial statement audits will generally not be available until March 1, 1998, which is the statutory reporting deadline; (2) the Office of Management and Budget (OMB) and the CFO agencies have initiated efforts to implement the act's requirements and improve financial management systems; (3) although auditors performing financial audits under the CFO Act are not required to report on FFMIA compliance until March 1, 1998, prior audit results and agency self-reporting all point to significant challenges that agencies must meet in fully implementing systems requirements, accounting standards, and the U.S. Government Standard General Ledger; (4) regarding the adequacy of accounting standards, the Federal Accounting Standards Advisory Board (FASAB) has successfully developed a good initial set of accounting standards; (5) to date, FASAB has recommended, and OMB and GAO have issued, two statements of accounting concepts and eight statements of accounting standards tailored to the federal government's unique characteristics and special needs; and (6) OMB has integrated these concepts and standards into its guidance to agencies on the form and content of their financial statements.
The Internet is a vast network of interconnected networks that is used by governments, businesses, research institutions, and individuals around the world to communicate, engage in commerce, do research, educate, and entertain. From its origins in the 1960s as a research project sponsored by the U.S. government, the Internet has grown increasingly important to both American and foreign businesses and consumers, serving as the medium for hundreds of billions of dollars of commerce each year. The Internet has also become an extended information and communications infrastructure, supporting vital services such as power distribution, health care, law enforcement, and national defense. Today, private industry—including telecommunications companies, cable companies, and Internet service providers—owns and operates the vast majority of the Internet’s infrastructure. In recent years, cyber attacks involving malicious software or hacking have been increasing in frequency and complexity. These attacks can come from a variety of actors, including criminal groups, hackers, and terrorists. Federal regulation recognizes the need to protect critical infrastructures such as the Internet. It directs federal departments and agencies to identify and prioritize critical infrastructure sectors and key resources and to protect them from terrorist attack. Furthermore, it recognizes that since a large portion of these critical infrastructures is owned and operated by the private sector, a public/private partnership is crucial for the successful protection of these critical infrastructures. Federal policy also recognizes the need to be prepared for the possibility of debilitating disruptions in cyberspace and, because the vast majority of the Internet infrastructure is owned and operated by the private sector, tasks DHS with developing an integrated public/private plan for Internet recovery. 
In its plan for protecting critical infrastructures, DHS recognizes that the Internet is a key resource composed of assets within both the information technology and the telecommunications sectors. It notes that the Internet is used by all critical infrastructure sectors to varying degrees and provides information and communications to meet the needs of businesses and government. In the event of a major Internet disruption, multiple organizations could help recover Internet service. These organizations include private industry, collaborative groups, and government organizations. Private industry is central to Internet recovery because private companies own the vast majority of the Internet’s infrastructure and often have response plans. Collaborative groups—including working groups and industry councils—provide information-sharing mechanisms to allow private organizations to restore services. In addition, government initiatives could facilitate response to major Internet disruptions. Federal policies and plans assign DHS lead responsibility for facilitating a public/private response to and recovery from major Internet disruptions. Within DHS, responsibilities reside in two divisions within the Preparedness Directorate: the National Cyber Security Division (NCSD) and the National Communications System (NCS). NCSD operates the U.S. Computer Emergency Readiness Team (US-CERT), which coordinates defense against and response to cyber attacks. The other division, NCS, provides programs and services that assure the resilience of the telecommunications infrastructure in times of crisis. Additionally, the Federal Communications Commission can support Internet recovery by coordinating resources for restoring the basic communications infrastructures over which Internet services run. 
For example, after Hurricane Katrina, the commission granted temporary authority for private companies to set up wireless Internet communications supporting various relief groups; federal, state, and local government agencies; businesses; and victims in the disaster areas.

Prior evaluations of DHS’s cybersecurity responsibilities have highlighted issues and challenges facing the department. In May 2005, we issued a report on DHS’s efforts to fulfill its cybersecurity responsibilities. We noted that while DHS had initiated multiple efforts to fulfill its responsibilities, it had not fully addressed any of the 13 key cybersecurity responsibilities noted in federal law and policy. We also reported that DHS faced a number of challenges that have impeded its ability to fulfill its cyber responsibilities. These challenges included achieving organizational stability, gaining organizational authority, overcoming hiring and contracting issues, increasing awareness of cybersecurity roles and capabilities, establishing effective partnerships with stakeholders, achieving two-way information sharing with stakeholders, and demonstrating the value that DHS can provide. In this report, we also made recommendations to improve DHS’s ability to fulfill its mission as an effective focal point for cybersecurity, including recovery plans for key Internet functions. DHS agreed that strengthening cybersecurity is central to protecting the nation’s critical infrastructures and that much remained to be done, but it has not yet addressed our recommendations.

The Internet’s infrastructure is vulnerable to disruptions in service due to terrorist and other malicious attacks, natural disasters, accidents, technological problems, or a combination of the above. Disruptions to Internet service can be caused by cyber and physical incidents—both intentional and unintentional.
Recent physical and cyber incidents have caused localized or regional disruptions, highlighting the importance of recovery planning. However, these incidents have also shown the Internet as a whole to be flexible and resilient. Even in severe circumstances, the Internet has not yet suffered a catastrophic failure.

To date, cyber attacks have caused various degrees of damage. For example, in 2001, the Code Red worm used a denial-of-service attack to affect millions of computer users by shutting down Web sites, slowing Internet service, and disrupting business and government operations. In 2003, the Slammer worm caused network outages, airline flight cancellations, and automated teller machine failures. Slammer resulted in temporary loss of Internet access for some users, and cost estimates on the impact of the worm range from $1.05 billion to $1.25 billion. The federal government coordinated with security companies and Internet service providers and released an advisory recommending that federal departments and agencies patch and block access to the affected channel. However, because the worm had propagated so quickly, most of these activities occurred after it had stopped spreading.

In 2002, a coordinated denial-of-service attack was launched against all of the root servers in the Domain Name System. At least nine of the thirteen root servers experienced degradation of service. However, average end users hardly noticed the attack. The attack became visible only as a result of various Internet health-monitoring projects. The response to the attacks was handled by the server operators and their service providers. The attack pointed to a need for increased capacity for servers at Internet exchange points to enable them to manage the high volumes of data traffic during an attack. If a massive disruptive attack on the Domain Name System were successful, recovery could take several days.
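One reason average users hardly noticed the 2002 attack is that resolvers fall back across the 13 root servers and cache answers they have already obtained. The sketch below is a toy illustration of that redundancy; the server names, failure set, and cache policy are all invented:

```python
# Toy illustration of why degraded root servers can go largely unnoticed:
# a resolver tries servers in turn and reuses answers it has already cached.
# Server names, the failure set, and the cache policy are invented.

ROOT_SERVERS = [f"{letter}.root" for letter in "abcdefghijklm"]  # 13 servers

def resolve(name, unavailable, cache):
    if name in cache:                 # cached answers need no root query at all
        return cache[name], "cache"
    for server in ROOT_SERVERS:      # fall back across the server list
        if server not in unavailable:
            cache[name] = f"answer-for-{name}"
            return cache[name], server
    raise RuntimeError("all root servers unreachable")

down = set(ROOT_SERVERS[:9])         # nine of thirteen degraded
cache = {}
_, first = resolve("example.com", down, cache)
_, second = resolve("example.com", down, cache)
print(first)   # j.root (first healthy server in the list)
print(second)  # cache  (repeat lookup never touches a root server)
```

Even with most of the list unavailable, the first healthy server answers, and subsequent lookups are served from the cache, which is why degradation at the roots is largely invisible to end users.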
According to experts familiar with the attack, the government did not have a role in recovering from it.

Like cyber incidents, physical incidents could affect various aspects of the Internet infrastructure, including underground or undersea cables and facilities that house telecommunications equipment, Internet exchange points, or Internet service providers. For example, on July 18, 2001, a 60-car freight train derailed in a Baltimore tunnel, causing a fire that interrupted Internet and data services between Washington and New York. The tunnel housed fiber-optic cables serving seven of the biggest U.S. Internet service providers. The fire burned and severed fiber-optic cables, causing backbone slowdowns for at least three major Internet service providers. Efforts to recover Internet service were handled by the affected Internet service providers; however, local and federal officials responded to the immediate physical issues of extinguishing the fire and maintaining safety in the surrounding area, and they worked with telecommunications companies to reroute affected cables.

In addition, Hurricane Katrina caused substantial destruction of the communications infrastructure in Louisiana, Mississippi, and Alabama, but it had minimal effect on the overall functioning of the Internet outside of the immediate area. According to an Internet monitoring service provider, while there was a loss of routing around the affected area, there was no significant impact on global Internet routing. According to the Federal Communications Commission, the storm caused outages for over 3 million telephone customers, 38 emergency 9-1-1 call centers, hundreds of thousands of cable customers, and over 1,000 cellular sites. However, a substantial number of the networks that experienced service disruptions recovered relatively quickly.
Federal officials stated that the government took steps to respond to the hurricane, such as increasing analysis and watch services in the affected area, coordinating with communications companies to move personnel to safety, working with fuel and equipment providers, and rerouting communications traffic away from affected areas. However, private-sector representatives stated that requests for assistance, such as food, water, fuel, and secure access to facilities, were denied for legal reasons; the government made time-consuming and duplicative requests for information; and certain government actions impeded recovery efforts.

Since its inception, the Internet has experienced disruptions of varying scale—including fast-spreading worms, denial-of-service attacks, and physical destruction of key infrastructure components—but the Internet has yet to experience a catastrophic failure. However, it is possible that a complex attack or set of attacks could cause the Internet to fail. It is also possible that a series of attacks against the Internet could undermine users’ trust and thereby reduce the Internet’s utility.

Several federal laws and regulations provide broad guidance that applies to the Internet infrastructure, but it is not clear how useful these authorities would be in helping to recover from a major Internet disruption because some do not specifically address Internet recovery and others have seldom been used. Pertinent laws and regulations address critical infrastructure protection, federal disaster response, and the telecommunications infrastructure. Specifically, the Homeland Security Act of 2002 and Homeland Security Presidential Directive 7 establish critical infrastructure protection as a national goal and describe a strategy for cooperative efforts by the government and the private sector to protect the physical and cyber-based systems that are essential to the operations of the economy and the government.
These authorities apply to the Internet because it is a core communications infrastructure supporting the information technology and telecommunications sectors. However, they do not specifically address roles and responsibilities in the event of an Internet disruption.

Regarding federal disaster response, the Defense Production Act and the Stafford Act provide authority to federal agencies to plan for and respond to incidents of national significance, such as disasters and terrorist attacks. Specifically, the Defense Production Act authorizes the President to ensure the timely availability of products, materials, and services needed to meet the requirements of a national emergency. It is applicable to critical infrastructure protection and restoration but has never been used for Internet recovery. The Stafford Act authorizes federal assistance to states, local governments, nonprofit entities, and individuals in the event of a major disaster or emergency. However, the act does not authorize assistance to for-profit companies—such as those that own and operate core Internet components.

Other legislation and regulations, including the Communications Act of 1934 and the NCS authorities, govern the telecommunications infrastructure and help to ensure communications during national emergencies. For example, the NCS authorities establish guidance for operationally coordinating with industry to protect and restore key national security and emergency preparedness communications services. These authorities grant the President certain emergency powers regarding telecommunications, including the authority to require any carrier subject to the Communications Act of 1934 to grant preference or priority to essential communications.
The President may also, in the event of war or national emergency, suspend regulations governing wire and radio transmissions and authorize the use or control of any such facility or station and its apparatus and equipment by any department of the government. Although these authorities remain in force in the Code of Federal Regulations, they have seldom been used, and never for Internet recovery. Thus, it is not clear how effective they would be if used for this purpose. In commenting on the statutory authority for Internet reconstitution following a disruption, DHS agreed that this authority is lacking, noting that the government’s roles and authorities for assisting in Internet reconstitution are not fully defined.

DHS has begun a variety of initiatives to fulfill its responsibility to develop an integrated public/private plan for Internet recovery, but these efforts are not complete or comprehensive. Specifically, DHS has developed high-level plans for infrastructure protection and national disaster response, but the components of these plans that address the Internet infrastructure are not complete. In addition, DHS has started a variety of initiatives to improve the nation’s ability to recover from Internet disruptions, including working groups to facilitate coordination and exercises in which government and private industry practice responding to cyber events. While these activities are promising, some initiatives are not complete, others lack time lines and priorities, and still others lack effective mechanisms for incorporating lessons learned. In addition, the relationship between these initiatives is not evident. As a result, the nation is not prepared to effectively coordinate public/private plans for recovering from a major Internet disruption.

DHS has two key documents that guide its infrastructure protection and recovery efforts, but components of these plans dealing with Internet recovery are not complete.
The National Response Plan is DHS’s overarching framework for responding to domestic incidents. It contains two components that address issues related to telecommunications and the Internet, Emergency Support Function 2 and the Cyber Incident Annex. These components, however, are not complete; Emergency Support Function 2 does not directly address Internet recovery, and the annex does not reflect the National Cyber Response Coordination Group’s current operating procedures. The other key document, the National Infrastructure Protection Plan, consists of both a base plan and sector-specific plans. The base plan, which was recently released, describes the importance of cybersecurity and networks such as the Internet to critical infrastructure protection and includes an appendix that provides information on cybersecurity responsibilities. The appendix restates DHS’s responsibility to develop plans to recover Internet functions. However, the base plan is at a high level and the sector-specific plans that would address the Internet in more detail are not scheduled for release until December 2006. Several representatives of private-sector firms supporting the Internet infrastructure expressed concerns about both plans, noting that they would be difficult to execute in times of crisis. Other representatives were uneasy about the government developing recovery plans, because they were not confident of the government’s ability to successfully execute the plans. DHS officials acknowledged that it will be important to obtain input from private-sector organizations as they refine these plans and initiate more detailed public/private planning. Both the National Response Plan and National Infrastructure Protection Plan are designed to be supplemented by more specific plans and activities. DHS has numerous initiatives under way to better define its ability to assist in responding to major Internet disruptions. 
While these activities are promising, some initiatives are incomplete, others lack time lines and priorities, and still others lack an effective mechanism for incorporating lessons learned. DHS plans to revise the role and mission of the National Communications System (NCS) to reflect the convergence of voice and data communications, but this effort is not yet complete. A presidential advisory committee on telecommunications established two task forces that recommended changes to NCS’s role, mission, and functions to reflect this convergence, but DHS has not yet developed plans to address these recommendations. As a primary entity responsible for coordinating governmentwide responses to cyber incidents—such as major Internet disruptions—DHS’s National Cyber Response Coordination Group is working to define its roles and responsibilities, but much remains to be done. DHS officials acknowledge that the trigger to activate this group is imprecise and will need to be clarified. Because key activities to define roles, responsibilities, capabilities, and the appropriate triggers for government involvement are still under way, the group is at risk of not being able to act quickly and definitively during a major Internet disruption. Since most of the Internet is owned and operated by the private sector, NCSD and NCS established the Internet Disruption Working Group to work with the private sector to establish priorities and develop action plans to prevent major disruptions of the Internet and to identify recovery measures in the event of a major disruption. According to DHS officials who organized the group, it held its first forum, in November 2005, to begin to identify real versus perceived threats to the Internet, refine the definition of an Internet disruption, determine the scope of a planned analysis of disruptions, and identify near-term protective measures. 
DHS officials stated that they had identified a number of potential future plans; however, agency officials have not yet finalized plans, resources, or milestones for these efforts. US-CERT officials formed the North American Incident Response Group, which includes both public and private-sector network operators that would be the first to recognize and respond to cyber disruptions. In September 2005, US-CERT officials conducted regional workshops with group members to share information on structure, programs, and incident response and to seek ways for the government and industry to work together operationally. While the outreach efforts of the North American Incident Response Group are promising, DHS has only just begun developing plans and activities to address the concerns of private-sector stakeholders. Over the last few years, DHS has conducted several broad intergovernmental exercises to test regional responses to significant incidents that could affect the critical infrastructure. More recently, in February 2006, DHS conducted an exercise called Cyber Storm, which was focused primarily on testing responses to a cyber-related incident of national significance. Exercises that include Internet disruptions can help to identify issues and interdependencies that need to be addressed. However, DHS has not yet identified planned activities, milestones, or which group should be responsible for incorporating lessons learned from the regional and Cyber Storm exercises into its plans and initiatives. While DHS has various initiatives under way, the relationships and interdependencies between these various efforts are not evident. For example, the National Cyber Response Coordination Group, the Internet Disruption Working Group, and the North American Incident Response Group are all meeting to discuss ways to address Internet recovery, but the interdependencies between the groups have not been clearly established. 
Without a thorough understanding of the interrelationships between its various initiatives, DHS risks pursuing redundant efforts and missing opportunities to build on related efforts. After our report was issued, a private-sector organization released a report that examined the nation’s preparedness for a major Internet disruption. The report stated that our nation is unprepared to reconstitute the Internet after a massive disruption. The report supported our findings that significant gaps exist in government response plans and that the responsibilities of the multiple organizations that would play a role in recovery are unclear. The report also made recommendations to complete and revise response plans such as the Cyber Incident Annex of the National Response Plan; better define recovery roles and responsibilities; and establish more effective oversight and strategic direction for Internet reconstitution. Although DHS has various initiatives under way to improve Internet recovery planning, it faces key challenges in developing a public/private plan for Internet recovery, including (1) innate characteristics of the Internet that make planning for and responding to a disruption difficult, (2) lack of consensus on DHS’s role and on when the department should get involved in responding to a disruption, (3) legal issues affecting DHS’s ability to provide assistance to restore Internet service, (4) reluctance of the private sector to share information on Internet disruptions with DHS, and (5) leadership and organizational uncertainties within DHS. Until it addresses these challenges, DHS will have difficulty achieving results in its role as focal point for recovering the Internet from a major disruption. First, the Internet’s diffuse structure, vulnerabilities in its basic protocols, and the lack of agreed-upon performance measures make planning for and responding to a disruption more difficult. 
The components of the Internet are not all governed by the same organization. In addition, the Internet is international. According to private-sector estimates, only about 20 percent of Internet users are in the United States. Also, there are no well-accepted standards for measuring and monitoring the Internet infrastructure’s availability and performance. Instead, individuals and organizations rate the Internet’s performance according to their own priorities. Second, there is no consensus about the role DHS should play in responding to a major Internet disruption or about the appropriate trigger for its involvement. The lack of clear legislative authority for Internet recovery efforts complicates the definition of this role. DHS officials acknowledged that their role in recovering from an Internet disruption needs further clarification because private industry owns and operates the vast majority of the Internet. The trigger for the National Response Plan, which is DHS’s overall framework for incident response, is poorly defined and has been found by both us and the White House to need revision. Since private-sector participation in DHS planning activities for Internet disruption is voluntary, agreement on the appropriate trigger for government involvement and the role of government in resolving an Internet disruption is essential to any plan’s success. Private-sector officials representing telecommunication backbone providers and Internet service providers were also unclear about the types of assistance DHS could provide in responding to an incident and about the value of such assistance. There was no consensus on this issue. 
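The observation above, that absent well-accepted standards, individuals and organizations rate the Internet's availability and performance according to their own priorities, can be illustrated with a minimal sketch. The endpoint names and weights below are purely hypothetical examples, not drawn from any DHS or GAO measurement scheme:

```python
# Minimal sketch: two organizations probing the same set of services can
# reach different availability scores because each weights the services
# by its own priorities. All names and weights here are hypothetical.

def availability_score(results, weights):
    """Weighted availability: each service counts in proportion to the
    weight (priority) the organization assigns it."""
    total = sum(weights.values())
    up = sum(weights[name] for name, ok in results.items() if ok)
    return up / total

# Hypothetical probe results: mail is unreachable, dns and web are up.
results = {"dns": True, "mail": False, "web": True}

# Two organizations, two sets of priorities for the same services.
org_a = {"dns": 5, "mail": 2, "web": 3}   # cares most about dns
org_b = {"dns": 1, "mail": 8, "web": 1}   # cares most about mail

print(availability_score(results, org_a))  # 0.8
print(availability_score(results, org_b))  # 0.2
```

The same outage thus looks like a minor degradation to one organization and a major disruption to another, which is part of why agreeing on a trigger for government involvement is difficult.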
Many private-sector officials stated that the government did not have a direct recovery role, while others identified a variety of potential roles, including
● providing information on specific threats;
● providing security and disaster relief support during a crisis;
● funding backup communication infrastructures;
● driving improved Internet security through requirements for the government’s own procurement;
● serving as a focal point with state and local governments to establish standard credentials to allow Internet and telecommunications companies access to areas that have been restricted or closed in a crisis;
● providing logistical assistance, such as fuel, power, and security, to Internet infrastructure operators;
● focusing on smaller-scale exercises targeted at specific Internet functions;
● limiting the initial focus for Internet recovery planning to key national security and emergency preparedness functions, such as public health and safety; and
● establishing a system for prioritizing the recovery of Internet service, similar to the existing Telecommunications Service Priority Program.
A third challenge to planning for recovery is that there are key legal issues affecting DHS’s ability to provide assistance to help restore Internet service. As noted earlier, key legislation and regulations guiding critical infrastructure protection, disaster recovery, and the telecommunications infrastructure do not provide specific authorities for Internet recovery. As a result, there is no clear legislative guidance on which organization would be responsible in the case of a major Internet disruption. In addition, the Stafford Act, which authorizes the government to provide federal assistance to states, local governments, nonprofit entities, and individuals in the event of a major disaster or emergency, does not authorize assistance to for-profit corporations. 
Several representatives of telecommunications companies reported that they had requested federal assistance from DHS during Hurricane Katrina. Specifically, they requested food, water, and security for the teams they were sending in to restore the communications infrastructure and fuel to power their generators. DHS responded that it could not fulfill these requests, noting that the Stafford Act did not extend to for-profit companies. A fourth challenge is that a large percentage of the nation’s critical infrastructure—including the Internet—is owned and operated by the private sector, meaning that public/private partnerships are crucial for successful critical infrastructure protection. Although certain policies direct DHS to work with the private sector to ensure infrastructure protection, DHS does not have the authority to direct Internet owners and operators in their recovery efforts. Instead, it must rely on the private sector to share information on incidents, disruptions, and recovery efforts. Many private-sector representatives questioned the value of providing information to DHS regarding planning for and recovery from Internet disruption. In addition, DHS has identified provisions of the Federal Advisory Committee Act as having a “chilling effect” on cooperation with the private sector. The uncertainties regarding the value and risks of cooperation with the government limit incentives for the private sector to cooperate in Internet recovery-planning efforts. Finally, DHS has lacked permanent leadership while developing its preliminary plans for Internet recovery and reconstitution. In addition, the organizations with roles in Internet recovery (NCS and NCSD) have overlapping responsibilities and may be reorganized once DHS selects permanent leadership. As a result, it is difficult for DHS to develop a clear set of organizational priorities and to coordinate between the various activities necessary for Internet recovery planning. 
In May 2005, we reported that multiple senior DHS cybersecurity officials had recently left the department. These officials included the NCSD Director, the Deputy Director responsible for Outreach and Awareness, the Director of the US-CERT Control Systems Security Center, the Under Secretary for the Information Analysis and Infrastructure Protection Directorate, and the Assistant Secretary responsible for the Information Protection Office. Additionally, DHS officials acknowledge that the current organizational structure has overlapping responsibilities for planning for and recovering from a major Internet disruption. In a July 2005 departmental reorganization, NCS and NCSD were placed in the Preparedness Directorate. NCS’s and NCSD’s responsibilities were to be placed under a new Assistant Secretary of Cyber Security and Telecommunications—in part to raise the visibility of cybersecurity issues in the department. However, almost a year later, this position remains vacant. While DHS stated that the lack of a permanent assistant secretary has not hampered its efforts in protecting critical infrastructure, several private-sector representatives stated that DHS’s lack of leadership in this area has limited progress. Specifically, these representatives stated that filling key leadership positions would enhance DHS’s visibility to the Internet industry and potentially improve its reputation. Given the importance of the Internet infrastructure to our nation’s communication and commerce, in our accompanying report we suggested matters for congressional consideration and made recommendations to DHS regarding improving efforts in planning for Internet recovery. Specifically, we suggested that Congress consider clarifying the legal framework that guides roles and responsibilities for Internet recovery in the event of a major disruption. 
This effort could include providing specific authorities for Internet recovery as well as examining potential roles for the federal government, such as providing access to disaster areas, prioritizing selected entities for service recovery, and using federal contracting mechanisms to encourage more secure technologies. This effort also could include examining the Stafford Act to determine whether there would be benefits in establishing specific authority for the government to provide for-profit companies—such as those that own or operate critical communications infrastructures—with limited assistance during a crisis. Additionally, to improve DHS’s ability to facilitate public/private efforts to recover the Internet in case of a major disruption, we recommended that the Secretary of the Department of Homeland Security implement the following nine actions:
● Establish dates for revising the National Response Plan—including efforts to update key components that are relevant to the Internet.
● Use the planned revisions to the National Response Plan and the National Infrastructure Protection Plan as a basis to draft public/private plans for Internet recovery and obtain input from key Internet infrastructure companies.
● Review the NCS and NCSD organizational structures and roles in light of the convergence of voice and data communications.
● Identify the relationships and interdependencies among the various Internet recovery-related activities currently under way in NCS and NCSD, including initiatives by US-CERT, the National Cyber Response Coordination Group, the Internet Disruption Working Group, the North American Incident Response Group, and the groups responsible for developing and implementing cyber recovery exercises.
● Establish time lines and priorities for key efforts identified by the Internet Disruption Working Group.
● Identify ways to incorporate lessons learned from actual incidents and during cyber exercises into recovery plans and procedures. 
● Work with private-sector stakeholders representing the Internet infrastructure to address challenges to effective Internet recovery by
  ● further defining needed government functions in responding to a major Internet disruption (this effort should include a careful consideration of the potential government functions identified by the private sector earlier in this testimony),
  ● defining a trigger for government involvement in responding to such a disruption, and
  ● documenting assumptions and developing approaches to deal with key challenges that are not within the government’s control.
In written comments, DHS agreed with our recommendations and stated that it recognizes the importance of the Internet for information infrastructures. DHS also provided information about initial actions it is taking to implement our recommendations. In summary, as a critical information infrastructure supporting our nation’s commerce and communications, the Internet is subject to disruption—from both intentional and unintentional incidents. While major incidents to date have had regional or local impacts, the Internet has not yet suffered a catastrophic failure. Should such a failure occur, however, existing legislation and regulations do not specifically address roles and responsibilities for Internet recovery. As the focal point for ensuring the security of cyberspace, DHS has initiated efforts to refine high-level disaster recovery plans; however, pertinent Internet components of these plans are not complete. While DHS has also undertaken several initiatives to improve Internet recovery planning, much remains to be done. Specifically, some initiatives lack clear timelines, lessons learned are not consistently being incorporated in recovery plans, and the relationships between the various initiatives are not clear. 
DHS faces numerous challenges in developing integrated public/private recovery plans—not the least of which is the fact that the government does not own or operate much of the Internet. In addition, there is no consensus among public and private stakeholders about the appropriate role of DHS and when it should get involved; legal issues limit the actions the government can take; the private sector is reluctant to share information on Internet performance with the government; and DHS is undergoing important organizational and leadership changes. As a result, the exact role of the government in helping to recover the Internet infrastructure following a major disruption remains unclear. To improve DHS’s ability to facilitate public/private efforts to recover the Internet in case of a major disruption, our report suggested that Congress consider clarifying the legal framework guiding Internet recovery. We also made recommendations to DHS to establish clear milestones for completing key plans, coordinate various Internet recovery-related activities, and address key challenges to Internet recovery planning. Effectively implementing these recommendations could greatly enhance our nation’s ability to recover from a major Internet disruption. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact us at (202) 512-9286 and at (202) 512-6412 or by e-mail at pownerd@gao.gov and rhodesk@gao.gov. Other key contributors to this testimony include Don R. Adams, Naba Barkakati, Scott Borre, Neil Doherty, Vijay D’Souza, Joshua A. Hammerstein, Bert Japikse, Joanne Landesman, Frank Maguire, Teresa M. Neven, and Colleen M. Phillips. (310829) This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the early 1990s, growth in the use of the Internet has revolutionized the way that our nation communicates and conducts business. While the Internet originated as a U.S. government-sponsored research project, the vast majority of its infrastructure is currently owned and operated by the private sector. Federal policy recognizes the need to prepare for debilitating Internet disruptions and tasks the Department of Homeland Security (DHS) with developing an integrated public/private plan for Internet recovery. GAO was asked to summarize its report--Internet Infrastructure: DHS Faces Challenges in Developing a Joint Public/Private Recovery Plan, GAO-06-672 (Washington, D.C.: June 16, 2006). This report (1) identifies examples of major disruptions to the Internet, (2) identifies the primary laws and regulations governing recovery of the Internet in the event of a major disruption, (3) evaluates DHS plans for facilitating recovery from Internet disruptions, and (4) assesses challenges to such efforts. A major disruption to the Internet could be caused by a physical incident (such as a natural disaster or an attack that affects key facilities), a cyber incident (such as a software malfunction or a malicious virus), or a combination of both physical and cyber incidents. Recent physical and cyber incidents, such as Hurricane Katrina, have caused localized or regional disruptions but have not caused a catastrophic Internet failure. Federal laws and regulations that address critical infrastructure protection, disaster recovery, and the telecommunications infrastructure provide broad guidance that applies to the Internet, but it is not clear how useful these authorities would be in helping to recover from a major Internet disruption. Specifically, key legislation on critical infrastructure protection does not address roles and responsibilities in the event of an Internet disruption. 
Other laws and regulations governing disaster response and emergency communications have never been used for Internet recovery. DHS has begun a variety of initiatives to fulfill its responsibility for developing an integrated public/private plan for Internet recovery, but these efforts are not complete or comprehensive. Specifically, DHS has developed high-level plans for infrastructure protection and incident response, but the components of these plans that address the Internet infrastructure are not complete. In addition, the department has started a variety of initiatives to improve the nation's ability to recover from Internet disruptions, including working groups to facilitate coordination and exercises in which government and private industry practice responding to cyber events. However, progress to date on these initiatives has been limited, and other initiatives lack time frames for completion. Also, the relationships among these initiatives are not evident. As a result, the government is not yet adequately prepared to effectively coordinate public/private plans for recovering from a major Internet disruption. Key challenges to establishing a plan for recovering from Internet disruptions include (1) innate characteristics of the Internet that make planning for and responding to disruptions difficult, (2) lack of consensus on DHS's role and when the department should get involved in responding to a disruption, (3) legal issues affecting DHS's ability to provide assistance to restore Internet service, (4) reluctance of many in the private sector to share information on Internet disruptions with DHS, and (5) leadership and organizational uncertainties within DHS. Until these challenges are addressed, DHS will have difficulty achieving results in its role as a focal point for helping the Internet to recover from a major disruption.
To meet the challenges of ongoing operations in Iraq and Afghanistan, DOD has taken steps to increase the availability of personnel and equipment for units deploying to Iraq and Afghanistan, particularly with regard to the Army and Marine Corps. Among other things, DOD has adjusted rotation goals and employed strategies such as retraining units to perform missions other than those they were designed to perform. It has also transferred equipment from nondeployed units and prepositioned stocks to support deployed units. The Army and Marine Corps have refocused training to prepare deploying units for counterinsurgency missions. DOD has also relied more on Navy and Air Force personnel and contractors to help perform tasks normally handled by Army or Marine Corps personnel. Using these measures, DOD has been able to continue to support ongoing operations, but not without consequences for readiness. In the short term, ground forces are limited in their ability to train for other missions and nondeployed forces are experiencing shortages of resources. The long-term implications of DOD’s actions, such as the impact of increasing deployment times on recruiting and retention, are unclear. For the past several years, DOD has continually rotated forces in and out of Iraq and Afghanistan to maintain required force levels. While DOD’s goals generally call for active component personnel to be deployed for 1 of every 3 years and reserve component personnel involuntarily mobilized 1 of every 6 years, many have been mobilized and deployed more frequently. Additionally, ongoing operations have created particularly high demand for certain ranks and occupational specialties. For example, officers and senior noncommissioned officers are in particularly high demand due to increased requirements within deployed headquarters organizations and new requirements for transition teams, which train Iraqi and Afghan forces. 
Several support force occupations such as engineering, civil affairs, transportation, and military police have also been in high demand. Since September 11, 2001, DOD has made a number of adjustments to its personnel policies, including those related to length of service obligations, length of deployments, frequency of reserve component mobilizations, and the use of volunteers. While these measures have helped to increase the availability of personnel in the short term, the long-term impacts of many of these adjustments are uncertain. For example, the Army has successively increased the length of deployments in Iraq—from 6 to 12 and eventually to 15 months. Also, the services have, at various times, used “stop-loss” policies, which prevent personnel from leaving the service, and DOD has made changes to reserve component mobilization policies. In the latter case, DOD modified its policy, which had previously limited the cumulative amount of time that reserve component servicemembers could be involuntarily called to active duty for the Global War on Terrorism. Under DOD’s new policy, which went into effect in January 2007, there are no cumulative limits on these involuntary mobilizations, but DOD has set goals to limit the mobilizations to 12 months and to have 5 years between these Global War on Terrorism involuntary mobilizations. DOD has also stated that in the short term it will not be able to meet its goal for 5 years between rotations. By making these adjustments, DOD has made additional personnel available for deployment, thus helping to meet short-term mission requirements in Iraq and Afghanistan. However, it is unclear whether longer deployments or more frequent involuntary mobilizations or other adjustments will affect recruiting and retention. In the near term, the Army and Marine Corps have taken a number of steps to meet operational requirements and mitigate the stress on their forces. 
Such actions include deploying units from branches with lower operational tempos in place of units from branches with higher operational tempos after conducting some additional training for the units. For example, after retraining units, the Army has used active component field artillery units for convoy escort, security, and gun truck missions and has used active and reserve component quartermaster units to provide long-haul bulk fuel support in Iraq. As we have reported, ongoing military operations in Iraq and Afghanistan combined with harsh combat and environmental conditions are inflicting heavy wear and tear on equipment items that, in some cases, are more than 20 years old. In response to the sustained operations in Iraq and Afghanistan, the Army and Marine Corps developed programs to reset (repair or replace) equipment to return damaged equipment to combat-ready status for current and future operations. We also have reported that while the Army and Marine Corps continue to meet mission requirements and report high readiness rates for deployed units, nondeployed units have reported a decrease in readiness rates, in part due to equipment shortages. Some units preparing for deployment have reported shortages of equipment on hand as well as specific equipment item shortfalls that affect their ability to carry out their missions. The Army Chief of Staff has testified that the Army has had to take equipment from nondeployed units in order to provide it to deployed units. The Marine Corps has also made trade-offs between preparing units to deploy to Iraq and Afghanistan and other unit training. In addition, the Army National Guard and Army Reserve have transferred large quantities of equipment to deploying units, which has contributed to equipment shortages in nondeployed units. As a result, state officials have expressed concerns about their National Guard’s equipment that would be used for domestic requirements. 
To meet current mission requirements, the services, especially the Army and the Marine Corps, have focused unit training on counterinsurgency tasks. Given limitations in training time and the current focus on preparing for upcoming, scheduled deployments, nondeployed troops are spending less training time on their core tasks than in the past. Our analysis of Army unit training plans and discussions with training officials indicate that unit commanders’ training plans have focused solely on preparing for their unit’s assigned mission instead of moving progressively from preparing for core missions to training for full-spectrum operations. Since February 2004, all combat training rotations conducted at the Army’s National Training Center have been mission rehearsal exercises to prepare units for deployments, primarily to Iraq and Afghanistan. As a result, units are not necessarily developing and maintaining the skills for a fuller range of missions. For instance, units do not receive full-spectrum operations training such as combined arms maneuver and high-intensity combat. In addition, the Army has changed the location of some training. According to Army officials, the National Training Center has provided home station mission rehearsal exercises at three Army installations, but these exercises were less robust and on a smaller scale than those conducted at the center. Army leaders have noted that the limited time between deployments has prevented their units from completing the full-spectrum training that the units were designed and organized to perform. The Chief of Staff of the Army recently stated that units need 18 months between deployments to be able to conduct their entire full-spectrum mission training. 
While the Chairman of the Joint Chiefs of Staff expressed concerns about the impact of the current operational tempo on full-spectrum training during his testimony last week, he also noted that the military is capable of responding to all threats to our vital national interests. The Army’s decision to remove equipment from its prepositioned ships affects its ability to fill equipment shortages in nondeployed units and could affect DOD’s ability to respond if new demands were to push requirements above current levels. The Army’s decision to accelerate the creation of two additional brigade combat teams by removing equipment from prepositioned ships in December 2006 helps the Army move toward its deployment rotation goals. However, the lack of prepositioned equipment means that deploying units will either have to deploy with their own equipment or wait for other equipment to be assembled and transported to their deployment location. Either of these options could slow deployment response times. The most recent DOD end-to-end mobility analysis found that the mobility system could continue to sustain the current (post-9/11) tempo of operations with acceptable risk. The study found that, when fully mobilized and augmented by the Civil Reserve Air Fleet and Voluntary Intermodal Sealift Agreement ships, the United States has sufficient capability to support national objectives during a peak demand period with acceptable risk. The study highlighted the need for DOD to continue actions to reset and reconstitute prepositioned assets. However, some prepositioned stocks have been depleted. Because portions of the Army’s prepositioned equipment are no longer available, transportation requirements and risk levels may rise, which could lengthen timelines for delivery of personnel and equipment. 
Shortly after September 11, 2001, the Army’s pace of operations was relatively low, and it was generally able to meet combatant commander requirements with its cadre of active duty and reserve component personnel. For example, in the aftermath of September 11, 2001, the President, through the Secretary of Defense and the state governors, used Army National Guard forces to fill security roles both at Air Force bases and domestic civilian airports. Today, with the Army no longer able to meet the deployment rotation goals for its active and National Guard and Reserve forces due to the pace of overseas operations, DOD is increasingly turning to the Navy and the Air Force to help meet requirements for skills typically performed by ground forces. The Navy and Air Force are filling many of these traditional Army ground force requirements with personnel who possess similar skills to the Army personnel they are replacing. According to Air Force and Navy testimony before this committee in July 2007, some examples of the personnel with similar skills included engineers, security forces, chaplains, and public affairs, intelligence, medical, communications, logistics, and explosive ordnance disposal personnel. The Navy and Air Force are also contributing personnel to meet emerging requirements for transition teams to train Iraqi and Afghan forces. Regardless of whether they are filling new requirements or just operating in a different environment with familiar sets of skills, Navy and Air Force personnel undergo additional training prior to deploying for these nontraditional assignments. 
While we have not verified the numbers, the July 2007 testimonies indicated that Air Force and Navy deployments in support of nontraditional missions had grown significantly since 2004. At the time of the testimonies, the Air Force reported that it had approximately 6,000 personnel filling nontraditional positions in the Central Command area of responsibility, while the Navy reported that over 10,000 of its augmentees were making significant contributions to the Global War on Terror. Finally, the Air Force testimony noted that many personnel who deployed for these nontraditional missions came from stressed career fields—security forces, transportation, air traffic control, civil engineering, and explosive ordnance disposal—that were not meeting DOD’s active force goal of limiting deployments to 1 in every 3 years. The U.S. military has long used contractors to provide supplies and services to deployed U.S. forces; however, the scale of contractor support in Iraq is far greater than in previous military operations, such as Operation Desert Shield/Desert Storm and operations in the Balkans. Moreover, DOD’s reliance on contractors continues to grow. In December 2006, the Army estimated that almost 60,000 contractor employees supported ongoing military operations in Southwest Asia. In October 2007, DOD estimated the number of DOD contractors in Iraq to be about 129,000. By way of contrast, an estimated 9,200 contractor personnel supported military operations in the 1991 Gulf War. In Iraq, contractors provide deployed U.S. forces with an almost endless array of services and support, including communication services; interpreters who accompany military patrols; base operations support (e.g., food and housing); maintenance services for both weapon systems and tactical and nontactical vehicles; intelligence analysis; warehouse and supply operations; and security services to protect installations, convoys, and DOD personnel. 
Factors that have contributed to this increase include reductions in the size of the military, an increase in the number of operations and missions undertaken, a lack of organic military capabilities, and DOD’s use of increasingly sophisticated weapons systems. DOD has long recognized that contractors are necessary to successfully meet current and future requirements. In 1990, DOD issued guidance that requires DOD components to determine which contracts provide essential services and gives commanders three options if they cannot obtain reasonable assurance of continuation of essential services by a contractor: they can obtain military, DOD civilian, or host-nation personnel to perform the services; they can prepare a contingency plan for obtaining essential services; or they can accept the risk of a disruption of services during a crisis. While our 2003 report found that DOD had not taken steps to implement the 1990 guidance, DOD officials informed us that DOD has awarded a contract to deploy planners to the combatant commands. According to the DOD officials, the planners will focus on the contractor support portions of the operational plans, including requirements for contractor services. In addition, the planners will streamline the process through which the combatant commander can request requirements definition, contingency contracting, or program management support. DOD officials report that, as of February 7, 2008, eight planners have been deployed. Without firm contingency plans in place or a clear understanding of the potential consequences of not having an essential service available, the risks associated with meeting future requirements increase. Given the change in the security environment since September 11, 2001, and the related increases in demands on our military forces, as well as the high level of commitment to ongoing operations, rebuilding the readiness of U.S. ground forces is a long-term prospect. 
In addition, the department faces competing demands for resources given other broad-based initiatives to grow, modernize, and transform its forces, and therefore will need to carefully validate needs and assess trade-offs. While there are no quick fixes to these issues, we believe the department can take measures that will advance progress in both the short and long terms. Over the past several years, we have reported and testified on a range of issues related to military readiness and made multiple recommendations aimed at enhancing DOD’s ability to manage and improve military readiness. DOD faces significant challenges in rebuilding readiness while it remains engaged in ongoing operations. At the same time, it has undertaken initiatives to increase the size of U.S. ground forces and to modernize and transform force capabilities, particularly in the Army. Although the cost to rebuild the U.S. ground forces is uncertain, it will likely require billions of dollars and take years to complete. For example, once operations end, the Army has estimated it will take $12 billion to $13 billion a year for at least 2 years to repair, replace, and rebuild its equipment used for operations in Iraq. Similarly, the Marine Corps has estimated it will cost about $2 billion to $3 billion to reset its equipment. Furthermore, current plans to grow, modernize, and transform the force will require hundreds of billions of dollars for the foreseeable future. Although the Army estimated in 2004 that it could largely equip and staff modular units by spending $52.5 billion through fiscal year 2011, the Army now believes it will require additional funding through fiscal year 2017 to fully equip its units. In addition, we found that the Army’s $70 billion funding plan to increase its end strength by over 74,200 personnel lacks transparency and may be understated because some costs were excluded and some factors that could affect this funding plan are still evolving. 
We have also reported that the costs of the Army’s Future Combat System are likely to grow. While the Army has only slightly changed its cost estimate of $160.7 billion since last year, independent cost estimates put costs at between $203 billion and nearly $234 billion. While our testimony today is focused on the readiness of the Army and Marine Corps, we recognize that DOD is continuing to determine the requirements, size, and readiness of the Air Force and Navy and that Congress is engaged in that debate. The Air Force, for example, is balancing the requirements and funding for strategic and intratheater lift as well as its needs for aerial refueling aircraft, tactical aircraft, and a new bomber fleet. The Navy is also reviewing its requirements and plans to modernize its fleet. Meeting these requirements will involve both new acquisitions and upgrades to existing fleets, which will cost billions of dollars. A common theme in our work has been the need for DOD to take a more strategic approach to decision making that promotes transparency and ensures that programs and investments are based on sound plans with measurable goals, validated requirements, prioritized resource needs, and performance measures to gauge progress against the established goals. Given the magnitude of current operational commitments and the readiness concerns related to the ground forces, we believe decision makers need to take a strategic approach in assessing current conditions and determining how best to rebuild the readiness of the Army and Marine Corps. As a result, in July 2007, we recommended that DOD develop near-term plans for improving the readiness of its active and reserve component ground forces, and specify the number of ground force units it plans to maintain at specific levels of readiness as well as the time frames for achieving these goals. 
Because significant resources will be needed to provide the personnel, equipment, and training necessary to restore and maintain readiness, and because DOD is competing for resources in an increasingly fiscally constrained environment, we also recommended that the plans contain specific investment priorities, prioritized actions that the services believe are needed to achieve the plans’ readiness goals and time frames, and measures to gauge progress in improving force readiness. Such plans would help guide decision makers in weighing difficult trade-offs when determining funding needs and making resource decisions. We have also recommended that DOD and the services take specific actions in a number of areas I have discussed today. These recommendations are contained in the products listed at the end of my statement. In summary: The services need to collect and maintain comprehensive data on the various strategies they use to meet personnel and unit requirements for ongoing operations and determine the impact of these strategies on the nondeployed force. The Army needs to develop planning and funding estimates for staffing and equipping the modular force, as well as assess its modular force. The Army needs to provide Congress with transparent information on its plan to increase the size of the force, including data on the force structure to be created by this initiative, implementation timelines, cost estimates, and a funding plan. DOD needs to identify mission-essential services provided by contractors and include them in planning, and also develop doctrine to help the services manage contractors supporting deployed forces. The Army needs to revise and adjust its training strategy to include a plan to support full-spectrum training during extended operations, and clarify the capacity needed to support the modular force. DOD must develop a strategy and plans for managing near-term risks and management challenges related to its prepositioning programs. 
DOD must improve its methodology for analyzing mobility capabilities requirements, including the development of models and data, an explanation of the impact of limitations on study results, and metrics for determining capabilities. DOD agreed with some recommendations but has yet to fully implement them. For others, particularly when we recommended that DOD develop more robust plans linked to resources, DOD believed its current efforts were sufficient. We continue to believe such plans are needed. Given the challenges facing the department, we believe these actions will enhance DOD’s ability to validate requirements, develop plans and funding needs, identify investment priorities and trade-offs, and ultimately embark on a sustainable path to rebuild readiness and move forward with plans to modernize and transform force capabilities. In the absence of a strategic approach based on sound plans and measurable outcomes, neither Congress nor the department can be assured that it will have the information it needs to make informed investment decisions and to ensure that it is maximizing the use of taxpayer dollars in both the short and long terms. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee or Subcommittee may have. For questions regarding this testimony, please call Sharon L. Pickup at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Military Operations: Implementation of Existing Guidance and Other Actions Needed to Improve DOD’s Oversight and Management of Contractors in Future Operations. GAO-08-436T. Washington, D.C.: January 24, 2008. Force Structure: Need for Greater Transparency for the Army’s Grow the Force Initiative Funding Plan. GAO-08-354R. Washington, D.C.: January 18, 2008. 
Force Structure: Better Management Controls Are Needed to Oversee the Army’s Modular Force and Expansion Initiatives and Improve Accountability for Results. GAO-08-145. Washington, D.C.: December 14, 2007. Defense Logistics: Army and Marine Corps Cannot Be Assured That Equipment Reset Strategies Will Sustain Equipment Availability While Meeting Ongoing Operational Requirements. GAO-07-814. Washington, D.C.: September 19, 2007. Military Training: Actions Needed to More Fully Develop the Army’s Strategy for Training Modular Brigades and Address Implementation Challenges. GAO-07-936. Washington, D.C.: August 6, 2007. Military Personnel: DOD Lacks Reliable Personnel Tempo Data and Needs Quality Controls to Improve Data Accuracy. GAO-07-780. Washington, D.C.: July 17, 2007. Defense Acquisitions: Key Decisions to Be Made on Future Combat System. GAO-07-376. Washington, D.C.: March 15, 2007. Defense Logistics: Improved Oversight and Increased Coordination Needed to Ensure Viability of the Army’s Prepositioning Strategy. GAO-07-144. Washington, D.C.: February 15, 2007. Defense Logistics: Preliminary Observations on the Army’s Implementation of Its Equipment Reset Strategies. GAO-07-439T. Washington, D.C.: January 31, 2007. Reserve Forces: Actions Needed to Identify National Guard Domestic Equipment Requirements and Readiness. GAO-07-60. Washington, D.C.: January 26, 2007. Securing, Stabilizing, and Rebuilding Iraq: Key Issues for Congressional Oversight. GAO-07-308SP. Washington, D.C.: January 9, 2007. Defense Transportation: Study Limitations Raise Questions about the Adequacy and Completeness of the Mobility Capabilities Study and Report. GAO-06-938. Washington, D.C.: September 20, 2006. Defense Logistics: Preliminary Observations on Equipment Reset Challenges and Issues for the Army and Marine Corps. GAO-06-604T. Washington, D.C.: March 30, 2006. 
Defense Logistics: Better Management and Oversight of Prepositioning Programs Needed to Reduce Risk and Improve Future Programs. GAO-05-427. Washington, D.C.: September 6, 2005. Military Personnel: DOD Needs to Address Long-term Reserve Force Availability and Related Mobilization and Demobilization Issues. GAO-04-1031. Washington, D.C.: September 15, 2004. Military Personnel: DOD Actions Needed to Improve the Efficiency of Mobilizations for Reserve Forces. GAO-03-921. Washington, D.C.: August 21, 2003. Military Operations: Contractors Provide Vital Services to Deployed Forces but Are Not Adequately Addressed in DOD’s Plans. GAO-03-695. Washington, D.C.: June 24, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. military forces, and ground forces in particular, have operated at a high pace since the attacks of September 11, 2001, including in support of ongoing operations in Iraq and Afghanistan. Between 2001 and July 2007, approximately 931,000 U.S. Army and Marine Corps servicemembers deployed for overseas military operations, including about 312,000 National Guard or Reserve members. To support ongoing military operations and related activities, Congress has appropriated billions of dollars since 2001, and through September 2007, the Department of Defense (DOD) has reported obligating about $492.2 billion to cover these expenses, a large portion of which is related to readiness. In addition, DOD's annual appropriation, now totaling about $480 billion for fiscal year 2008, includes funds to cover readiness needs. GAO was asked to testify on (1) the readiness implications of DOD's efforts to support ongoing operations and (2) GAO's prior recommendations related to these issues, including specific actions that GAO believes would enhance DOD's ability to manage and improve readiness. This statement is based on reports and testimonies published from fiscal years 2003 through 2008. GAO's work was conducted in accordance with generally accepted government auditing standards. While DOD has overcome difficult challenges in maintaining a high pace of operations over the past 6 years and U.S. forces have gained considerable combat experience, our work has shown that extended operations in Iraq and elsewhere have had significant consequences for military readiness, particularly with regard to the Army and Marine Corps. To meet mission requirements specific to Iraq and Afghanistan, the department has taken steps to increase the availability of personnel and equipment for deploying units and to refocus their training on assigned missions. 
For example, to maintain force levels in theater, DOD has increased the length of deployments and frequency of mobilizations, but it is unclear whether these adjustments will affect recruiting and retention. The Army and Marine Corps have also transferred equipment from nondeploying units and prepositioned stocks to support deploying units, affecting the availability of items for nondeployed units to meet other demands. In addition, they have refocused training such that units train extensively for counterinsurgency missions, with little time available to train for a fuller range of missions. Finally, DOD has adopted strategies, such as relying more on Navy and Air Force personnel and contractors to perform some tasks formerly handled by Army or Marine Corps personnel. If current operations continue at the present level of intensity, DOD could face difficulty in balancing these commitments with the need to rebuild and maintain readiness. Over the past several years, GAO has reported on a range of issues related to military readiness and made numerous recommendations to enhance DOD's ability to manage and improve readiness. Given the change in the security environment since September 11, 2001, and demands on U.S. military forces in Iraq and Afghanistan, rebuilding readiness will be a long-term and complex effort. However, GAO believes DOD can take measures that will advance progress in both the short and long terms. A common theme is the need for DOD to take a more strategic decision-making approach to ensure programs and investments are based on plans with measurable goals, validated requirements, prioritized resource needs, and performance measures to gauge progress. 
Overall, GAO recommended that DOD develop a near-term plan for improving the readiness of ground forces that, among other things, establishes specific goals for improving unit readiness, prioritizes actions needed to achieve those goals, and outlines an investment strategy to clearly link resource needs and funding requests. GAO also made recommendations in several specific readiness-related areas, including that DOD develop equipping strategies to target shortages of items required to equip units preparing for deployment, and DOD adjust its training strategies to include a plan to support full-spectrum training. DOD agreed with some recommendations, but has yet to fully implement them. For others, particularly when GAO recommended that DOD develop more robust plans linked to resources, DOD believed its current efforts were sufficient. GAO continues to believe such plans are needed.
The U.S. Social Security program’s projected long-term financing shortfall stems primarily from the fact that people are living longer and having fewer children. As a result, the number of workers paying into the system for each beneficiary is projected to decline. A similar demographic trend is occurring or will occur in all OECD countries. (See table 3 in app. II for demographic and other characteristics of OECD countries and Chile.) Although the number of workers for every elderly person (aged 65 and over) in the United States has been relatively stable over the past few decades, this ratio has already fallen substantially in other developed countries. The number of workers for every elderly person in the United States is projected to fall from 4.1 in 2005 to 2.9 in 2020 and to 2.2 in 2030. In nine of the OECD countries, this number has already fallen below the level projected for the United States in 2020. These decreases in the projected number of workers available to support each retiree could have significant effects on countries’ economies, particularly during the period from 2010 to 2030. They may slow growth in economies and standards of living and increase costs for aging-related government programs. Long-run demographic projections are imprecise, however, due to uncertainty about future changes in longevity, for example. Although social security programs in all OECD countries and Chile provide benefits for qualified elderly people, survivors, and the disabled, the programs differ in many respects across the countries. In some countries, “social security” refers to a wide range of social insurance programs, including health care, long-term care, workers’ compensation, unemployment insurance, and so forth. To generalize across countries, we use “national pensions” to refer to mandatory countrywide pension programs providing old-age pensions. We use “Social Security” to refer to the U.S. 
Old-Age, Survivors, and Disability Insurance Program, since that is how the program is commonly known. Although nearly all OECD countries use contributions from workers and employers designated to finance pension benefits, most also use general revenues as a funding source. Nearly all OECD countries, including the United States, make pension benefits dependent on an individual’s work history, while several also provide benefits to all qualified residents whether or not they have a work history. Countries’ pension systems may be financed differently, use different criteria for identifying qualified beneficiaries, and calculate benefits in different ways. Several OECD countries finance benefits to the disabled or survivors with worker or employer contributions designated for this purpose. Others use general revenue to finance these benefits or have a single fund that provides old-age pension benefits and benefits for the disabled and survivors. Some OECD countries provide a universal benefit of a specified amount each week or month. Some adjust benefits based on time spent raising children or pursuing education, as well as years spent working. Some national pension programs, identified as “defined benefit” programs, provide retirees a pension of a set amount per week or month, or an amount calculated based on factors specified by law, such as the number of years worked, the level of earnings or contributions, and age at retirement. Other national pension programs, identified as “defined contribution” programs, provide retirees income based on the accumulated value of contributions and investment earnings on those contributions. Many OECD countries have a pension system that includes a combination of pension programs rather than a single program, providing many retirees with more than one source of income. (See table 4 in app. II for additional information concerning countries’ national pension systems.) 
Voluntary occupational pension programs are common in many OECD member countries, though the aggregate accumulated value of these pension funds exceeds 25 percent of gross domestic product (GDP) in only seven countries, including the United States, Canada, and Denmark. These programs are sponsored by employers, trade associations, or trade unions and regulated by governments; in some cases, the pensions they provide are to some extent insured by governmental entities—counterparts to the Pension Benefit Guaranty Corporation in the United States. Tax laws in many countries encourage participation in these voluntary programs. Germany, for example, supplements its national pension system with voluntary individual “Riester” accounts, supported by subsidies as well as tax incentives. Our review, however, did not include these programs, except in the United Kingdom, where workers’ contributions to PAYG national pension programs are reduced if they choose to participate. Where these voluntary programs are prevalent, they can affect countries’ decisions about public pension reforms. Historically, developed countries have relied on some form of a PAYG national pension program. Over time, countries have used a variety of approaches to respond to demographic challenges and the ensuing increases in expenditures for these programs. In many cases, these approaches provide a basic or minimum benefit as well as a benefit based on the level of a worker’s earnings. Several countries are preparing to pay future benefits by either supplementing or replacing their PAYG programs. For example, some have set aside and invested current resources in a national pension reserve fund to partially prefund their PAYG program. Some have established fully funded individual accounts. These are not mutually exclusive types of reform. 
In fact, many countries have undertaken more than one of the following types of reform (see table 1 for the reforms OECD countries and Chile have undertaken):

Adjustments to existing pay-as-you-go systems. Typically, these are designed to create a more sustainable program by increasing contributions or decreasing benefits, or both, while preserving the basic structure of the system. Measures include phasing in higher retirement ages, equalizing retirement ages across genders, and increasing the earnings period over which initial benefits are calculated. Some countries have created notional defined contribution (NDC) accounts for each worker, which tie benefits more closely to each worker’s contributions and to factors such as life expectancy and the growth rate of the economy.

National pension reserve funds. These are set up to partially prefund PAYG national pension programs. Governments commit to make regular transfers to these investment funds from, for example, budgetary surpluses. To the extent that these funds contribute to national saving, they reduce the need for future borrowing or for large increases in contributions to pay scheduled benefits. Funds can be invested in a combination of government securities and domestic as well as foreign private sector securities. Because of differences in accounting practices, some countries report reserve funds as part of national budgets while others do not include them in federal figures.

Individual accounts. These are fully funded accounts that are administered either by employers, the government, or designated third parties and are owned by the individual. The level of retirement benefits depends largely on the amount of contributions made by, or on behalf of, an individual into the account during his or her working life, investment earnings, and the amount of fees individuals are required to pay. 
The countries that have adjusted their existing PAYG national pension programs demonstrate a broad range of approaches for both reducing benefits and increasing contributions in order to improve the programs’ financial sustainability. Their experiences also provide lessons about the potential effects of some adjustments on the distribution of benefits, including the maintenance of a safety net and incentives to work and save. They also emphasize the care required in implementing and administering reforms and in ensuring that the public understands the new provisions. To reconcile PAYG program revenue and expenses, nearly all the countries we studied have decreased benefits, and most have also increased contributions, often in part by increasing retirement ages. Generally, countries with national pension programs that are relatively financially sustainable, based on estimated changes in spending on old-age pensions, have undertaken a package of several far-reaching adjustments. Most of the countries we studied increased program revenue by raising contribution rates, increasing the range of earnings or kinds of earnings subject to contribution requirements, or increasing the retirement age. Most of these countries increased contribution rates for some or all workers. Canada, for example, increased contributions to its Canada Pension Plan from a total of 5.85 percent to 9.9 percent of wages, half paid by employers and half by employees. Several countries, including the United Kingdom, increased contributions by expanding the range of earnings subject to contribution requirements. Nearly all of the countries we studied decreased the promised level of benefits provided to future retirees, using a wide range of techniques. Some techniques reduce the level of initial benefits; others reduce the rate at which benefits increase during retirement or adjust benefits based on retirees’ financial means. Increased years of earnings. 
To reduce initial benefits, several countries increased the number of years of earnings they consider in calculating an average lifetime earnings level. France previously based its calculation on 10 years but increased this to 25 years for its basic public program. Austria is gradually increasing the number of years from 15 to 40.

Increased minimum years of contributions. Another approach is to increase the minimum number of years of creditable service required to receive a benefit. France increased the required number of years from 37.5 to 40. Belgium is increasing its minimum requirement for early retirement from 20 to 35 years.

Changed formula for calculating benefits. Another approach to decreasing the initial benefit is to change the formula for adjusting prior years' earnings. Countries with traditional PAYG programs all make some adjustment to the nominal amount of wages earned previously to reflect changes in prices or wages over the intervening years. Although most of the countries we studied use some kind of average wage index, others, including Belgium and France, have adopted the use of price indexes. The choice of a wage or price index can have quite different effects, depending on the rate at which wages increase in comparison with prices. The extent to which wages outpace prices varies over time and among countries.

Changed basis for determining year-to-year increases in benefits once retirement begins. In many of the countries we studied, the rate at which monthly retirement benefits increase from year to year during retirement is based on increases in prices, which generally rise more slowly than earnings. Others—including Denmark, Ireland, Luxembourg, and the Netherlands—use increases in earnings or a combination of wage and price indexes. Hungary, for example, changed from the use of a wage index to the Swiss method—an index weighted 50 percent on price changes and 50 percent on changes in earnings.
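The effect of the indexing choice described above can be illustrated with a simple calculation. The growth rates and the 20-year revaluation horizon below are hypothetical, chosen only to show how wage, price, and Swiss (50/50) indexation diverge when wages outpace prices:

```python
# Illustrative comparison of revaluing a past year's earnings under
# wage, price, and "Swiss" (50/50) indexation.
# All growth rates and the 20-year horizon are hypothetical.

nominal_earnings = 30_000   # earnings recorded 20 years before retirement
years = 20
wage_growth = 0.03          # assumed annual wage growth
price_growth = 0.02         # assumed annual price inflation

wage_indexed = nominal_earnings * (1 + wage_growth) ** years
price_indexed = nominal_earnings * (1 + price_growth) ** years

# The Swiss method weights wage and price changes 50/50.
swiss_rate = 0.5 * wage_growth + 0.5 * price_growth
swiss_indexed = nominal_earnings * (1 + swiss_rate) ** years

for label, value in [("wage", wage_indexed),
                     ("price", price_indexed),
                     ("Swiss 50/50", swiss_indexed)]:
    print(f"{label:>11} indexation: {value:,.0f}")
```

Under these assumptions, wage indexation produces the largest revalued earnings and price indexation the smallest, which is why switching from a wage to a price index, as Belgium and France did, reduces initial benefits whenever wages outpace prices.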
Implemented provisions that adjust benefits in response to economic and demographic changes. Such adjustments, which link benefits to factors such as economic growth, longevity, or the ratio of workers to retirees, may contribute to the financial sustainability of national pension systems. Finland and Germany, for example, have adopted adjustment mechanisms of this kind. In some countries, such as Italy and Sweden, this approach takes the form of a notional defined contribution program. Italian and Swedish workers have "notional" accounts in that both the incoming contributions and the investment earnings exist only on the books of the managing institution. At retirement, the accumulated notional capital in each account is converted to a stream of pension payments using a formula based on factors such as life expectancy at the time of retirement.

Most of the countries we studied undertook more than one of these types of reforms, as indicated in table 2. (See table 5 in appendix II for additional information concerning adjustments to PAYG programs.) Several countries, such as Sweden and the United Kingdom, have undertaken one or more of these adjustments to their PAYG programs and have achieved, or are on track to achieve, relative financial sustainability. Other countries, including France and Germany, may need to take additional action to finance future benefit commitments. Generally, the countries that have come closest to achieving sustainability are those that have undergone more than one type of national pension reform.

All of the countries we studied that reformed their PAYG pension programs by reducing projected benefits included provisions to help ensure adequate benefits for lower-income groups and put into place programs designed to ensure that all qualified retirees have a minimum level of income. Most did so by providing a means-tested program that provides more benefits to retirees with limited financial means.
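The notional defined contribution mechanism described earlier in this section can be sketched in simplified form. The contribution level, notional rate, and life-expectancy divisor below are hypothetical; actual NDC programs such as Sweden's use more elaborate annuity divisors:

```python
# Simplified sketch of a notional defined contribution (NDC) account.
# Contributions are credited on paper and grow at a "notional" rate
# (tied, for example, to wage or economic growth); at retirement the
# notional capital is divided by remaining life expectancy to set the
# annual pension. All figures here are hypothetical.

def notional_capital(annual_contribution, notional_rate, years):
    """Accumulate book-entry contributions at the notional interest rate."""
    capital = 0.0
    for _ in range(years):
        capital = (capital + annual_contribution) * (1 + notional_rate)
    return capital

def annual_pension(capital, life_expectancy_years):
    """Convert notional capital to a pension using life expectancy at retirement."""
    return capital / life_expectancy_years

capital = notional_capital(annual_contribution=4_000,
                           notional_rate=0.02,   # assumed growth-linked rate
                           years=40)
pension = annual_pension(capital, life_expectancy_years=20)
print(f"notional capital: {capital:,.0f}, annual pension: {pension:,.0f}")
```

Because the divisor reflects life expectancy at retirement, rising longevity automatically lowers the annual benefit for a given notional capital, which is how such programs adjust to demographic change.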
Two countries—Germany and Italy—provide retirees access to general social welfare programs that are available to people of all ages rather than operating separate safety-net programs for elderly people. Twelve countries use another approach to providing a safety net: a basic retirement benefit. The level of the benefit is either a given amount per month for all retirees or an amount based on years of contributions to the program (but not the level of earnings during those years). In Ireland, for example, workers who contribute to the program for a specified period receive a flat-rate pension equal to about 167 euros a week in 2004—approximately one-third of average earnings. According to the Social Security Administration (SSA), Chile set a minimum pension for those younger than age 70 at 62.7 percent of the minimum wage in 2004. The United Kingdom and Belgium give low-income workers credit equivalent to the minimum-level contribution even though their earnings were too low to require a contribution. Several countries give workers credit for years in which they were unemployed, pursued postsecondary education, or cared for dependents.

Establishing a safety net requires careful consideration of costs and of incentives to work and save. Costs can be high if a generous basic pension is provided to all eligible retirees regardless of their income. On the other hand, means-tested benefits can diminish incentives to work and save. The United Kingdom provides both a basic state pension and a means-tested pension benefit. Concern about the decline in the proportion of preretirement earnings provided by the basic state pension has led some to advocate making it more generous. Others argue that focusing safety net spending on those in need enables the government to alleviate pensioner poverty in a cost-effective manner.
Prior to 2003, retirees in the United Kingdom received a means-tested benefit that brought their income up to a guaranteed minimum retirement income level. This benefit left retirees with low to moderate incomes with no financial incentive to work or save, because additional income was offset by equal reductions in the means-tested benefit. To help remedy this, the United Kingdom introduced the savings credit, which provides a supplementary benefit equal to a portion of an individual's additional income within a range near the guaranteed retirement income level. This new benefit increases, but does not fully restore, the incentive to work and save, because a portion of the additional income is still lost through reductions in pension income. If, for example, a retiree with pre-benefit income of $700 a month increases this income to $800 a month, his or her total retirement income, including these means-tested benefits, would increase by $60, from $892 to $952.

The proportion of United Kingdom pensioners eligible for these means-tested benefits is expected to rise. The United Kingdom Pensions Commission projects that unless current pension rules are changed, almost 65 percent of retiree households will be eligible for these means-tested benefits by 2050, because increases in the Basic State Pension are linked to prices while increases in other components of the United Kingdom's pension system are linked to earnings.

The extent to which new provisions are implemented, administered, and explained to the public may affect the outcome of a reform. Although many adjustments to PAYG programs are not difficult to implement and administer, some more complex reforms, such as the development of a notional defined contribution program, can be challenging. Poland, for example, adopted NDC reform in 1999, but the development of a data system to track contributions has been problematic.
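The savings credit figures above imply a roughly 60 percent marginal retention rate near the guarantee level: each additional dollar of pre-benefit income raises total income by 60 cents, because 40 cents of means-tested benefit is withdrawn. The following is a stylized sketch fitted only to the two data points in the text ($700 yielding $892, and $800 yielding $952), not the actual benefit formula:

```python
# Stylized sketch of the United Kingdom savings credit taper, fitted to
# the two data points in the text: pre-benefit income of $700/month
# yields $892 in total, and $800 yields $952. The linear taper below
# illustrates the mechanism; it is not the actual benefit formula.

def total_retirement_income(pre_benefit_income):
    """Total income within the illustrated range near the guarantee level."""
    base_income, base_total = 700, 892
    retention_rate = 0.6   # 60 cents kept per extra dollar earned;
                           # 40 cents of means-tested benefit is withdrawn
    return base_total + retention_rate * (pre_benefit_income - base_income)

print(total_retirement_income(700))  # 892.0
print(total_retirement_income(800))  # 952.0
```

Before the savings credit, the effective withdrawal rate at these income levels was 100 percent: additional income was fully offset, leaving no gain from working or saving.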
As of early 2004, the system generated statements documenting contributions workers made during 2002, but there was no indication of what workers had contributed in earlier years or to previous pension programs. Without knowing how much they have in their notional defined contribution accounts, workers may have a difficult time planning for their retirement. Additionally, countries typically phase in certain changes, such as increases in the retirement age. Phasing helps give workers enough time to understand how the changes to the pension program will affect their retirement income.

To educate workers about how PAYG programs and PAYG reforms affect them, countries including Canada, Sweden, the United Kingdom, and the United States send workers periodic statements concerning the program, the record of their contributions to it, and the benefits they are projected to receive. To increase the likelihood that recipients will read and understand these statements, the United Kingdom tailors different messages to workers of different ages. Nonetheless, the United Kingdom has had limited success in efforts to educate workers about changes in provisions that will affect their retirement income. For example, a survey of women in the United Kingdom showed that only about 43 percent of women who will be affected by an increase in the retirement age knew the age that applied to them. A large proportion (70 percent) of younger women indicated that they expect to retire before their state pension eligibility age of 65.

Another measure often found in reform packages is the accumulation of reserves in national pension funds with the aim of partially prefunding PAYG pension programs. With public centralized prefunding, governments set aside resources in the current period to safeguard the financing of their PAYG pension programs in the future.
Typically invested in various combinations of bonds and equities, these reserve funds are in some cases meant to remain untouched for several years before being channeled into the public pension system, in particular to maintain adequate pension levels for the baby boom cohort. Pension reserve funds can contribute to a system's financial sustainability, depending on when they are created or reformed, as well as how they are invested and managed. Countries that took action early have had time to amass substantial reserves, reducing the risk that they will not meet their pension obligations. Effective management of reserve funds has also proved important, as a record of poor fund performance has led some countries to put reserve funds under the administration of a relatively independent manager with the mandate to maximize returns and minimize avoidable risk.

Establishing reserve funds ahead of demographic changes—well before the share of elderly people in the population increases substantially—makes it more likely that enough assets will accumulate to help meet future pension obligations. In countries such as Sweden and Denmark, which have long experience with partial prefunding of PAYG programs, substantial reserves have already built up. Combined with long-term policies aimed at ensuring sound public finances, raising employment rates, and adjusting pension program provisions, these resources are expected to make significant contributions to the long-term finances of public pension programs. For example, Denmark's reserve fund, set up in 1964, had assets equivalent to about 25 percent of GDP in 2000. Sweden's reserve fund, created in 1960, held assets equal to around 24 percent of GDP at the end of 2003. Other countries that have recently created pension reserve funds have a shorter period in which to accumulate reserves before population aging starts straining public finances.
In particular, the imminent retirement of the baby boom generation is likely to make it challenging to continue channeling a substantial amount of resources to these funds. France, for example, relies primarily on social security surpluses to finance the pension reserve fund it set up in 1999. Given its demographic trends, however, it may be unable to do so beyond the next few years. Similarly, Belgium and the Netherlands plan on maintaining budget surpluses, reducing public debt and the interest payments associated with it, and transferring these earmarked resources to their reserve funds. However, maintaining a surplus will require sustained budgetary discipline as a growing number of retirees begins putting pressure on public finances. Some countries have set specific starting dates for drawing down national funds in order to resist demands for their immediate use.

Though the Irish National Pensions Reserve Fund was established only in 2001, as of 2004 it had already amassed substantial assets, nearly 10 percent of GDP. Its resources are projected to support the financial sustainability of the pension program for two main reasons. First, Ireland enjoys relatively favorable demographics: its aging problem is expected to become severe later than those of other Western European countries, so it has in effect created its pension fund relatively early, with more time for returns to accumulate. Second, Ireland provides somewhat less generous public pensions to its beneficiaries than other OECD countries do, so its pension spending is relatively low.

Examples from several countries reveal that prefunding with national pension reserve funds is less likely to be effective in helping assure that national pension programs are financially sustainable if these funds are also used for purposes other than supporting the PAYG program. Some countries have used funds to pursue industrial, economic, or social objectives.
For example, Japan used its reserve fund to support infrastructure projects, provide housing and education loans, and subsidize small and medium-sized enterprises. In doing so, Japan compromised to some extent the principal goal of public prefunding, which is to save in advance and accumulate assets so as to continue providing adequate benefits to retirees while keeping workers' contribution rates stable. Japan has since implemented a series of reforms. The latest wave, which became effective in 2001, refocused the fund's objective on the interests of participants rather than those of the general public. Measures introduced include management improvements and more aggressive investment strategies aimed at maximizing returns.

Past experiences have also highlighted the need to mitigate certain risks that pension reserve funds face, in addition to the risks inherent in any pension fund investment. One kind of risk is that asset buildup in a fund may lead to competing pressures for tax cuts and spending increases, especially when the fund is integrated into the national budget. For instance, governments may view fund resources as a ready source of credit. As a result, they may be inclined to spend more than they would otherwise, potentially undermining the purpose of prefunding. For example, according to many observers, the United States' Social Security trust fund, which is included in the unified budget and invested solely in U.S. Treasury securities that cannot be bought or sold in the open market, may have facilitated larger federal budget deficits. Ireland sought to alleviate the risk that its reserve fund could raise government consumption by prohibiting investment of fund assets in Irish government bonds. Some economists argue in favor of similar limits on the share of domestic government bonds a fund portfolio can hold.
Additionally, pension reserve fund investments in private securities can have negative effects on corporate governance. If the government owns a significant percentage of individual companies' stock and as a result controls their corporate affairs, potential conflicts of interest may prevent it from upholding shareholders' best interests. Limiting the government's stock voting rights by investing national pension resources in broad index funds may provide a safeguard against this type of risk. Another risk is that interest groups may exert pressure to constrain fund managers' investment choices, potentially lowering returns. For example, Canada and Japan have requirements to invest a minimum share of their fund portfolios in domestic assets, restricting holdings of foreign assets in order to stimulate economic development at home. In contrast, Norway chose to invest its fund reserves almost exclusively in foreign assets. The funds of Ireland and New Zealand also have large shares of foreign investments. Investing a significant share of reserves in foreign assets may not be a realistic or viable option for large economies with mature financial markets, such as the United States. Funds in several countries have also faced pressure to adopt ethical rather than purely commercial investment criteria, with a possibly negative impact on returns.

In recent years, some countries have taken steps to help ensure that funds are managed to maximize returns and minimize avoidable risk. Canada, for example, has put its fund under the control of an Investment Board that has operated independently from the government since the late 1990s. Several countries, including New Zealand, have taken steps to provide regular reports and more complete disclosures concerning pension reserve funds, which may help achieve transparency in management and administration and contribute to public education and oversight.
(For additional information concerning national pension reserve programs, see fig. 1 and table 6 in app. II.)

Countries that have adopted individual account reforms—which may also help prefund future retirement income—offer lessons about financing the existing PAYG pension program as the accounts are established. To manage this transition period, these countries have expanded public debt, built up budget surpluses in advance of implementation, reduced or eliminated the PAYG program, or used some combination of these approaches. Another important consideration for countries that have individual account programs is how to balance achieving high rates of return with ensuring that individuals receive an adequate level of benefits. Measures such as limits on how the funds are invested and on the level of fees and charges may help to ensure that benefits will be adequate, but they should not be so restrictive that they unduly harm individuals or pension fund managers. In addition, administering individual accounts requires effective regulation and supervision of the financial industry to protect individuals from avoidable investment risks. Educating the public is also important as national pension systems become more complex.

The experiences of other countries demonstrate the importance of considering how individual accounts may affect the long-term and short-term financing of the national pension system and the economy as a whole. In the long term, individual accounts can contribute to sustainability by providing a mechanism to prefund retirement benefits that could be less subject to demographic booms and busts than a PAYG approach. Individual accounts prefund benefits in private accounts rather than government accounts. If governments are unable to save through national pension reserves, private accounts may facilitate prefunding that would not occur otherwise. If, however, such accounts are funded through borrowing, no such prefunding is achieved.
In the short term, countries adopting individual accounts face the common challenge of paying for both a new funded pension and an existing PAYG pension simultaneously. The cost of the transition from a PAYG program to individual accounts depends on whether the individual accounts redirect revenue from the existing PAYG program, the amount of revenue redirected, and how liabilities under the existing PAYG program are treated. The countries we studied vary in the amount of revenue diverted from their PAYG programs to fund their individual accounts, resulting in a range of transition costs.

Australia and Switzerland used new sources of funding to add individual accounts to their existing, relatively modest, national pension systems. Transition costs were not an issue, because no resources were diverted from paying current benefits. Nonetheless, new financing was needed both to fund the new program and to support the existing one. (For additional information concerning these "add-on" programs and other countries' individual account programs, see table 7 in app. II.)

Some countries diverted revenue from the existing PAYG program to the individual accounts, a "carve-out," resulting in shortfalls that reflect, in part, the portion of the PAYG program being replaced with individual accounts. For example, transition costs may be lower in countries such as Sweden, where the contribution to individual accounts is 2.5 percent of covered earnings, than in Poland or Hungary, which have contribution rates of 7.3 percent and 8 percent, respectively. In addition to the level of transition costs resulting from redirecting PAYG revenue, how a country manages these costs also affects the success of the reform. All of the countries we reviewed also made changes meant to help finance the transition to individual accounts, such as increasing contributions to or decreasing benefits from their PAYG programs.
In addition, Chile built a budget surplus in anticipation of major pension reform, and Sweden had large budget surpluses in place prior to establishing individual accounts. Some countries transferred funds from general budget revenues to help pay benefits to current and near-retirees, expanding public borrowing. Where they financed individual accounts through borrowing, these countries will not positively affect national saving until the debt is repaid, because contributions to individual accounts are offset by increased public debt. For example, Poland's debt is expected to exceed 60 percent of GDP in the next few years, in part because of its public borrowing to pay for the movement to individual accounts.

Countries sometimes had difficulty predicting their transition costs, particularly those that allowed workers to opt in or out of individual account programs. For example, more workers in the United Kingdom, Hungary, and Poland responded to incentives to contribute to individual accounts than originally anticipated, leaving the existing PAYG programs with less funding than planned. Hungary's short-run fiscal concerns resulted in a slower increase in contribution rates to individual accounts than originally planned.

Regardless of whether workers have a choice about participating, individual accounts may also affect the government's long-term costs. For example, if income from substitute accounts leaves particular individuals with less retirement income than if they had not participated, some may qualify for benefits from other means-tested programs. On the other hand, to the extent that the accounts increase retirement incomes, costs for such programs may fall. Under a voluntary approach, such effects could depend partly on the rate of participation.
The actual effect of countries' individual account programs on other government spending programs will not be clear for years to come, when cohorts of affected workers retire.

Countries adopting individual accounts as part of their national pension systems have had to make trade-offs between giving workers the opportunity to maximize expected returns in their accounts and helping assure that benefits will be adequate for all participants. Some countries set a guaranteed rate of return to reduce certain investment risks and help ensure adequacy of benefits. Guaranteed rates of return may be relative, that is, related to other funds' returns, as in Chile, or fixed, that is, a guaranteed percentage rate of return, as in Switzerland. In Chile, workers with individual accounts are guaranteed a minimum rate of return set at 2 percentage points below the average return for funds of the same type during a 3-year period. In Switzerland, account holders were assured a minimum rate of return of 2.25 percent in 2004. This type of guarantee may, however, result in limited investment diversification or conservative investment decisions, resulting in lower rates of return overall. In Chile, for example, the guaranteed return may have resulted in a "herding" effect, creating an incentive for fund managers to hold similar portfolios and reducing variation in returns. To help ensure that individuals receive at least a benefit based on the guaranteed rate of return, several countries require fund managers to maintain reserve funds that can be used to pay benefits at the guaranteed level. A number of these countries further provide that the government will pay benefits if all of the fund reserves are used.

Another measure to ensure that retirees will have at least a minimally adequate level of income is to provide some form of minimum guaranteed benefit. All countries with individual accounts that we reviewed provide such a benefit.
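The relative guarantee described above for Chile can be expressed as a simple floor relative to the peer average. The individual fund returns below are hypothetical; the text states only the 2-percentage-point rule:

```python
# Sketch of a relative rate-of-return guarantee of the kind the text
# attributes to Chile: each fund must deliver at least 2 percentage
# points below the average 3-year return of funds of the same type.
# The individual fund returns here are hypothetical.

def guaranteed_floor(peer_returns_pct):
    """Minimum acceptable 3-year return: peer average minus 2 points."""
    average = sum(peer_returns_pct) / len(peer_returns_pct)
    return average - 2.0

three_year_returns = {"Fund A": 8.5, "Fund B": 7.0, "Fund C": 4.2}
floor = guaranteed_floor(list(three_year_returns.values()))
print(f"guaranteed floor: {floor:.2f}%")
for name, r in three_year_returns.items():
    status = "meets guarantee" if r >= floor else "must draw on reserves"
    print(f"{name}: {r:.1f}% -> {status}")
```

Because every fund is judged against the peer average rather than a fixed benchmark, managers have an incentive to hold portfolios similar to their peers', which is the "herding" effect noted above; a fund that falls below the floor must make up the difference, for example from the reserves the text describes.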
Such a minimum benefit can be increasingly important as individuals assume risks with the investment of funds in their individual accounts. Some experts believe that a minimum pension guarantee could encourage investors to select riskier investments or spend their assets more quickly. For example, in countries with a large flat-rate pension, individuals may make risky investment decisions because they can rely on the guarantee if their risk taking brings poor results. In countries where additional benefits are added on to the individual account payment to meet a minimum standard ("top-up" benefits), individuals may minimize their voluntary contributions in order to receive a higher benefit from the government. There is some belief that this may occur in Chile, where low-income workers might try to stop making contributions after meeting the contribution-year requirement. Individuals in countries with a means-tested benefit may spend down their retirement assets quickly to qualify for the benefit. This has occurred in Australia, and as a result, that country plans to increase the age at which individuals can access their individual account funds from 55 to 60 between 2015 and 2025. In any of these cases, the government could incur increased costs because it ensures that individuals have at least a certain level of income. The financial risk to the government will be greater in countries that offer a larger guarantee. However, the protection of individuals against poverty could also be greater in these countries.

Beyond providing a minimum pension guarantee, countries have taken additional measures to help ensure an adequate retirement income. To prevent fees from eroding small account balances, some of these countries exclude low-income workers from participation requirements in the individual account program. Another approach to protecting low-income workers occurs in Mexico, where the federal government provides a flat-rate contribution on behalf of workers.
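The concern that fees can erode small account balances, noted above, is straightforward to quantify. The contribution level, gross return, and fee rates below are hypothetical:

```python
# Illustration of how annual fees erode an individual account balance.
# A fixed per-account fee weighs proportionally more on small accounts,
# which is one reason some countries exempt low-income workers from
# participation requirements. All figures are hypothetical.

def final_balance(annual_contribution, years, gross_return,
                  pct_fee=0.0, flat_fee=0.0):
    """Accumulate contributions net of a percentage fee and a flat fee."""
    balance = 0.0
    for _ in range(years):
        balance += annual_contribution
        balance *= 1 + gross_return - pct_fee   # return net of asset-based fee
        balance -= flat_fee                      # fixed annual account charge
    return balance

# A small account: a fixed $50 annual fee consumes a large share of gains.
small_no_fee = final_balance(300, 30, 0.05)
small_fees = final_balance(300, 30, 0.05, pct_fee=0.01, flat_fee=50)
print(f"small account, no fees:   {small_no_fee:,.0f}")
print(f"small account, with fees: {small_fees:,.0f}")
print(f"share lost to fees: {1 - small_fees / small_no_fee:.0%}")
```

Under these assumptions, a 1 percent asset-based fee plus a modest fixed charge consumes roughly a quarter to a third of the small account's final balance over 30 years, which is why fee ceilings and exemptions for small accounts appear in several of the reforms discussed below.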
It is important to consider the payout options available from individual accounts, as these can also affect income adequacy throughout retirement. For example, an annuity payout option can help ensure that individuals will not outlive their assets in retirement. However, purchasing an annuity can leave some people worse off if, for example, premiums are high or inflation erodes the purchasing power of benefits. Several countries also allow phased withdrawals, sometimes with restrictions, helping to mitigate the risk of individuals outliving their assets and becoming dependent on the government's basic or safety net pension. Some countries offer a lump-sum payment under certain circumstances. For example, Chile and Mexico allow lump sums for persons who have account balances above a certain amount. Australia allows a full lump-sum payout for all retirees age 55 and above (age 60 and above by 2025).

Countries also protect individuals by regulating how the funds in their accounts can be invested. Initially, several countries offered individuals choices among a limited number of investment funds and often restricted the portion of assets that could be invested in certain products, such as publicly traded equities, private equities, and foreign securities. Later, the options were expanded in most countries to allow more investment diversification, though they still include some restrictions. Additionally, as investment options have expanded, some countries have incorporated other protections. For example, Chile and Mexico have introduced investment options that take into account individuals' ages and risk tolerance. Chile requires each pension administrator to offer four types of funds with varying degrees of risk, including a higher-risk fund and a fund invested in fixed-rate instruments. Pension administrators may offer a fifth, higher-risk fund, available to workers more than 10 years from the age of retirement.
Mexico recently allowed pension fund managers to offer more than one investment fund and included options intended to provide workers with an adequate rate of return at acceptable risk. Sweden limits individuals to selecting at most five funds from among all the qualified investment funds that choose to participate—over 650 funds as of 2004. Some experts have suggested that having such a large number of funds available may discourage active choice. About two-thirds of participants made an active investment choice in 2000. Since 2001, however, about 85 percent of new entrants have left their money in the default fund—a separate fund for those who do not wish to make a fund choice. This default option can be an important safeguard. However, depending on who makes the default decisions, it may be open to some of the same issues as pension reserve funds, such as political pressure to adopt certain investment criteria in order to meet other social objectives.

To further protect individuals, most of the countries with individual accounts place some sort of limit on the fees that fund managers can charge. Nonetheless, it is unclear how these restrictions may affect an individual's account balance and returns. Chile allows funds to charge fees on new accounts, on individual account contributions, and on phased withdrawals of funds during retirement. In addition to imposing this type of limit, Poland has a ceiling on the amount of some types of charges. Sweden has variable ceilings on charges, and the United Kingdom has a fixed ceiling on charges for its stakeholder pension. Sweden uses a formula to calculate the size of permitted fees to help ensure that fees are not too high. Additionally, it plans to spread certain fixed costs over the first 15 years of the program, helping avoid high fees in the early years. Limits on the level of fees can also affect fund managers.
In the United Kingdom, for example, regulations capping fees may have discouraged some providers from offering pension funds. Countries have also taken steps to lower the administrative costs that contribute to the fees participants are charged. For example, regulations on how often individuals are permitted to move assets from one investment fund to another can protect program participants by helping contain the costs that arise when people switch funds frequently. Many countries restrict the number of times an individual can switch. Mexico reportedly has lower administrative costs than some other Latin American countries, in part because it limits individuals to annual switching. Chile permitted people to switch fund managers three times a year but later restricted switching to two times a year to help lower costs. Poland provides an incentive for individuals to stay with a pension fund manager for at least 2 years by requiring fund managers to charge lower fees for these contributors. Sweden does not restrict the number of times individuals can change their investments. To help keep costs low, however, Sweden aggregates individuals' transactions to realize economies of scale.

Some countries' experiences highlighted weaknesses in regulations on how pension funds can market to individuals. Poland's and the United Kingdom's regulations did not prevent problems in marketing and sales. Poland experienced sales problems in part because it had inadequate training and standards for its sales agents, which may have contributed to agents' use of questionable practices to sign up individuals. The United Kingdom had a widely publicized "mis-selling" scandal in which over a million investors opened individual accounts when they would more likely have been better off retaining their occupation-based pensions. Insurance companies were ordered to pay roughly $20 billion in compensation.
In contrast, Sweden protects individuals from excessive marketing by not allowing pension funds access to information about individuals' investments. Instead, funds rely on mass advertising and provide reports and disclosures to investors through a clearinghouse. Countries' individual account experiences also reveal pitfalls to be avoided during implementation. For example, Hungary, Poland, and Sweden had difficulty getting their data management systems to run properly and continue to experience a substantial lag time in recording contributions to individuals' accounts. Sweden purchased a new computer system after the software it intended to use proved insufficient for managing individual accounts, resulting in an unexpected cost of $25 million. Even once a record-keeping system is in place, however, problems may persist. For example, Poland had some difficulty matching contributions with contributors because it allowed two different identification numbers to be used for reporting purposes. When the numbers did not match, workers' contributions were not credited to their accounts. Additionally, Poland experienced problems with its computer system that resulted in a backlog, and the government was required to make interest payments to funds for delays in contribution transfers. According to a report from the International Labour Organization (ILO), the government initially failed to make 95 percent of the transfers to private funds and as of 2002 was still unable to make 20 percent to 30 percent of required monthly transfers. In countries where workers have a choice of whether to participate in the individual account program, it is important that policymakers make timely decisions about other details concerning the administration and implementation of the program, so that workers can make informed choices. Hungary and Poland reportedly implemented their individual account systems without having made such decisions, including those concerning annuities.
Both countries required annuity payouts, but the markets did not have the appropriate types of annuity available. For example, inflation-adjusted and gender-neutral annuities were not available in Hungary. Experts suggest that while these decisions may not have seemed important initially, the lack of information could make it difficult for workers to decide whether to participate in the individual account program and to assess their potential retirement security. Information is important not only to help workers make initial decisions about participation in an individual account program; it should also be provided on an ongoing basis, and it becomes increasingly important as national pension systems grow more complex. Several countries require disclosure statements about the status of a pension fund. Including the fees charged on these disclosure statements could help individuals make more informed decisions when choosing a fund manager. Some countries have done a better job than others of providing fund performance information. For example, Australia requires its fund providers to inform members through annual reports clearly detailing benefits, fees and charges, investment strategy, and the fund's financial position. In contrast, Hungary reportedly did not have clear rules for disclosing operating costs and returns, making it hard to compare funds' performances. Other, more general information about individual account savings is also important. In the United Kingdom, individuals must decide whether to participate in the state earnings-related pension program, their employer-sponsored pension plan, or an individual account. To help individuals make this decision, the Financial Services Authority publishes decision trees on its Web site. Decision trees in the United Kingdom ask basic questions about pension arrangements to help individuals make their own choices.
Individuals may find these decision trees somewhat complicated, however, in part because the United Kingdom's system is itself complex. In Mexico, a government entity provides information to workers on the mandatory pension system, including information about the importance of reviewing commissions and returns when making a pension fund choice. While countries have made efforts to inform the public about the individual account program and the different options available, little research has been conducted on the effectiveness of these campaigns. There has been research, however, on the overall financial literacy of individuals across many OECD countries. The OECD recently conducted a study on financial literacy and found that most respondents to financial literacy surveys in member countries have a very low level of knowledge concerning finances, often seeming to think that they know more about financial issues than they really do. For example, about two-thirds of Australian respondents to a survey indicated that they understood the concept of compound interest, yet only 28 percent correctly answered a question using the concept. Countries have recognized the growing need for financial literacy, and several provide or are planning to provide general information about pensions and saving for retirement. Demographic challenges and fiscal pressure have necessitated national pension reform in many countries. Though the reform efforts we examined all had the common goal of improving financial sustainability, countries adopted different approaches depending on their existing national pension systems and the prevailing economic and political conditions. As a result, reforms in one country are not easily replicated in another, or, if they are, may not lead to the same outcome.
Countries have different emphases, such as benefit adequacy or equity; as a result, what is perceived to be successful in one place may not be viewed as a viable option somewhere else. Although some pension reforms were undertaken too recently to provide clear evidence of results, the experiences of other developed countries do suggest some lessons for U.S. deliberations on Social Security's future. Some of these lessons are common to all types of national pension reform and are consistent with findings in previous GAO studies. Restoring long-term financial balance invariably involves reducing projected benefits, raising projected revenues, or both. Additionally, with early reform, policymakers are more likely to avoid the need for more costly and difficult changes later. Countries that undertook important national pension reform well before undergoing major demographic changes have achieved, or are close to achieving, financially sustainable national pension systems. Others are likely to need more significant steps because their populations are already aging. No matter what type of reform is undertaken, the sustainability of a pension system will depend, in large part, on the long-term health of the national economy. As the number of working people for each retiree declines, average output per worker would have to increase in order to sustain average standards of living. Reforms that encourage employment and saving, offer incentives to postpone retirement, and promote growth are more likely to produce a pension system that delivers adequate retirement income and is financially sound for the long term. Regardless of a country's approach, its institutions need to operate and supervise the different aspects of reform effectively. A government's capacity to implement and administer the publicly managed elements of reform and its ability to regulate and oversee the privately managed components are crucial.
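The arithmetic behind the point about output per worker can be sketched in a few lines. This is only a toy model under stated assumptions: the population consists of just workers and retirees (children are ignored), and the productivity and headcount figures are invented.

```python
# Output per person = output per worker * (workers / population),
# where the population here is simply workers + retirees.
# All figures are hypothetical, for illustration only.
def output_per_person(output_per_worker, workers, retirees):
    return output_per_worker * workers / (workers + retirees)

today = output_per_person(100.0, workers=3, retirees=1)   # 3 workers per retiree
later = output_per_person(100.0, workers=2, retirees=1)   # 2 workers per retiree
# Productivity needed later just to restore today's output per person:
required = 100.0 * today / later

print(round(today, 1), round(later, 1), round(required, 1))  # 75.0 66.7 112.5
```

In this toy model, holding living standards constant while the worker-to-retiree ratio falls from 3 to 2 requires a 12.5 percent rise in output per worker, which is why reforms that promote employment and productivity growth matter so much.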
Good public understanding of pension issues is needed to provide reasonable assurance that people plan ahead to have adequate income in retirement and to help ensure that pension reforms have enough public support to be sustainable. In addition, public education becomes increasingly important as workers and retirees face more choices and the national pension system becomes more complex. This is particularly true of individual account reforms, which require high levels of financial literacy and personal responsibility. In nearly every country we studied, debate continues about alternatives for additional reform measures. Reform is clearly not a process that ends with a single measure; it often requires more than one type of change. This may be partly because success can be measured only over the long term, while problems can arise that must be dealt with in the short term. The positive lessons from other countries' reforms may become truly clear only in years to come. We provided a draft of this report to the Social Security Administration, the State Department, and the Department of the Treasury. SSA and Treasury provided technical comments on the draft; the State Department did not provide comments. We also provided copies of the draft to OECD staff and other external reviewers, who provided technical comments. In response to these technical comments, we modified the draft where appropriate. We are sending copies of this report to the Commissioner of Social Security, the Secretary of State, and the Secretary of the Treasury. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix III. We reviewed national pension reforms that occurred since 1970 in all 30 Organisation for Economic Co-operation and Development (OECD) countries and Chile. We included Chile in our study because it was the first country to undertake national pension reform that resulted in individual accounts. On the basis of our preliminary research, we identified three types of reform—adjustments to the existing pay-as-you-go (PAYG) program, national pension reserve funds, and individual account programs—that illustrate a variety of circumstances and experiences with national pension reform across these countries. While we reviewed each type of reform separately for this report, we indicate when countries have undergone more than one of these types of reform. We did not conduct an evaluation or audit of any country's national pension program or its reform efforts; rather, we relied on the work of officials in individual countries and international organizations with expertise in this area. We did, however, draw some lessons based on our review, in addition to reporting lessons that others have drawn. We attempted to report the most current status of each country's reform by using the most recently available data. Some countries may have made changes to their systems after the publication of the literature we reviewed, however. In many countries, reforms are implemented over an extended period of time, so their results are not yet apparent. We also contacted supreme audit institutions, or reviewed their Web sites, to see if they had done similar work. However, much of their work was audit-oriented and not relevant for our study. To obtain information on other countries' national pension reforms, we reviewed the types of reforms undertaken in OECD countries and Chile. We selected the OECD countries in part because they are the most comparable to the United States.
Additionally, the OECD has relatively comparable data for its member countries. We conducted background research and interviews to identify the types of reforms, if any, the selected countries had undertaken. We primarily used information from the following sources to identify countries' reforms and characteristics of national pension systems: the Social Security Programs Throughout the World publications, provided through a cooperative effort by the Social Security Administration and the International Social Security Association; publications from the International Social Security Association and updates from its Social Security Worldwide database; and publications from the OECD, the World Bank, the International Monetary Fund, and the European Union's Economic Policy Committee. After identifying the countries to be reviewed and the types of reform they had undertaken, we conducted a review of relevant literature on these countries' national pension reforms, including the following sources: OECD publications on national pension reform and other related issues; the World Bank's Pension Reform Primer; relevant government agency publications and Web sites from selected countries; reports from U.S. and international policy groups; and reports from U.S., international, and country-specific experts. We also interviewed officials and interest group representatives in Washington, D.C.; Paris; and London. We met with pension experts and country specialists at the OECD and the World Bank; with French and British experts, officials, and interest group representatives; and with international pension experts in the United States. We formulated the lessons learned in our report from those identified by experts and officials and based on our own analysis of countries' reforms. Generally, we aligned our lessons with GAO's criteria for evaluating national pension reform, identifying key lessons related to fiscal sustainability, adequacy and equity, and implementation and administration of reform.
We relied mainly on OECD data for information on country demographics, economics, and national pension programs. OECD collects much of its data from its member countries and validates its reports with these countries. For example, OECD recently published a description of each OECD member country's mandatory pension system, including the results of modeling that projects the net replacement rates expected from old-age pension benefits once all reforms enacted through 2002 have been fully implemented. OECD has also undertaken studies of the projected level of public spending on national old-age pension programs through 2050, based on national estimates and common OECD economic assumptions. In cases where national governments have completed more recent estimates, we cited those rather than the earlier OECD estimates. Also, we do not link specific national pension reforms to changes in the economy, or any specific reform measure to the sustainability of the program. This is because most countries have undergone more than one type of reform at different points in time, making causes and effects difficult to determine. To assess the reliability of the data on countries' national pension systems, we (1) interviewed officials at the OECD, including those in the Statistics Directorate and the Economics Department responsible for compiling these data based on information provided by government officials in OECD member countries, and (2) performed some basic reasonableness checks of the data against other sources of information. We determined that the data are sufficiently reliable for the purpose of making broad comparisons of the United States' and other countries' pension systems.
To ensure the reliability of its data, OECD also compares and investigates alternative sources of data, uses an internal peer and supervisory review process, and has draft reports reviewed and validated by member governments prior to publication. Nonetheless, OECD officials note several limitations in the data, including the fact that the data are largely self-reported by each country and are affected by differences in exchange rates and in methods for analyzing national account data and tracking price inflation, as well as by different methods used to predict longevity and economic growth. OECD works to develop comparable data by, for example, developing purchasing power parity factors, harmonized price indexes, and projections of old-age pension spending based on common economic assumptions. Because of these limitations, we were unable to determine the reliability and precision of estimates for each country. We conducted our review from August 2004 through September 2005 in accordance with generally accepted government auditing standards. Below are tabular data concerning OECD countries and Chile. Table 3 provides background information concerning each country's demographics, economy, and political structure. Table 4 provides basic information about each country's national pension system, including information about spending on mandatory old-age pension programs, contribution rates, the extent to which mandatory pensions replace workers' earnings, and the size of voluntary supplementary private and occupational pensions. Table 5 provides examples of various adjustments to PAYG pension programs. Table 6 provides information on national pension reserve funds for countries that have established such funds. Table 7 provides information on individual account programs that countries have adopted as part of their mandatory national pension systems.
Table 5 provides examples of adjustments to national PAYG pension programs undertaken by OECD countries and Chile. The table primarily includes examples of reforms that increased contributions to the programs or decreased benefits. It does not provide a comprehensive list of such reforms. In addition to the contact named above, Alicia Puente Cackley, Assistant Director; Benjamin P. Pfeiffer; Joseph Applebaum; Thomas A. Moscovitch; Nhi Nguyen; Nyree M. Ryder; Roger Thomas; Seyda G. Wentworth; Corinna A. Nicolaou; Lise Levie; and Pat Elston made key contributions to this report.

Add-on accounts: Individual accounts that supplement Social Security benefits and would draw contributions from new revenue streams.

Adequacy: (See Income adequacy.)

Annuity: An insurance product that provides a stream of payments for a pre-established amount of time in return for a premium payment—the amount being converted into an annuity. For example, a life annuity provides payments for as long as the annuitant lives. Only insurance companies can underwrite life annuities in the United States. Other financial intermediaries, such as banks and stock brokerage firms, may sell annuities issued by insurance companies.

Baby boom generation: Cohort of people born after World War II. This includes Americans born from 1946 through 1964; 76 million strong, they represent the longest sustained population growth in U.S. history. Other countries generally use the term "baby boomers" to describe this generation.

Carve-out accounts: Individual accounts that would result in some reduction of or offset to Social Security benefits because contributions to those accounts would draw on existing Social Security revenues.

Consumer Price Index (CPI): A measure of the change over time in the prices, inclusive of sales and excise taxes, paid by urban households for a representative market basket of consumer goods and services. The CPI is prepared by the U.S. Department of Labor and used to compute Social Security cost-of-living adjustment (COLA) increases.
Covered worker: A worker in covered employment, that is, a job through which the worker has made contributions to Social Security.

Deficit: The amount by which the government's spending exceeds its revenues in a given period, usually a fiscal year. The federal deficit is the shortfall created when the federal government spends more in a fiscal year than it receives in revenues. To cover the shortfall, the government sells bonds to the public.

Defined benefit plan: A type of retirement plan that guarantees a specified retirement payment and in which the plan's sponsor assumes the risk of providing these benefits. Defined benefit plans promise their participants a steady lifetime retirement income, generally based on years of service, age at retirement, and salary averaged over some number of years. Defined benefit plans express benefits as an annuity but may offer departing participants the opportunity to receive lump-sum distributions. Defined benefit plans are one of two basic types of employer-sponsored pension plans.

Defined contribution plan: A type of retirement plan that establishes individual accounts for employees to which the employer, participants, or both make periodic contributions. Defined contribution plan benefits are based on employer and participant contributions to and investment returns (gains and losses) on the individual accounts. Employees bear the investment risk and often control, at least in part, how their individual account assets are invested.

Dependency ratio: An estimate of the number of dependents per worker, generally defined as the ratio of the elderly (ages 65 and older) and/or the young (under age 15) to the population in the working ages (ages 15-64) or to the projected size of the labor force.

Dependent: A person who is eligible for benefits or care because of his or her relationship to an individual.
Under the Social Security Act, "dependent" means the same as it does for federal income tax purposes; i.e., someone for whom the individual is entitled to take a deduction on his personal income tax return, generally an individual supported by a tax filer for over half of a calendar year.

Early retirement age: The age at which individuals qualify for reduced retirement benefits if they choose to collect benefits before the normal retirement age; the current early retirement age for Social Security is 62. Individuals who choose to take retirement benefits early will have their monthly benefits permanently reduced, based on the number of months they receive checks before they reach full retirement age.

Full retirement age (also called normal or statutory retirement age): The age at which individuals qualify for full, or unreduced, retirement benefits from Social Security and employer-sponsored pension plans. The normal retirement age for Social Security was 65 for many years. For workers and spouses born in 1938 or later and widows/widowers born in 1940 or later, the normal retirement age increases gradually from age 65 until it reaches age 67 in the year 2022. Among OECD countries, based on full implementation of laws enacted as of 2002, the retirement age ranges from 60 (in France and Korea) to 67 (in Iceland, Norway, and the United States).

Fully funded: A pension system that is fully funded is one in which sufficient contributions have been put aside so that assets accumulated to date are equal to the value of benefits accrued to date. Defined contribution pensions and individual retirement accounts are fully funded by definition.

General fund transfers: Funds moved from the General Fund of the Treasury to other programs that are usually funded with earmarked revenue, sometimes to maintain the solvency of those programs. General funds, constituting about two-thirds of the budget, have no direct link between how they are raised and how they are spent. General fund receipts include income and excise taxes.
Gross domestic product (GDP): A commonly used measure of domestic national income. GDP measures the market value of output of final goods and services produced within a country's territory, regardless of the ownership of the factors of production involved, i.e., local or foreign, during a given time period, usually a year. Earnings from capital invested abroad (mostly interest and dividend receipts) are not counted, while earnings on capital owned by foreigners but located in the country in question are included. GDP may be expressed in terms of product—consumption, investment, government purchases of goods and services, and net exports—or it may be expressed in terms of income earned—wages, interest, and profits. It is a rough indicator of the economic earnings base from which government draws its revenues.

Income adequacy: Helping workers maintain living standards during retirement by replacing income from work at an adequate level and preventing destitution in old age. The U.S. Congress expected that Social Security benefits would eventually provide more than a "minimal subsistence" in retirement for full-time, full-career workers. Various measures help examine different aspects of this concept, but no single measure can provide a complete picture. Such measures include poverty rates, replacement rates, and the proportion of the population that depends on others for income support.

Indexation: (See Price indexation, Wage indexation.)

Individual accounts: These are fully funded accounts that are administered by either employers, the government, or designated third parties and are owned by the individual. The level of retirement benefits depends largely on the amount of contributions made by, or on behalf of, an individual into the account during his or her working life, investment earnings, and the amount of fees the individual is required to pay.

Individual equity: The relationship of benefits to contributions—for example, implicit rates of return on Social Security contributions or money's worth ratios.
National saving: Total saving by all sectors of the economy: personal saving, business saving (corporate after-tax profits not paid as dividends), and government saving (the budget surplus or deficit—indicating dissaving—of all government entities). National saving represents all income not consumed, publicly or privately, during a given period. Net national saving is gross national saving less consumption of fixed capital (depreciation).

Notional defined contribution plans: PAYG pension programs in which "notional" accounts track both incoming contributions and investment earnings, but these exist only on the books of the managing institution. At retirement, the accumulated notional capital in each account is converted to a stream of pension payments using a formula based on factors such as life expectancy at the time of retirement.

Old-Age, Survivors, and Disability Insurance (OASDI): The two U.S. Social Security programs—Old-Age and Survivors Insurance (OASI) and Disability Insurance (DI)—that provide monthly cash benefits to beneficiaries and their dependents when the beneficiaries retire, to beneficiaries' surviving dependents, and to disabled worker beneficiaries and their dependents.

Pay-as-you-go (PAYG): System of financing in which contributions that workers and/or employers make in a given year are used to fund the payments to beneficiaries in that same year, and the system's trust funds are kept to a relatively small contingency reserve.

Payroll tax: Tax imposed on some or all of workers' earnings that can be imposed on employers, employees, or both. In the United States, payroll taxes are used to finance the Social Security and Medicare programs. Employers and employees each pay Social Security taxes equal to 6.2 percent of all employee earnings up to a cap and pay Medicare taxes of 1.45 percent, with no cap. Payroll taxes are also known as FICA (Federal Insurance Contributions Act) taxes or SECA (Self-Employment Contributions Act) taxes if the taxpayer is self-employed.
All OECD countries except New Zealand levy payroll taxes to support their pension programs, though the rates and the shares borne by employers and employees vary, as do the minimum and maximum levels of earnings subject to the tax and the kinds of programs funded.

Price indexation: A method by which benefits are adjusted at periodic intervals by a factor derived from an index of prices; some Social Security reform proposals in the United States would price-index earnings to compute benefits, instead of using wage indexing. Over time, increases in wages have been greater and are expected to continue to be greater than increases in prices. Indexing earnings to prices instead of wages would, therefore, reduce the average lifetime earnings used in the formula, which, in turn, would reduce benefits.

Rate of return: Usually expressed annually, the rate of return is the gain or loss generated from an investment, expressed as a percentage of the value at the time of the initial investment.

Replacement rate: The ratio of retirement benefits (from Social Security or employer-sponsored plans) to preretirement earnings. Analysts often compare current benefits with a recipient's previous wages to judge the adequacy of Social Security payments.

Social insurance: Under a social insurance program, the society as a whole insures its members against various risks they all face, and members pay for that insurance at least in part through contributions to the system. Social insurance programs, including Social Security, are designed to achieve certain social goals.

Social Security Administration (SSA): The federal agency that administers all Social Security-related programs, including the Supplemental Security Income (SSI) and Disability Insurance (DI) programs.

Solvency: For Social Security, a condition of financial viability in which the program can meet its full financial obligations as they come due. Specifically, the ability to pay full benefits using existing revenue sources and trust fund balances. When a program does not meet these conditions, it is said to be insolvent.
Sustainable solvency: For Social Security, sustainable solvency means the ability to pay benefits, based on current law projections of revenue and outlays, beyond Social Security's Board of Trustees' 75-year forecast and make Social Security permanently solvent. Also defined as having a stable and growing trust fund ratio, with program revenues increasing faster than outlays at the end of the 75-year period. The European Union and OECD have examined the fiscal sustainability of national pension systems based in part on projections of the change in the percentage of countries' GDP to be spent on old-age pensions from 2000 to 2050 under current law.

Transition costs: The additional revenue required to implement substitute individual account plans. Under some individual account plans, portions of Social Security contributions would be diverted to the accounts. However, under Social Security's pay-as-you-go financing, some of those contributions would also be needed to pay for current benefits. Making account deposits while also meeting current benefit costs requires additional revenue, which we refer to as transition costs.

Wage indexation (compare Price indexation): A method by which benefits are adjusted at periodic intervals. Under its current formula, SSA uses the national average wage indexing series to index the lifetime earnings of a person under age 60 when computing that person's Social Security benefits. Earnings from age 62 to age 67 are adjusted using a price index.
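The difference between price indexation and wage indexation discussed above is easy to see numerically. A minimal sketch, in which the earnings figure, growth rates, and horizon are all invented; this is not SSA's actual benefit formula:

```python
# Indexing a 30-year-old earnings record by wages vs. by prices.
# Because wages are assumed to grow faster than prices, price
# indexing yields a smaller indexed amount, and thus a smaller benefit.
# All parameter values are hypothetical.
earnings = 20_000.00      # earnings from 30 years ago
wage_growth = 0.04        # assumed average annual wage growth
price_growth = 0.03       # assumed average annual price inflation
years = 30

wage_indexed = earnings * (1 + wage_growth) ** years
price_indexed = earnings * (1 + price_growth) ** years

print(f"wage-indexed:  {wage_indexed:,.0f}")
print(f"price-indexed: {price_indexed:,.0f}")
```

Under these assumed rates, the price-indexed amount comes out roughly a quarter smaller than the wage-indexed one, which is the mechanism by which switching to price indexing would reduce computed benefits.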
Many countries, including the United States, are grappling with demographic change and its effect on their national pension systems. With rising longevity and declining birthrates, the number of workers for each retiree is falling in most developed countries, straining the finances of national pension programs, particularly where contributions from current workers fund payments to current beneficiaries—known as a pay-as-you-go (PAYG) system. Although demographic and economic challenges are less severe in the United States than in many other developed countries, projections show that the Social Security program faces a long-term financing problem. Because some countries have already undertaken national pension reform efforts to address demographic changes similar to those occurring in the United States, we may draw lessons from their experiences. The current and preceding Chairmen of the Subcommittee on Social Security of the House Committee on Ways and Means asked GAO to study lessons to be learned from other countries' experiences reforming national pension systems. GAO focused on (1) adjustments to existing PAYG national pension programs, (2) the creation or reform of national pension reserve funds to partially prefund PAYG pension programs, and (3) reforms involving the creation of individual accounts. We received technical comments from SSA, Treasury, the OECD, and other external reviewers. All countries in the Organisation for Economic Co-operation and Development (OECD), as well as Chile, have, to some extent, altered their national pension systems, consistent with their different economic and political conditions. While changes in one country may not be easily replicated in another, countries' experiences may nonetheless offer potentially valuable lessons for the United States.
Countries' experiences adjusting PAYG national pension programs highlight the importance of considering how modifications will affect the program's financial sustainability, its distribution of benefits, and the incentives it creates. Also, how well new provisions are implemented, administered, and explained to the public may affect the outcome of the reform. Most of the countries GAO studied both increased contributions and reduced benefits, often by increasing retirement ages. Generally, countries included provisions to help ensure adequate benefits for lower-income groups, though these can lessen incentives to work and save for retirement. Countries with national pension reserve funds designed to partially prefund PAYG pension programs provide lessons about the importance of early action and sound governance. Some funds that have been in place for a long time provide significant reserves to strengthen the finances of national pension programs. Countries that insulate national reserve funds from being directed to meet nonretirement objectives are better equipped to fulfill future pension commitments. In addition, regular disclosure of fund performance supports sound management and administration and contributes to public education and oversight. Countries that have adopted individual account programs—which may also help prefund future retirement income—offer lessons about financing the existing PAYG pension program as the accounts are established. Countries that have funded individual accounts by directing revenue away from the PAYG program while continuing to pay benefits to PAYG program retirees have expanded public debt, built up budget surpluses in advance, cut back or eliminated the PAYG programs, or taken some combination of these approaches. Because no individual account program can entirely protect against investment risk, some countries have adopted individual accounts as a relatively small portion of their national pension system.
Others set minimum rates of return or provide a minimum benefit, which may, however, limit investment diversification and individuals' returns. To mitigate high fees, which can erode small account balances, countries have for example capped fees or centralized the processing of transactions. Although countries have attempted to educate individuals about reforms and how their choices may affect them, studies in some countries indicate that many workers have limited knowledge about their retirement prospects.
The Disability Insurance and Supplemental Security Income programs are the nation’s largest providers of federal income assistance to disabled individuals, with SSA making payments of approximately $86 billion to about 10 million beneficiaries in 2002. The process through which SSA approves or denies disability benefits is complex and involves multiple partners at both the state and federal levels in determining a claimant’s eligibility. Within SSA, these include its 1,300 field offices, which serve as the initial point of contact for individuals applying for benefits, and the Office of Hearings and Appeals, which, at the request of claimants, reconsiders SSA’s decisions when benefits are denied. SSA also depends on 54 state Disability Determination Services (DDS) offices to help process claims under its disability insurance programs. State DDSs provide crucial support to the initial disability claims process—one that accounts for most of SSA’s workload—through their role in determining an individual’s medical eligibility for disability benefits. DDSs make decisions regarding disability claims in accordance with federal regulations and policies; the federal government reimburses 100 percent of all DDS costs in making disability determination decisions. Physicians and other members of the medical community support the DDSs by providing the medical evidence to evaluate disability claims. The process begins when individuals apply for disability benefits at an SSA field office, where determinations are made about whether they meet nonmedical criteria for eligibility. The field office then forwards the applications to the appropriate state DDS, where a disability examiner collects the necessary medical evidence to make the initial determination of whether the applicant meets the definition of disability. Once the applicant’s medical eligibility is determined, the DDS forwards this decision to SSA for final processing. 
Claimants who are initially denied benefits can ask to have the DDS reconsider its denial. If the decision remains unfavorable, the claimant can request a hearing before a federal administrative law judge at an SSA hearings office, and, if still dissatisfied, can request a review by SSA’s Appeals Council. Upon exhausting these administrative remedies, the individual may file a complaint in federal district court. Each level of appeal, if undertaken, involves multi-step procedures for the collection of evidence, information review, and decision making. Many individuals who appeal SSA’s initial decision will wait a year or longer—perhaps up to 3 years—for a final decision. To address concerns regarding the program’s efficiency, in 1992 SSA initiated a plan to redesign the disability claims process, emphasizing the use of automation to achieve an electronic (paperless) processing capability. The automation project started in 1992 as the Modernized Disability System, and was redesignated the Reengineered Disability System (RDS) in 1994. RDS was to automate the entire disability claims process—from the initial claims intake in the field office to the gathering and evaluation of medical evidence at the state DDSs, to payment execution in the field office or processing center, and including the handling of appeals at the hearings offices. However, our prior work noted that SSA had encountered problems with RDS during its initial pilot testing. For example, systems officials had stated that, using RDS, the reported productivity of claims representatives in the SSA field offices dropped. They noted that before the installation of RDS, each field office claims representative processed approximately five case interviews per day. After RDS was installed, each claims representative could process only about three cases per day. 
As a result, following an evaluation by a contractor, SSA suspended RDS in 1999 after approximately 7 years and more than $71 million reportedly spent on the initiative. In August 2000 SSA issued a management plan with a renewed call for developing an electronic disability system by the end of 2005. The strategy was to incorporate three components: (1) an electronic disability intake process that would include a subset of the existing RDS software, (2) the existing DDS claims process, and (3) a new system for the Office of Hearings and Appeals. The management plan also provided for several pilot projects to test the viability and performance of each project component. SSA’s work on this effort occurred through the spring of 2002, at which time the Commissioner announced that she had begun an accelerated initiative to more quickly automate the disability claims process. The agency anticipated that, with technologically advanced disability processing offices, it could potentially realize benefits of more than $1 billion, at an estimated cost of approximately $900 million, over the 10-year life of the accelerated initiative. In undertaking AeDib, the accelerated electronic disability initiative, SSA has embarked on a major effort consisting of multiple projects that are intended to move all partners in its disability claims adjudication and review to an electronic business process. SSA envisions that AeDib will allow its disability components to stop relying on paper folders to process claims and to develop new business processes using legacy systems and information contained in an electronic folder to move and process all of its work. In so doing, SSA anticipates that AeDib will enable disability components to achieve processing efficiencies, improve data completeness, reduce keying errors, and save time and money. 
The AeDib strategy focuses on developing the capability for claimant information and large volumes of medical images, files, and other documents that are currently maintained in paper folders to be stored in electronic folders, and then accessed, viewed, and shared by the disability processing offices. SSA is undertaking five key projects to support the strategy: (1) an Electronic Disability Collect System to provide the capability for SSA field offices to electronically capture information about the claimant’s disability and collect this structured data in an electronic folder for use by the disability processing offices; (2) a Document Management Architecture that will provide a data repository and scanning and imaging capabilities to allow claimant information and medical evidence to be captured, stored, indexed, and shared electronically between the disability processing offices; (3) Internet applications that will provide the capability to obtain disability claims and medical information from the public via the Internet; (4) a DDS systems migration and electronic folder interface that will migrate and enhance the existing case processing systems to allow the state disability determination services offices to operate on a common platform and prepare their legacy systems to share information in the electronic folder; and (5) a Case Processing and Management System for the Office of Hearings and Appeals that will interface with the electronic folder and enable its staff to track, manage, and complete case-related tasks electronically. According to SSA, the Electronic Disability Collect System and the Document Management Architecture are the two fundamental elements needed to achieve the electronic disability folder. By late January 2004, SSA plans to have developed these two components. 
It also expects to have completed five Internet disability applications, enhanced the DDS legacy systems, and developed the software that will allow existing SSA and DDS systems to interface with the electronic folder. However, SSA will not yet have implemented the scanning and imaging capabilities and the interface software to enable each disability processing office to access and use the data contained in the electronic folder. SSA officials explained that, at the end of next January, the agency plans to begin an 18-month rollout period, in which it will implement the scanning and imaging capabilities and establish the necessary interfaces. SSA has drafted but not yet finalized the implementation strategy for the rollout. SSA has performed several important project tasks since beginning the accelerated initiative in 2002. For example, it has implemented limited claims-intake functionality as part of the Electronic Disability Collect System, and begun additional upgrades of this software. In addition, it has developed two Internet applications for on-line forms to aid claimants in filing for disability benefits and services. Further, to support electronic disability processing, SSA is in the process of migrating and upgrading hardware and case processing software to allow all of the 54 state DDSs to operate on a common platform, and has begun developing software to enable the DDS systems to interface with the electronic folder. SSA has also performed some initial tasks for the Document Management Architecture, including developing a system prototype, establishing requirements for the scanning capability, and drafting a management plan and training strategy. Nonetheless, the agency still has a significant amount of work to accomplish to achieve the electronic disability folder by the end of next January. 
While substantial work remains for each of the AeDib components, primary among SSA’s outstanding tasks is completing the Document Management Architecture’s development, testing, and installation at the agency’s National Computer Center. Table 1 illustrates SSA’s progress through last June in accomplishing tasks included in the AeDib initiative, along with the many critical actions still required to develop and implement the electronic disability processing capability. As the table reflects, SSA’s electronic disability claims process hinges on accomplishing numerous critical tasks by the end of January 2004. In discussing the overall progress of the initiative, SSA officials in the Offices of Systems and Disability Programs acknowledge that the agency will be severely challenged to accomplish all of the tasks planned for completion by the end of January. Nonetheless, they believe that SSA will meet the targeted project completion dates, stating that the agency has conducted the necessary analyses to ensure that the accelerated schedule can accommodate the project’s scope. Beyond meeting an ambitious project implementation schedule, SSA must ensure that the system it delivers successfully meets key business and technical requirements for reliably exchanging data among disability processing components and is protected from errors and vulnerabilities that can disrupt service. Accomplishing this necessitates that SSA conduct complete and thorough testing to provide reasonable assurance that systems perform as intended. Such testing includes tests and evaluations of pilot projects to obtain data on a system’s functional performance and end-to-end tests to ensure that the interrelated systems will operate together effectively. In addition, the success of the system will depend on the agency identifying and mitigating critical project risks. 
SSA plans to rely on pilot tests and evaluations to help guide business and technical decisions about the electronic disability folder, including critical decisions regarding the document management technology. For example, SSA stated that the Document Management Architecture pilots will be used to test electronic folder interface requirements and DDS site configurations for AeDib national implementation. In addition, the pilots are expected to test the business process and work flow associated with incorporating the Document Management Architecture. SSA has stated that this information is crucial for determining whether the technology selected for the Document Management Architecture will adequately support the electronic folder. However, SSA may not be able to make timely and fully informed decisions about the system based on the pilot test results. The pilot tests were to begin this month, and some of the test results upon which decisions are to be based are not expected to be available until the end of December at the earliest, leaving little time to incorporate the results into the system that is to be implemented by late January. Further, even when completed, the pilot tests will provide only limited information about the electronic folder’s functionality. SSA stated that they will not test certain essential aspects of the folder usage, such as the DDS’s disability determination function. Thus, whether SSA will have timely and complete information needed to make decisions that are essential to developing and implementing the electronic disability folder is questionable. In addition, given the technological complexity of the AeDib project, the need for end-to-end testing is substantial. Our prior work has noted the need for such testing to ensure that interrelated systems that collectively support a core business area or function will work as intended in a true operational environment. 
End-to-end testing evaluates both the functionality and performance of all systems components, enhancing an organization’s ability to trust the system’s reliability. SSA’s development and use of new electronic tools to integrate an electronic folder with its own and DDS legacy systems, along with Web-based applications and the new Document Management Architecture, elevates the importance of ensuring that all parts will work together as intended. However, the agency currently has not completed a test and evaluation strategy to conduct end-to-end testing to demonstrate, before deployment, that these systems will operate together successfully. SSA officials stated that conducting end-to-end testing would require delaying system implementation to allow the time needed for a claim to be tested as it moved through all of the disability components—a process that could take up to 6 months to complete. However, determining that all AeDib components can correctly process disability claims when integrated is vital to SSA’s knowing whether the electronic disability system can perform as intended. Compounding AeDib’s vulnerability is that SSA has not yet undertaken a comprehensive assessment of project risks to identify facts and circumstances that increase the probability of failing to meet project commitments and to take steps to prevent this from occurring. Best practices and federal guidance advocate risk management. To be effective, risk management activities should be (1) based on documented policies and procedures and (2) executed according to a written plan that provides for identifying and prioritizing risks, developing and implementing appropriate risk mitigation strategies, and tracking and reporting on progress in implementing the strategies. By doing so, an agency can avoid potential problems before they manifest themselves as cost, schedule, and performance shortfalls. 
SSA has developed a risk management plan to guide the identification and mitigation of risks, and based on that plan, has developed a high-level risk assessment of program and project risks. The high-level assessment, which SSA issued last February, identified 35 risks that the agency described as general in nature and addressing only overall program management issues related to the project’s costs, schedule, and hardware and software. For example, one of the high-level risks stated that the overall availability of the Document Management Architecture might not meet service-level commitments. The related mitigation strategy stated that the agency should continue to investigate various approaches to ensure the system’s availability. SSA has acknowledged the potential for greater risks given the electronic case processing and technological capability required for AeDib. Further, in response to our inquiries, its officials stated that the agency would conduct and document a comprehensive assessment of project risks by June 30 of this year. The officials added that AeDib project managers would be given ultimate responsibility for ensuring that appropriate risk-mitigation strategies existed and that SSA had tasked a contractor to work with the managers to identify specific risks associated with each system component. However, at this time, SSA is still without a comprehensive assessment of risks that could affect the project. Until it has a sound analysis and mitigation strategy for AeDib, SSA will not be in a position to cost-effectively plan for and prevent circumstances that could impede a successful project outcome. Integral to AeDib’s success are disability process stakeholders that SSA relies on to fulfill the program’s mission, including state disability determination officials and medical providers. 
As primary partners in the disability determination process, stakeholders can offer valuable and much-needed insight regarding existing work processes and information technology needs, and their stake and participation in the systems development initiative are essential for ensuring its acceptance and use. In assessing lessons learned from SSA’s earlier attempt to implement the failed Reengineered Disability System, Booz Allen Hamilton recommended that SSA at all times keep key stakeholders involved in its process to develop an electronic disability processing capability. SSA disability program and systems officials told us that the agency has involved its various stakeholders in developing AeDib. They stated that the agency has entered into memorandums of understanding for data sharing with state DDSs, established work groups comprising DDS representatives to obtain advice on development activities, and included these stakeholders in steering committee meetings to keep them informed of the project’s status. In addition, SSA stated, it has met with representatives of major medical professional associations to seek their support for SSA’s requests for releases of medical evidence. However, officials that we contacted in nine of the ten DDS offices stated that their concerns were not adequately heard and considered in the decision-making process for the development of AeDib, despite the critical and extensive role that states play in making disability determinations. Because of this limited involvement, the National Council of Disability Determination Directors, which represents the DDSs, stated that it was concerned that SSA may be pursuing an automated disability strategy that could negatively affect business operations by creating delays in the ability to make decisions on disability cases. 
The DDS representatives stated that SSA has not articulated a clear and cohesive vision of how the disability components will work to achieve the AeDib goal and that decisions about AeDib were being made without considering their perspectives. They explained, for example, that SSA’s decision to use a scanning and imaging vendor to whom medical providers would have to submit evidence would introduce an additional step into the disability process, and might result in DDSs’ not being able to effectively manage the critical information that they need to make disability determinations. Further, they have questions about how evidence will be stored electronically in the disability process, noting that SSA has proposed, but not yet decided among, three possible scenarios for establishing repositories to house medical evidence. Last March, the National Council of Disability Determination Directors made three suggestions to SSA aimed at allowing the DDSs to have greater responsibility for this aspect of the disability business process. Among their proposals was that DDSs (1) be allowed to manage the contractors who will be responsible for scanning and imaging all records received from medical providers; (2) have the choice of receiving electronic medical evidence at a repository maintained at their sites rather than at remote, centralized locations; and (3) be allowed to test the possibility of scanning records after, rather than before, the DDS adjudicates a claim. According to the council, this latter approach would ensure that the DDSs could make timely and accurate disability determinations, while also allowing SSA the time to perfect the electronic business process and transition to the initial case process. As of last week, however, SSA had not responded to these suggestions; the agency stated that it is reviewing, but has not yet taken a position on, the council’s proposals. 
SSA’s consultation with the medical community (physicians and other sources of medical evidence used to evaluate disability claims) also has been limited. These stakeholders are critical, as they represent the basic source of most of the information that states use to evaluate an individual’s disability. One of the key savings that SSA anticipates from AeDib is based on physicians and other medical sources electronically transmitting or faxing medical evidence that is now mailed to the DDSs. SSA has estimated that as much as 30 percent of all medical evidence could be faxed or electronically received from these providers, with the majority of it being faxed. When we spoke with American Health Information Management Association officials in Georgia and Wisconsin, however, they expressed concern about the possibility that SSA will want medical providers to fax evidence. They cited the voluminous nature of much of the medical evidence that they send to the DDSs, and believe that faxing it would be too costly and not secure. Our review to date has not assessed the validity of the concerns expressed by the stakeholders, or SSA’s responses to them. Nonetheless, as long as such concerns exist, SSA must be diligent in pursuing a mutually agreed-upon understanding with its stakeholders about the vision and plan of action being pursued. SSA’s success in implementing AeDib depends heavily on resolving all outstanding issues and concerns that could affect the use and, ultimately, the outcome of the intended electronic capability. Without stakeholders’ full and effective involvement in AeDib’s planning and development, SSA cannot be assured that the system will satisfy critical disability process requirements and be used as intended to achieve desired processing efficiencies and improved delivery of services to beneficiaries. To summarize, Mr. Chairman, in moving toward an electronic disability process, SSA has undertaken a positive and very necessary endeavor. 
Having the means to more effectively and efficiently provide disability benefits and services is essential to meeting the needs of a rapidly aging and disabled population, and we applaud the Commissioner’s determination and proactive pursuit of this service-delivery enhancement. Nonetheless, SSA’s accelerated strategy may involve risks of delivering a system that will not sufficiently address its needs. The execution of critical pilot tests that are not scheduled for completion until December or later, coupled with the lack of planned end-to-end testing and a comprehensive assessment of risks, may prevent SSA from delivering an information technology capability based on sound and informed decision making. Moreover, uncertainties about the successful outcome of this project are exacerbated by concerns that key stakeholders in the disability process continue to have. Given the importance of this project to SSA’s future service-delivery capability, it is essential that the agency satisfy itself that AeDib will perform as intended with minimal risk before it is deployed nationwide. We will continue to monitor SSA’s progress on this initiative as part of our ongoing review. This concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time. For information regarding this testimony, please contact Linda D. Koontz, Director, or Valerie Melvin, Assistant Director, Information Management Issues at (202) 512-6240. Other individuals making key contributions to this testimony include Michael Alexander, Tonia D. Brown, Derrick Dicoi, and Mary J. Dorsey. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Providing benefits to disabled individuals is one of the Social Security Administration's (SSA) most important service delivery obligations--touching the lives of about 10 million individuals. In recent years, however, providing this benefit in a timely and efficient manner has become an increasing challenge for the agency. This past January, in fact, GAO designated SSA's disability programs as high risk. Following a prior unsuccessful attempt, the agency is now in the midst of a major initiative to automate its disability claims functions, taking advantage of technology to improve this service. Seeking immediate program improvements, SSA is using an accelerated approach--called AeDib--to develop an electronic disability claims processing system. At the request of the Subcommittee on Social Security, House Committee on Ways and Means, GAO is currently assessing the strategy that underlies SSA's latest initiative to develop the electronic disability system. For this testimony, GAO was asked to discuss its key observations to date regarding the AeDib initiative, including strategy, risks, and stakeholder involvement. GAO plans to discuss more fully the results of this continuing review in a subsequent report. SSA's goal to establish a more efficient, paperless disability claims processing system is important, and one that could benefit millions. To achieve this goal, SSA's immediate focus is on developing an electronic folder to store claimant information and large volumes of medical images, files, and other documents that are currently maintained in paper folders, and then make this information accessible to all entities involved in disability determinations. SSA's accelerated strategy calls for development of this capability by January 2004 rather than in 2005, as originally planned. Since accelerating this effort, SSA has performed important tasks toward establishing this initial electronic capability. 
Nonetheless, it has substantial work to accomplish in order to develop the technologically complex electronic folder and begin implementation by late next January. While responsive to the agency's need for an operational system as soon as possible, SSA's accelerated strategy involves risks. For example, pilot tests that are to provide important information about the electronic folder's performance are not expected until late December--just 1 month before its planned implementation. In addition, a strategy for end-to-end testing to demonstrate that the individual components will work together reliably has not been completed. Further increasing the system's vulnerability is that SSA has not yet comprehensively assessed project risks. Unless addressed, these factors could ultimately derail the initiative. While SSA has taken steps to involve key stakeholders in the systems development process, officials in state Disability Determination Services offices that we contacted expressed concerns that they had only limited involvement in the development effort. They stated that their concerns were not adequately heard and considered in the decision-making process. Unless SSA addresses these issues, it cannot be assured of stakeholder agreement with and full use of the system.
At the height of the Cold War, the United States envisioned a force of over 400 heavy bombers to deter the Soviet nuclear threat and to be prepared to launch long-range nuclear strikes. The end of the Cold War, marked by the breakup of the Soviet Union and negotiation of strategic arms limitations treaties, drastically reduced requirements for long-range bombers and resulted in a shift of the bombers’ primary role from nuclear to conventional missions. Since the early 1990s, the Department of Defense (DOD) and the Air Force have reduced the size of the bomber force, begun to implement a new concept of operations to use bombers in conventional conflicts, and embarked on a program to upgrade the bombers’ conventional capabilities. The U.S. heavy bomber force consists of B-2s, B-1Bs, and B-52Hs. DOD plans to retain all three types of bombers well into the 21st century. Each type has a unique history that has been shaped in part by significant congressional interest in bomber issues. We have issued numerous reports on bomber issues in response to congressional concerns; these reports are listed at the end of this report. In 1978, DOD began to design the B-2 as a stealthy bomber to penetrate enemy defenses for both nuclear and conventional missions. The B-2 is a two-crew aircraft that incorporates stealth (low-observable) technologies to enhance survivability. In 1981, the Air Force planned to buy 132 B-2 aircraft, but the 1994 Defense Authorization Act limited the procurement to 20 aircraft with a cost ceiling of $28.968 billion in fiscal year 1981 constant dollars. The 1996 Defense Authorization Act removed this cost ceiling, and the Congress made available an additional $493 million that will be used to convert the first B-2 test aircraft into an operational B-2. Today, 21 aircraft are planned at a cost of about $45 billion in then-year dollars. The first B-2 was delivered in 1989, and the last block 30 aircraft is scheduled to be completed in 2000. 
The contractor will deliver the B-2s in three configurations (referred to as blocks 10, 20, and 30), and each successive block possesses improved capabilities. By 2000, the Air Force plans to have 21 B-2s in the block 30 configuration in its inventory. In 1970, the Air Force began to develop the B-1 bomber for strategic nuclear missions as a high-speed aircraft designed to penetrate Soviet airspace and evade Soviet radar by flying low to the ground. The B-1 program experienced difficulties from its inception, and in 1977, the program was canceled. But in 1981, DOD revived the B-1 program, approving production of the B-1B to be part of a two-bomber program to replace the aging B-52 fleet. The B-1B was intended to serve as a penetrating bomber until the B-2 bomber was deployed in the 1990s, at which time the B-1B was expected to assume a standoff role. The first squadron of B-1Bs became operational in October 1986. The contractor delivered the 100th and final B-1B in May 1988. As a result of crashes, only 95 B-1Bs remain. Throughout its existence, the B-1B has had technical problems, particularly with its defensive avionics system. B-52 bombers, which were first introduced in 1954, were produced in eight configurations (A through H), with the last H aircraft delivered in October 1962. While 744 B-52s were built, only 94 remain. During the decades of the Cold War, B-52s were dedicated primarily to deterring nuclear war. However, B-52Gs were the first missile-capable B-52 bombers and were used in conventional roles in Vietnam and the Persian Gulf. During Operation Desert Storm, B-52Gs dropped approximately one-third of the total tonnage of bombs delivered by U.S. air forces, striking wide-area troop concentrations, fixed installations, and bunkers, and are credited with destroying the morale of Iraq’s Republican Guard. Following Desert Storm, the Air Force retired the B-52Gs and provided B-52Hs with enhanced conventional capabilities. 
While the 744 B-52s originally cost a little over $4.5 billion (an average unit cost of $6.1 million), over $41 billion has been spent over more than 40 years for their development, procurement, modernization, and service life extension. On the basis of engineering studies, the Air Force estimates that the B-52H will be structurally sound until about 2030. Since 1992, DOD and the Air Force have completed four major studies that have addressed bomber requirements—the Nuclear Posture Review, the Bottom-Up Review (BUR), the Air Force Bomber Roadmap, and the congressionally mandated 1995 Heavy Bomber Force Study. On the basis of these studies, DOD plans to make changes (shown in table 1.1) to the bomber force structure by 2001. Of the planned operational aircraft, 130 bombers will be available for conventional and nuclear missions and 24 will be used for training. The remaining 33 aircraft are test and backup aircraft. The Air Force has chosen to fully fund the operation of only 60 B-1Bs for the next few years, compared with plans to fund 82 beyond fiscal year 2000. In the short term, the Air Force has classified 27 of 95 B-1Bs as “reconstitution aircraft.” These aircraft are not funded for flying hours and lack aircrews, but they are based with B-1B units, flown on a regular basis, maintained like other B-1Bs, and modified with the rest of the fleet. B-1B units will use flying hours and aircrews that are based on 60 operational aircraft to rotate both the operational aircraft and the reconstitution aircraft through their peacetime flying schedule. However, because the Air Force has chosen not to fund aircrews for its reconstitution reserve aircraft, placing aircraft in reconstitution reserve reduces the number of aircraft the Air Force can support during wartime. 
In fiscal year 1997, the Air Force plans to begin reducing the number of reconstitution reserve aircraft by establishing two additional squadrons of B-1B aircraft and funding additional aircrews and flying hours. Since the Cold War ended, DOD has transferred some long-range bombers to the Air Force reserve components for the first time. In 1994, the Air Force Reserves and Air National Guard established 1 B-52H squadron with 8 aircraft and 1 B-1B squadron with 10 aircraft. The Air National Guard will establish one additional B-1B squadron of eight aircraft in the near future. All bombers will be based in the continental United States. The Air Force plans to expand the number of B-1B bases from three to five beginning in fiscal year 1996. Specifically, the Air Force plans to move six B-1Bs to Mountain Home Air Force Base in Idaho and establish a new Air National Guard squadron of B-1Bs at Robins Air Force Base in Georgia. Another Air National Guard squadron of B-1Bs is located at McConnell Air Force Base in Kansas. Figure 1.1 shows the locations of the future bomber force. In 1991, the President of the United States took the bombers off nuclear alert status. Subsequently, in January 1993, the Presidents of the United States and the Russian Federation signed the Strategic Arms Reduction Treaty (START) II, building on agreements reached in START I, which was signed in July 1991. The treaty sets equal ceilings on the number of nuclear weapons that can be deployed by either party. If ratified by both countries, the START II treaty would reduce deployable nuclear warheads to no more than 3,500 by the year 2003. In assessing bomber requirements in light of the new limits, DOD plans to remove the B-1B from the nuclear role. The B-2s and B-52Hs will retain the nuclear mission. B-52Hs assigned to the Air Force Reserve remain available for nuclear missions, but they will be flown by active duty pilots if assigned nuclear missions. 
According to the Air Force Bomber Roadmap, bombers will provide the majority of the firepower during the initial and sustained operations phases of major regional conflicts. From bases in the United States, the Air Force expects the bombers to fly long duration, round-trip missions of up to 36 hours to make initial attacks within 24 hours of being tasked. Within a few days of the start of a conventional conflict, bombers will be expected to deploy to forward locations for sustained operations, flying shorter and more frequent missions. The goal of the bomber missions will be to halt invading enemy armored forces and disrupt the enemy’s ability to wage war by attacking time-critical targets quickly, using a combination of direct attack and standoff munitions. Some bombers deployed to a major regional conflict will be expected to swing to a second regional conflict if needed. Each bomber will play a different role in a major regional conflict. The Air Force envisions the B-2 as the leading edge of the initial response to conflict because of its projected stealthiness and weapons delivery precision. The B-2 will be expected to fly into heavily defended areas to attack highly valued targets as well as enemy ground troops. The Air Force will assign both standoff and penetrating missions to the B-1B in medium-to-high threat environments and will expect the B-1B to destroy the bulk of the defended, time-critical targets early in the conflict using direct attack and standoff munitions. The B-52H will be primarily a standoff bomber in the early phases of conflict, using precision-guided munitions such as conventional air-launched cruise missiles, and will provide massive firepower by directly attacking targets in low- to medium-threat environments using munitions such as the Joint Direct Attack Munition. Figure 1.2 shows the Air Force’s planned employment of the bombers. 
In addition to defining the new concept of operations for bombers, the Air Force’s 1992 Bomber Roadmap established an investment strategy to enhance the conventional capabilities of the bombers. The study recognized that all three bombers currently have limited conventional capabilities, the B-1B defensive avionics system needs to be upgraded, and the B-2 and B-1B bombers lack sufficient mobility readiness spares packages to meet wartime requirements. The 1992 Bomber Roadmap estimated B-1B and B-52H upgrades would cost about $3 billion. The costs to integrate conventional munitions on the B-2 are included in the B-2 program cost. In 1993, we concluded that B-1B upgrade costs were underestimated by billions of dollars because they did not include costs to fix B-1B operational problems, acquire an effective B-1B defensive avionics system, and acquire adequate mobility readiness spares packages. B-2 modifications involve integrating conventional munitions on the aircraft and developing a deployable mission planning system to accommodate rapid changes in scenarios and mission routes. The block 10 B-2, currently in the Air Force’s inventory, can carry only gravity bombs, but after all modifications are complete, it will be able to carry additional gravity weapons and some advanced munitions. The B-1B currently can drop only gravity weapons and, because of problems with its defensive avionics system, would be limited to low-threat environments. The Roadmap’s B-1B Conventional Munitions Upgrade Program addresses these shortfalls in a phased approach. By 1997, the aircraft will be certified to use a family of cluster munitions, but its capability to employ advanced direct attack and standoff precision munitions will not be available until after 2000. Also, the defensive avionics system upgrade will not be completed until well into the next decade. 
The B-52H requires the least amount of funding to upgrade its conventional capabilities and is, and will continue to be, the most versatile bomber in the fleet. It is the only standoff bomber in the inventory today and, in the future, will still carry more types of weapons than either the B-1B or the B-2. Appendix I includes a description of the munitions planned for the bombers. Table 1.2 shows the current and future munitions carrying capabilities of the three bombers. The Chairman of the House Budget Committee requested that we evaluate the basis for DOD’s bomber force structure requirements, assess the Air Force’s progress in implementing its new conventional concept of operations for using bombers, and determine the cost to keep the bombers in the force and enhance their conventional capabilities. As part of this review, we also identified and assessed the potential cost savings and effects on military capability of four alternatives for reducing bomber costs, including retiring or reducing the B-1B force, as well as the need for procuring additional B-2s if the B-1B force is reduced or retired. To assess the basis for the number of bombers in DOD’s planned force structure, we reviewed documents supporting the four major DOD bomber requirements studies. We discussed major study assumptions with Joint Chiefs of Staff, Office of the Secretary of Defense, Air Force, and Institute for Defense Analyses (IDA) officials to understand their significance to the study conclusions. We compared the assumptions with current defense guidance, the new bomber concept of operations, and information obtained from war-fighting commanders in chief (CINC) concerning their plans for bomber operations. Also, we assessed bomber contributions to two major regional conflicts by analyzing (1) DOD’s database used in the Capabilities Based Munitions Requirements development process and (2) the results of Air Force modeling of recent DOD wargaming of the two major regional conflict scenario. 
In evaluating the number of bombers required for the nuclear mission, we discussed the nuclear force structure options and major study assumptions included in the Nuclear Posture Review with Office of the Secretary of Defense, U.S. Strategic Command officials, and Air Force officials. To assess Air Force progress in implementing the concept of operations for bombers, we evaluated Air Force documents on a range of factors that are critical to effective implementation of the concept, such as the sufficiency of mobility readiness spares packages and bomber staffing levels, the operational readiness of the bombers, and technical challenges to modify the bombers for the conventional mission. We also reviewed our prior reports and those of DOD and others addressing these factors, and we discussed them with CINC staff, Air Force headquarters, Air Combat Command, and bomber unit officials to understand their significance. To determine the cost to keep the bombers in the force and modify them, we obtained and analyzed investment and operational and support costs related to the bomber force from DOD’s Fiscal Year 1997 Future Years Defense Program (FYDP). We obtained and analyzed Air Force documents on the cost to modernize the bombers beyond the FYDP. We compared these costs with those reported in the 1995 DOD Heavy Bomber Force Study to identify any significant differences. On the basis of our assessment of DOD’s bomber requirements and force structure plans, we developed four alternatives to the planned B-1B bomber force structure and assessed the costs and risks associated with each one. In identifying options for smaller bomber forces, we limited our analysis to B-1B alternatives because the B-1B will play no role in the nuclear mission and therefore seems a more logical candidate for downsizing than either the B-52 or the B-2. 
Also, we examined placing 24 more B-1Bs in the Air National Guard because this would result in a 50/50 active/reserve ratio and the Air Force has placed 50 percent or more of some refueling and air mobility assets in the reserve component. We asked the Congressional Budget Office to estimate the budgetary savings of the alternatives and discussed the risks associated with the alternatives with Office of the Secretary of Defense, U.S. Strategic Command, and Air Force officials. Because DOD and the Congress have considered the need for additional B-2s beyond the planned force in recent years and our options to retire or reduce the B-1B force may raise further questions about the need for additional B-2s, we assessed their need in light of the estimated cost of more B-2s and DOD’s aggregate conventional and nuclear war-fighting capabilities. We reviewed and compared cost estimates for 20 additional B-2s developed by DOD, the B-2 contractor, the Congressional Budget Office, and IDA. To assess the impact of more B-2s on DOD’s conventional war-fighting capabilities, we reviewed studies by IDA, the Congressional Budget Office, and several private organizations and compared their methodologies and key assumptions. We also assessed the contributions of B-2s by analyzing the types and number of targets assigned to B-2s in DOD’s 1995 Heavy Bomber Force Study and DOD’s Capabilities Based Munitions Requirements development process. To assess the impact of more B-2s on DOD’s nuclear force, we discussed the need for additional B-2s with U.S. Strategic Command officials and obtained their assessment of how additional B-2s would affect compliance with nuclear warhead carrying capability limits included in the START II. We performed our review at the Office of the Secretary of Defense; Joint Chiefs of Staff; Air Force Headquarters; the National Guard Bureau; IDA; the United States Central Command; the Central Command Air Forces; the U.S. Pacific Command; the U.S. 
European Command; the U.S. Strategic Command; the Air Combat Command; the 2nd Bomb Wing, Barksdale Air Force Base, Louisiana; the 28th Bomb Wing, Ellsworth Air Force Base, South Dakota; and the 509th Bomb Wing, Whiteman Air Force Base, Missouri. We conducted this review from November 1994 through May 1996 in accordance with generally accepted government auditing standards. DOD has not demonstrated convincingly that it needs to retain 187 bombers to meet war-fighting requirements. According to a major DOD study of nuclear requirements completed in 1994, only about 45 percent of DOD’s planned bomber force—66 B-52s and 20 B-2s—will be needed for the nuclear role. DOD’s decision to maintain an overall force of 187 bombers was shaped largely by three key DOD and Air Force studies—the BUR, the 1995 Heavy Bomber Force Study, and the Air Force Bomber Roadmap. None of the studies fully addresses the Commission on Roles and Missions’ concern that DOD may have more ground-attack capability than it needs or assesses whether other less costly alternatives exist to accomplish missions that would likely be assigned to bombers. Moreover, in concluding that DOD would need up to 100 bombers for a major regional conflict, the three studies assume that CINCs will use significantly more bombers in future conflicts than they plan to use today. In addition, the Air Force’s principal study of bomber requirements—the Bomber Roadmap—appears to have overstated bomber requirements by assuming that a significant portion of the bomber force will need to be reserved solely for nuclear missions, although DOD has taken bombers off nuclear alert and considers all bombers to be available for conventional operations. Our analysis shows that DOD has extensive, overlapping capabilities to conduct ground attack. While DOD needs a level of redundancy and overlap to provide CINCs with a safety margin and flexibility, it may not need to upgrade its capabilities to the extent currently planned. 
Despite recent downsizing, the services continue to operate about 5,900 advanced fixed-wing combat aircraft and helicopters, as well as other advanced airpower assets that will be used to attack the same types of targets as bombers during conventional conflicts. Although bombers are unique in that they carry large quantities of munitions over long distances, they do not provide a unique contribution to destroy most types of targets they would likely be assigned. In response to a finding by the congressionally chartered Commission on Roles and Missions of the Armed Forces that DOD may have more ground attack capability than it needs, DOD is reassessing its requirements for ground attack assets, including bombers, across the services. In 1994, DOD conducted the Nuclear Posture Review, the first such review in 15 years, to determine the number of bombers needed for the nuclear mission assuming that START I and II agreements would be implemented by 2003. The review concluded that the United States should retain 66 B-52Hs and no more than 20 B-2s for the bomber leg of the nuclear triad after analyzing several combinations of ballistic missile submarines, intercontinental ballistic missiles, and bombers that, together, could carry 3,500 warheads stipulated as the maximum allowable warhead carrying capability in START II. DOD tentatively plans to allocate 1,320 of these warheads to the bomber force. The review also concluded that B-1Bs were not needed for the nuclear role, and according to DOD officials, did not specify that any bombers be dedicated solely to the nuclear mission. In mid-1995, DOD determined that it would reduce its B-52H force from 94 to 66 and limit the number of B-2s to 20, consistent with the results of the Nuclear Posture Review. However, DOD subsequently decided to maintain 71 B-52Hs and convert the first B-2 test aircraft to an operational aircraft for a total of 21 B-2s. 
Although DOD plans to retain a larger number of B-52Hs and B-2s than previously planned, the decision to retain more aircraft was not prompted by a need for a larger nuclear force structure. According to Air Force officials, the Air Force decided to increase the B-52H force to provide a larger attrition reserve force to hedge against potential future losses of B-52Hs. Moreover, the 21st B-2 is being procured because the Congress made available an additional $493 million in fiscal year 1996 for the B-2 program. Although they may not be needed for the nuclear mission, the carrying capability of these additional aircraft will count toward the START II limits. In order to stay within treaty limits if the treaty is ratified, the Air Force plans to modify some B-52Hs so that they can carry fewer than their maximum capability of 20 warheads. Although none of the studies (BUR, the Air Force Bomber Roadmap, and the 1995 Heavy Bomber Force Study) concluded specifically that DOD should maintain 187 bombers, taken together, they played a major role in DOD’s decision to keep 187 bombers in the force and modify them for the conventional role. However, all three studies have significant limitations that may overstate DOD’s need for bombers. For example, none of the studies assessed the cost-effectiveness of bombers compared with that of other deep attack assets (such as tactical fighter aircraft and missiles) in DOD’s inventory. In addition, BUR did not adequately consider the potential contributions of precision-guided weapons and new weapon systems in development. Moreover, the Bomber Roadmap used some questionable assumptions. For example, it assumed that (1) bombers would be the only assets available during the initial days of a conflict to attack time-critical targets and (2) a significant number of bombers would need to be dedicated solely to nuclear missions. 
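The warhead accounting behind these decisions can be sketched with simple arithmetic. The following is an illustrative back-of-the-envelope calculation, not an analysis from the report: it uses the 1,320-warhead bomber allocation and the 20-warhead maximum B-52H loading cited above, while the per-B-2 loading is left as a hypothetical parameter because the report does not give that figure.

```python
# Back-of-the-envelope START II warhead accounting for the bomber leg.
# Figures from the report: a 1,320-warhead tentative bomber allocation
# and a 20-warhead maximum B-52H loading. The per-B-2 loading parameter
# is a hypothetical placeholder, not a figure from the report.

BOMBER_ALLOCATION = 1_320   # warheads tentatively allocated to bombers
B52H_MAX_LOAD = 20          # maximum warheads a B-52H can carry

def attributed_capacity(b52h_loads, b2_count=0, b2_load=0):
    """Total warhead carrying capability counted toward treaty limits."""
    return sum(b52h_loads) + b2_count * b2_load

# 66 B-52Hs at the full 20-warhead loading exactly match the allocation:
baseline = attributed_capacity([B52H_MAX_LOAD] * 66)
print(baseline)  # 1320

# Retaining 71 B-52Hs at full loading would exceed the allocation by
# 100 warheads, which is why the Air Force plans to modify some
# aircraft to carry fewer than 20:
expanded = attributed_capacity([B52H_MAX_LOAD] * 71)
print(expanded - BOMBER_ALLOCATION)  # 100
```

This illustrates why the extra aircraft matter for treaty compliance even though they are not driven by nuclear requirements: every retained bomber's carrying capability counts toward the limit.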
In concluding that about 100 bombers would be needed for the first major regional conflict, all three studies assumed that CINCs would use significantly more bombers than they plan to use today and deploy them earlier in future conflicts. However, this assumption appears questionable because DOD currently categorizes its ability to execute the two major regional conflict strategy as adequate and our analysis of DOD data shows that the threat is not expected to increase significantly within the next decade. BUR, completed in 1993, concluded that 100 bombers would be adequate for a major regional conflict and that some of these bombers would shift to a second conflict if needed. BUR further concluded that a total inventory of up to 184 bombers was needed to meet nuclear and conventional requirements. Joint Chiefs of Staff and Office of the Secretary of Defense officials told us that BUR’s conclusion that 100 bombers would be adequate for a major regional conflict was based on several factors—including the number of bombers used in Desert Storm and military judgment. However, DOD did not conduct detailed analysis or modeling to determine how a range of alternative bomber forces would fare in the context of two nearly simultaneous major regional conflicts. Moreover, DOD did not examine the cost-effectiveness of using bombers to destroy ground targets compared with the cost-effectiveness of using other deep-attack assets. In 1995, we reported on BUR’s methodology and concluded that DOD had not fully analyzed key BUR assumptions about the availability of forces, supporting capabilities, and force enhancements needed to execute the two major regional conflict strategy. BUR assumed that some specialized assets such as bombers would swing to a second major regional conflict, but as noted in our prior report, DOD did not analyze the specific types and numbers of assets that would swing, the timing of the swing, or logistical requirements. 
Also, BUR projected force requirements only to the 1999 time frame, prior to the completion of bomber modifications and the fielding of many new precision weapons (such as the Joint Direct Attack Munition and Joint Standoff Weapon) that should greatly improve fighter and bomber effectiveness and potentially reduce the number of bombers and fighters needed to fight two major regional conflicts. The Air Force Bomber Roadmap—first published in 1992 and updated in 1995—established the Air Force’s conventional concept of operations for bombers to provide initial attacks and sustained firepower for major regional conflicts and identified and set in motion a bomber modernization plan to upgrade the bombers’ conventional capabilities. The Roadmap established a requirement for 210 bombers, 23 more than DOD plans to retain in the force, through 2004 as shown in table 2.1. DOD has decided to keep only 187 bombers in the force because it considers other programs that compete with bombers for the Air Force’s share of projected budgets to be higher priority. However, in 1995, the Commander of the Air Combat Command, who is responsible for developing the Roadmap, testified that, on the basis of the Air Force’s analysis, he believed DOD’s planned force may be too small. Our analysis of the Bomber Roadmap showed that it may overstate requirements because it included three questionable assumptions. First, the Air Force accepted the BUR’s conclusion that 100 deployable bombers would be needed for a major regional conflict without conducting detailed modeling to validate this number. Second, the Air Force identified a requirement to dedicate 66 bombers for the nuclear mission even though DOD has removed bombers from nuclear alert and considers all bombers available for conventional missions. 
Third, the Air Force assumed that only bombers would be available to strike a notional set of over 1,250 time-critical target elements (aim points for about 240 targets) based on the military’s experience in Desert Storm. The Roadmap analysis showed that the current bomber force could strike only about 24 percent of the time-critical target elements in the first days, but, in 2001, upgraded bombers will be able to strike all of the target elements. With respect to the third issue, the Air Force did not take into account the contributions of other deep attack assets (such as Air Force and Navy tactical fighters and missiles) that could attack some of these same targets. We pointed out this shortcoming in our 1993 report on DOD’s bomber modernization plan. In response to our report, DOD stated that the Bomber Roadmap was not a coordinated DOD-wide effort, but an Air Force plan for equipping bombers. The 1995 updated Roadmap again did not address this shortcoming, even though current DOD planning guidance assumes that Air Force and Navy tactical aircraft would arrive early enough in theater to attack targets during the halt phase of a major regional conflict. The National Defense Authorization Act for Fiscal Year 1995 and the DOD Appropriations Act of 1995 required DOD to study bomber requirements and provide an independent cost-effectiveness analysis of Air Force bomber programs. The overall objective of the study was to assess bomber force requirements (on the basis of Defense Planning Guidance) for two nearly simultaneous major regional conflicts in 1998, 2006, and 2014, and to analyze the cost-effectiveness of alternative Air Force bomber forces in achieving U.S. military objectives. DOD contracted with IDA, a Federally Funded Research and Development Center, for the study. 
IDA used DOD’s then-projected force structure of 182 bombers, Defense Planning Guidance scenarios, and DOD planning factors for force deployments and weapons inventories for each of the 3 years as its baseline case to analyze and compare the cost-effectiveness of alternative bomber forces. The study also analyzed excursions from the Defense Planning Guidance, including shorter warning times, delayed arrival times for U.S. forces, fewer available tactical aircraft, and more capable enemy threats. To assess the cost-effectiveness of alternative bomber force mixes, IDA modeled five bomber force structures ranging from a small force of 115 bombers to a large force of 210 as shown in table 2.2. The number of bombers shown is the total aircraft inventory. The actual number of bombers that DOD assumed would deploy for each alternative in the study is classified but is less than the total inventory. On the basis of the results of IDA’s analysis, DOD concluded that (1) the planned bomber force can meet the national security requirements of two nearly simultaneous major regional conflicts for anticipated scenarios and reasonable excursions and (2) planned conventional mission upgrades to the B-1B force are more cost-effective than procuring additional B-2s. IDA’s analysis showed that the United States would win two nearly simultaneous major regional conflicts for all the options modeled. However, the study concluded that DOD’s planned force of 182 bombers was more cost-effective than other options, including the two smaller bomber forces modeled. While the Heavy Bomber Force Study is the most comprehensive of the DOD and Air Force studies to date, it has one key shortcoming. Like the other studies discussed, this study did not examine whether tactical fighters or long-range missiles could accomplish the mission more cost-effectively than bombers. 
Bomber force structure size varied for each of the options, whereas other deep attack forces such as tactical fighters remained constant. Although the three major studies of bomber requirements concluded that military commanders would need about 100 bombers for a major regional conflict, the CINCs currently plan to use far fewer than 100 bombers to implement their war plans. The number of bombers included in the CINCs’ current war plans may be smaller than DOD envisions in part because DOD has fewer bombers in its inventory today that are funded for combat operations and because the B-1Bs currently have limited conventional capabilities. Once the bombers are upgraded, the CINCs might choose to include more bombers in their plans than they would today. However, none of the CINCs’ representatives we spoke with expressed concern that the smaller number of bombers in DOD’s current inventory was a limiting factor that would adversely affect the outcome of a campaign. Additionally, one CINC’s current war plan would not require bombers to deploy as early as envisioned by DOD and Air Force studies. How quickly bombers deploy to forward operating locations would depend on the CINCs’ priority for airlift. In 1995, the Congressional Budget Office pointed out in its analysis of bomber force options that, even in a conflict with little warning, it is unlikely that CINCs would divert airlift to forward deploy bombers in lieu of other forces. The CINCs would likely use available airlift to rush more flexible tactical aircraft and ground forces to the theater while using bombers for operations from bases within the United States at reduced sortie generation rates. The services have numerous, overlapping ways to attack ground targets in major regional conflicts and have concluded that they have enough capability to carry out the national military strategy. 
CINCs plan for redundant target coverage when assigning targets to the services and often have many ways to attack targets using various combinations of weapons and platforms. Moreover, planned enhancements will increase DOD’s capabilities substantially over the next several years, particularly its capabilities to attack ground targets. DOD has numerous other ways to attack targets that would likely be assigned to bombers in conventional operations. Although DOD has reduced its total combat aircraft by almost 30 percent since the Persian Gulf War, the military services continue to operate over 5,900 fighter and attack aircraft and helicopters. Aircraft are increasingly being supplemented by other advanced combat airpower assets, such as long-range cruise missiles, unmanned aerial vehicles, and theater air defense forces. Many of these assets will be used to interdict enemy ground targets—one of the principal missions for which bombers are being maintained and upgraded. Table 2.3 identifies other airpower assets that are assigned the interdiction mission. We reviewed DOD’s plans to modernize its numerous combat airpower assets and concluded that some of DOD’s airpower modernization programs will add only marginally to the already formidable capabilities and some should be reconsidered from a joint perspective. We concluded that, although some redundancy is needed to provide the CINCs with operational flexibility, DOD may have more than ample capability to perform such missions. In May 1995, the congressionally mandated Commission on Roles and Missions of the Armed Forces also concluded that DOD may have greater quantities of strike aircraft and other deep attack weapons than it needs. CINCs routinely apportion more than 100 percent of the targets to the services to provide a safety margin and ensure flexibility. 
For example, we previously reported that one CINC assigned the Army 5 to 10 percent, the Navy 20 to 30 percent, the Marines 15 to 25 percent, and the Air Force 65 to 75 percent of one target type—a total apportioned range of 105- to 140-percent coverage—even though the CINC’s objective was to destroy only 80 percent of the target quantity. Therefore, even if the services can achieve only the low end of the total apportioned range (105-percent coverage), the 80-percent destruction goal will be met. This over-apportionment creates a margin of safety and allows flexibility to ensure targets will be hit even if some expected capabilities are not available. However, it also establishes the expectation that the services will acquire and maintain sufficient forces to provide this level of target coverage. Figure 2.1 shows the CINC’s total apportionment of targets to the services compared with the CINC’s destruction objective for selected targets identified for one major regional conflict. (Providing specific target names would require the figure to be classified.) Our analysis of DOD’s Capabilities Based Munitions Requirements database for two major regional conflicts in 2002 shows that the services have numerous ways to strike ground targets that may be assigned to bombers. This database consists of Defense Intelligence Agency ground target data for the two major regional conflict scenario, and in conjunction with CINC allocations of targets to the services, is used in DOD’s computation of munition requirements. It includes both strategic and interdiction targets, which are the bombers’ principal targets. Strategic targets are those vital to the enemy’s war-making capacity and may include manufacturing systems, communications facilities, and concentrated enemy armed forces. 
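The over-apportionment arithmetic described above can be sketched as follows. This is an illustrative calculation using the per-service percentages reported for the one target type; it is not a planning tool the CINCs use:

```python
# Illustration of CINC target over-apportionment for one target type.
# Per-service shares (low, high), in percent, are those cited above.
apportionment = {
    "Army":      (5, 10),
    "Navy":      (20, 30),
    "Marines":   (15, 25),
    "Air Force": (65, 75),
}
destruction_objective = 80  # CINC's goal: destroy 80 percent of the targets

low = sum(lo for lo, hi in apportionment.values())
high = sum(hi for lo, hi in apportionment.values())
print(f"Total apportioned coverage: {low} to {high} percent")
# Total apportioned coverage: 105 to 140 percent

# Even the low end of the apportioned range exceeds the objective,
# providing the safety margin the CINC intends:
print(low >= destruction_objective)  # True, with 25 points of margin
```

The calculation makes the report's point concrete: because even the minimum apportionment (105 percent) exceeds the 80-percent destruction objective, the objective is met with margin to spare, but the services are implicitly expected to field enough forces to cover the full apportioned range.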
Interdiction targets are ground targets generally beyond the close battle; commanders interdict these targets to divert, disrupt, or destroy them before they can effectively be used against friendly forces. We analyzed strategic and interdiction targets assigned to the Air Force to determine whether there were any bomber-unique target types (considering all Air Force aircraft but excluding other services’ assets that may also be assigned to hit the same types of targets as bombers). We found three bomber-unique targets in the first conflict and eight in the second conflict, as shown in table 2.4. The B-2 and B-1B unique target types were strategic targets. Most of the B-52H unique target types also were strategic targets. However, when considering all of the services’ ground attack assets, Air Force modeling of the two-major-regional-conflict scenario showed that there were no unique bomber target types. In response to a May 1995 recommendation from the Commission on Roles and Missions, DOD initiated a Deep Attack Weapons Mix Study to assess deep attack requirements across the services. The Commission recommended that DOD conduct a DOD-wide cost-effectiveness study to determine the appropriate number and mix of deep attack capabilities currently fielded and under development by all services. The President of the United States has directed that the study examine trade-offs between long-range bombers, land- and sea-based tactical aircraft, and missiles that are used to strike the enemy’s rear. The President also directed that it focus on the potential that the growing inventory and the increasing capabilities of weapons could allow some consolidation of the ships, aircraft, and missiles that will deliver them. The first part of the study, to be completed in late 1996, will analyze weapons mix requirements for DOD’s planned force in 1998, 2006, and 2014 and determine the impact of force structure changes on the weapons mix.
The second part of the study will analyze trade-offs among elements of the force structure, such as bombers and tactical aircraft, for the same years and is to be completed in early 1997. In May 1996, we recommended that DOD routinely review service modernization proposals based on how they will enhance DOD’s current aggregate capabilities and that such analyses serve as the basis for deciding funding priorities. Moreover, in recent testimony, we concluded that such assessments should (1) assess total joint war-fighting requirements; (2) inventory aggregate service capabilities, including the full range of available assets; (3) compare aggregate capabilities to joint requirements to identify excesses or deficiencies; (4) assess the relative merits of retiring alternative assets, reducing procurement quantities, or canceling acquisition programs where excesses exist or where substantial payoff is not clear; and (5) determine the most cost-effective means to satisfy deficiencies. DOD has not made a compelling case that it needs to maintain and upgrade 187 bombers in light of the services’ already extensive and overlapping capabilities to attack ground targets. Because the studies do not adequately consider the potential that DOD may need to reduce its overall ground attack capabilities and that other airpower assets may be more cost-effective in providing ground attack than bombers, they do not provide a sound basis for DOD’s conclusion that it needs 187 bombers. Once the bombers are upgraded, their contribution to conventional conflicts may be smaller than assumed by the studies if the CINCs maintain their plans to use fewer than 100 bombers for a major regional conflict and do not place higher priority on airlifting bombers to forward operating locations.
DOD’s Deep Attack Weapons Mix Study will provide DOD with an opportunity to address the methodological shortcomings of its prior studies and identify options to reduce some of its extensive ground attack capabilities, including bombers. The success of this study depends on how well DOD components will be able to work together to produce an objective analysis of DOD’s airpower and weapons requirements that results in a force that is both adequate and affordable within the context of projected DOD budgets. The Air Force faces significant challenges in implementing its conventional concept of operations for bombers established by the Bomber Roadmap. The Air Force’s ability to implement the concept depends on its ability to successfully complete its bomber modernization program, achieve and maintain an acceptable mission capable rate, and ensure that the bombers can sustain operations from forward operating locations. The B-2 has not demonstrated that it can meet some of its most important conventional mission requirements, and most B-1B modernization programs will not be completed until about 2006. The B-1B, which is expected to be the backbone of the conventional bomber force, has experienced difficulty in maintaining acceptable mission capable rates. Moreover, demonstrating the capability to operate at overseas locations poses a significant challenge for the B-2 and the B-1B, both of which were originally designed with limited conventional capabilities and deployment requirements. For example, limited mobility readiness spares packages for the B-2 and B-1B and shortages in some military occupations for the B-1B and B-52H may hinder the deployment and sustainability of these bombers. 
The Bomber Roadmap established a plan to upgrade the conventional capabilities of the bombers to enable them to deliver (1) additional types of unguided munitions currently in DOD’s inventory and (2) new high-altitude, all-weather precision munitions that DOD is developing for the bomber force and Air Force and Navy tactical aircraft. The plan also provides for defensive system upgrades to better protect the B-1B against enemy air defense systems and new radios for all bombers to allow them to better communicate in the tactical environment. The B-52H modification program is almost complete. However, the B-2 and B-1B programs will not be completed until about 2000 and 2008, respectively. The Air Force faces significant technical challenges in completing the 21 B-2s authorized by the Congress, modernizing the B-1B, and demonstrating that they will meet operational requirements. The B-2’s principal mission changed from nuclear to conventional in late 1992 when the Air Force decided to incorporate precision-guided munitions on the bomber. Its operational requirements specify that the B-2 weapon system have low observable characteristics and sufficient range and payload to deliver nuclear or conventional weapons anywhere in the world, requiring the blending of conventional and state-of-the-art technologies. This blending of aircraft technologies makes the B-2 a complex and costly aircraft to develop and produce. In 1987, the Air Force gained approval to procure the B-2 concurrently with development and testing. The Air Force is accepting the B-2 in three configuration blocks, with each new block acquiring additional capabilities that must be demonstrated in flight testing. The first B-2 deliveries are block 10 configurations, for which flight testing has been completed. The block 10 configuration provides the Air Force with a training aircraft with limited combat capability.
The block 20 configuration will include an interim precision strike capability not available in the block 10, and the block 30 B-2 will have additional precision strike capability. By 2000, the Air Force plans to have 21 block 30 B-2s. Since 1990, we have issued several unclassified reports on the Air Force’s progress and problems in fielding the B-2. In August 1995, we reported that the Air Force had not yet demonstrated that the B-2 could meet some of its important mission requirements and that the contractor had experienced difficulties in delivering B-2s that meet operational requirements. The report noted that B-2s were generally delivered late with significant deviations and waivers, but that the Air Force plans to correct all deficiencies as the aircraft undergo block modifications. Also, we found that flight testing has been slower than planned and that the Air Force’s projections for completing testing were optimistic. We estimated that the Air Force may need an additional 55 aircraft test months to complete the planned flight testing. As of April 1996, the Air Force had completed 75 percent of the flight testing; it plans to complete flight testing by July 1, 1997. However, given the amount of flight testing that remains, the Air Force may not be able to meet this completion date. The Air Force has reduced the amount of flight testing planned and is assessing further reductions in order to meet the planned completion date. Early test results have identified potential problems in the B-2’s ability to meet some important mission requirements. For example, achieving acceptable radar signatures, the most critical stealth feature needed for B-2 operational effectiveness, has been a problem. This problem resulted in the redesign and retesting of the test aircraft, and in redefinition of acceptable radar signatures for the block 10 configuration. 
Subsequently, the Air Force completed radar signature flight testing for the block 30 B-2 in March 1996 and characterized the test results as generally meeting predictions. However, in some cases the radar signatures did not meet planned essential employment capabilities. The Air Force is analyzing the signatures that did not meet requirements to determine whether further design and testing are needed. Also, testing revealed problems with the software and radar system for the terrain-following and terrain-avoidance system needed for low-level flight. Additional problems may be found as the concurrent testing and manufacturing proceed, potentially resulting in the delivery of B-2s with limited operational capability or the need for modifications beyond the block 30 configuration, which would require additional funds to correct. The B-1B has had a history of problems and was fielded with some unproven systems that did not meet user requirements, including the weapon, defensive avionics, and terrain-following systems. DOD has embarked on a three-phase Conventional Munitions Upgrade Program for the B-1B that will incrementally equip it with advanced precision-guided munitions and upgraded computer and defensive avionics systems. Phase I will equip the bomber with three types of the most modern family of cluster munitions, including the combined effects munition to attack soft area targets, mines to attack armor and personnel, and sensor-fuzed weapons to attack armor. Phase II will add global positioning system technology; upgrade communications, computer, and defensive avionics systems; and enable the B-1B to carry new near-precision, short-range munitions such as the Joint Direct Attack Munition and Wind-Corrected Munitions Dispenser. Phase III will provide the aircraft with standoff capability by integrating the Joint Standoff Munition.
While most of the upgrades will be completed about 2006, the defensive avionics upgrades will not be completed until about 2008 (as shown in fig. 3.1). The Air Force has changed its plans to upgrade the B-1B computer and defensive avionics systems, which are crucial for integrating and employing precision munitions, because the planned computer upgrades would not fully meet operational requirements and the planned defensive avionics system was too costly. Upgrading computers and software is critical to enhancing the conventional capabilities of the B-1B. In 1995, we reviewed the Air Force’s plans to upgrade the B-1B’s computer and found that the Air Force had analyzed several options ranging from simply expanding the current system’s memory to installing new systems and software. Because of funding priorities, the Air Force initially chose to only upgrade the memory of the current system. We concluded that simply upgrading the memory would be inadequate because it would not fully support the planned conventional mission upgrades and operational requirements. In response to our report, the Air Force decided to increase funding to replace the existing computer and convert to new software. We further concluded that it is extremely important that the Air Force not revert to a computer upgrade approach for the B-1B based on cost alone but ensure that sufficient resources are allocated so that the computers support the planned B-1B conventional capability enhancements. The Air Force currently estimates that the computer upgrade design phase will be completed in January 1997 and the upgrades will be completed about the middle of fiscal year 2006. In 1988, the Air Force determined that the B-1B defensive avionics system was flawed and could not meet contract specifications. The specifications were relaxed to support the B-1B’s nuclear role as a low-altitude penetrator against Soviet air defenses. 
In 1992, the Bomber Roadmap noted that an effective defensive avionics system is more crucial for conventional missions because of the diversity and number of threats that the B-1B may encounter. In 1993, DOD began to evaluate defensive avionics systems requirements and alternatives and developed a two-phase approach to upgrade the defensive avionics system to incrementally add capabilities based on when enemy threat systems are expected to become operational. DOD planned for limited operational capability in 2003 and full operational capability in 2007. In 1995, the defensive avionics system upgrade was again redirected to another less costly two-phased approach that incorporates off-the-shelf components already being used on other aircraft and technology from other programs. The Air Force plans for the first phase to provide capabilities adequate for the threat expected through 2002 and the second phase to provide full capability against more advanced threats in 2008. The Air Force currently is modifying the operational requirements documents for the defensive avionics system and has not completed the required cost and operational effectiveness analysis for it. This analysis was initially to be completed in the fall of 1995, and the Air Force currently expects it to be completed in October 1996. In a December 1995 letter to the Secretary of the Air Force commenting on the conventional upgrade program, we noted that the B-1B was fielded with a defensive avionics system that did not meet user requirements in large part because testing was sacrificed to meet the schedule of fielding the system. We observed that the Air Force’s current plan appears to include an adequate testing program. However, we cautioned that the planned testing program needs to be maintained even if it means extending the program’s completion. It has historically been difficult for the B-1B force to maintain an acceptable mission capable rate. 
These rates directly affect the number of sorties that can be flown over a given period. In the Defense Authorization Act for Fiscal Year 1994, the Congress expressed its concern about the low B-1B mission capable rate by requiring the Air Force to conduct a B-1B Operational Readiness Assessment to determine whether one B-1B wing could achieve and maintain a 75-percent mission capable rate for 6 months, if fully supported with personnel, spare parts, maintenance equipment, and logistical support. The Air Force conducted the assessment between June 1, 1994, and November 30, 1994, and issued its final report to congressional defense committees on February 28, 1995. We, at the direction of the Congress, monitored and reported on the assessment and found that it was complete and comprehensive and that the data it generated were credible. The Air Force reported that during the assessment, the 28th Bomb Wing achieved an 84-percent mission capable rate. At the end of the assessment, the rate for the entire B-1B fleet was about 65 percent. The report pointed out that the assessment showed that the B-1B support structure, if fully funded, could keep the B-1B in a mission capable status but that it was not a measure of the B-1B’s effectiveness in executing assigned missions. For the 2 years prior to this assessment, the B-1B fleet mission capable rate averaged about 57 percent. The rates have improved over time and, in the first 6 months of fiscal year 1996, averaged about 72 percent. The Air Force concluded that, with an additional $11.2 million for management actions and reliability and maintainability improvements, the B-1B fleet has the potential to achieve and sustain a 75-percent mission capable rate by 2000, provided that ongoing initiatives are completed and funding for spare parts continues. In response, the Congress included $11.2 million in the Air Force’s fiscal year 1996 budget to improve the B-1B’s mission capable rate.
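The link between mission capable rates and sortie generation can be illustrated with a simple sketch. This is not an Air Force planning model: the mission capable rates below appear in the report, but the fleet size and the per-aircraft sortie rate are hypothetical assumptions chosen only to show how the rate scales the sorties a fleet can generate:

```python
def daily_sorties(aircraft, mc_rate, sorties_per_mc_aircraft=1.0):
    """Expected sorties per day from a fleet at a given mission capable (MC) rate."""
    return aircraft * mc_rate * sorties_per_mc_aircraft

fleet = 60  # hypothetical number of deployed B-1Bs (assumption, not from the report)

# MC rates cited in the report for the B-1B fleet.
for label, rate in [("2-year historical average", 0.57),
                    ("first half of FY 1996", 0.72),
                    ("Air Force goal", 0.75)]:
    print(f"{label} ({rate:.0%} MC): about "
          f"{daily_sorties(fleet, rate):.0f} sorties per day")
```

Under these assumptions, moving the fleet from the 57-percent historical average to the 75-percent goal adds roughly a third more sorties per day, which is why the report treats the rate as central to the Bomber Roadmap’s sortie assumptions.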
However, on the basis of our analysis of the operational readiness assessment, we reported that the $11.2 million estimate was optimistic and that the Air Force cannot predict how successful the ongoing or planned initiatives will be. Therefore, the potential cost to achieve and sustain a 75-percent mission capable rate is unknown. Significant challenges remain in demonstrating that the numbers of B-2s and B-1Bs envisioned for use in conventional conflicts will be able to operate from forward operating locations for sustained periods of time. For example, whereas nuclear missions require a single-sortie penetration of enemy airspace, conventional missions require repetitive sorties, the ability to deploy to forward operating locations relatively close to the conflict, and the ability to sustain operations for an extended period of time. Mobility readiness spares packages, which allow the bombers to operate from remote locations without resupply until a supply line is established, were not initially authorized for B-2 and B-1B units because they were not needed for the nuclear mission. Also, personnel requirements were geared primarily to nuclear operations. Officials at one war-fighting command told us that they raised concerns to the Air Force about the reliability, deployability, and supportability of the B-1B in developing their war plans and that they initially preferred the B-52H. These concerns related to the B-1B’s historically low mission capable rate, insufficient mobility readiness spares packages, and personnel shortfalls. But, at the urging of the Air Force, the war-fighting command has included some B-1Bs in its war plans. Historically, the Air Force has equipped deploying aircraft units with mobility readiness spares packages that would support them in combat operations for a 30-day period without the need for resupply. This 30-day period allows time for the Air Force to establish a resupply system as airlift becomes more readily available.
In 1993, we reported on adding conventional capabilities to the bombers and noted that 30-day packages were critical to sustaining B-52G operations in Operation Desert Storm. Currently, tactical fighter and B-52H units are authorized 30-day packages. However, the Air Force plans to provide B-2 and B-1B units with packages that will support them for only a 14-day period. Air Combat Command logistics officials responsible for managing the packages believe that the 14-day kits may not be adequate to sustain combat operations until resupply systems are in place. However, the Air Force has not funded 30-day packages because it views other programs as higher priorities. The Air Force has budgeted $98.1 million in the fiscal year 1997 FYDP to procure additional B-1B parts and equipment for the 14-day packages currently authorized. According to Air Force officials, this amount should fully fund these packages. The 1997 FYDP does not include funds for additional packages to support the additional B-1B units that the Air Force will establish with the reconstitution reserve aircraft. The Air Force also plans to fund 14-day mobility readiness spares packages for B-2 units, using funds appropriated for the B-2 program. According to Air Force officials, the size and cost of the packages have not yet been determined because the Air Force has limited experience with the B-2 and cannot yet effectively predict which parts are likely to break and, therefore, should be included in the packages. The Air Force has formed a team of B-2 logisticians and maintenance personnel to determine the mobility readiness spares package requirements for the B-2. By 2000, the Air Force expects to be able to deploy 16 block 30 B-2s with 14-day packages. However, it is not clear that 14-day packages will be adequate, particularly given that some B-2s would be expected to swing to a second major regional conflict if the need arose.
The Air Force currently cannot meet its war-fighting requirement to support the full complement of B-1B and B-52H bombers allocated to war-fighting CINCs because of personnel shortages in some occupational specialties, especially bomb assembly and bomb loading. The shortages will increase significantly in fiscal years 1999 to 2001 after the Air Force has established additional B-1B squadrons using the reconstitution reserve aircraft. By 2003, the Air Force estimates it will need about 1,600 more personnel than available (as shown in table 3.1). DOD did not include funding in the fiscal year 1997 FYDP to resolve these personnel shortages. Moreover, the Air Force’s program objective memorandum for fiscal year 1998 did not include funding to alleviate them. The Air Force has tasked the Air Combat Command to develop a plan and identify funding requirements to eliminate the shortages using either active or reserve personnel or a combination of both. The numbers in table 3.1 may change somewhat once the Air Combat Command completes a more detailed review of its requirements. The Air Force faces significant challenges in successfully implementing its conventional concept of operations to use bombers in two major regional conflicts. The Air Force has not yet demonstrated that the B-2 can meet some of its most important operational requirements. B-2 testing to date has revealed some problems, and continued testing concurrent with production could result in the delivery of B-2s with limited conventional capabilities or that require additional modification. The B-1B computer and defensive system upgrades have been recently redirected and will not be fully completed until 2006 and 2008, respectively. The Air Force’s planned testing programs for the B-2 and B-1B need to be fully implemented to ensure that operational requirements are met. 
The Air Force also faces operational challenges in deploying bombers to forward operating locations early in a conflict and sustaining their operations. If the B-1B force cannot achieve and sustain a 75-percent mission capable rate, it will not be able to generate the number of sorties envisioned by the Bomber Roadmap. While the B-1B Operational Readiness Assessment showed that one fully supported wing of B-1Bs can achieve and sustain at least a 75-percent rate, it is still not known whether the entire B-1B force can achieve that rate by 2000. In addition, the Air Force has not resolved the personnel shortages that prevent it from meeting CINCs’ requirements for deployed bombers. Also, the bombers may not be able to sustain operations before a resupply system is in place because the Air Force plans to fund 14-day mobility readiness spares packages for the B-2 and B-1B instead of 30-day packages. Bombers that remain in the force will need to be able to deploy and sustain operations at overseas locations to meet CINC requirements. Therefore, we recommend that the Secretary of Defense require the Secretary of the Air Force to (1) provide an assessment of the risk resulting from shortfalls in meeting requirements for mobility readiness spares packages and providing personnel needed to support conventional operations overseas, including the impact of the shortfalls on the Air Force’s ability to meet CINC requirements for bombers and (2) prepare plans and time frames to eliminate these shortfalls or mitigate the risks associated with them.
In written comments on a draft of this report, DOD partially concurred with the recommendation that the Secretary of Defense require the Secretary of the Air Force to (1) provide an assessment of the risk resulting from shortfalls in meeting requirements for mobility readiness spares packages and providing personnel needed to support conventional operations, including the impact of the shortfalls on the Air Force’s ability to meet commander in chief requirements for bombers and (2) prepare plans and time frames to eliminate these shortfalls or mitigate the risks associated with them. DOD agreed that there is a shortfall in personnel impacting the Air Force’s ability to meet requirements. The Air Force is evaluating several options to resolve the personnel issue. DOD did not agree that there is a shortfall in the mobility readiness spares packages. DOD noted that, after careful review and analysis, it made a conscious decision to field 14-day versus 30-day packages for the B-1B and B-2. DOD said that the new logistics emphasis on rapid transportation versus large and expensive inventories is consistent with 14-day packages. Also, DOD noted that it incorporated DOD’s strategic logistics initiative in B-1B and B-2 mobility readiness spares package computations. Neither Air Force nor DOD officials provided evidence that the decision was based on logistics initiatives, however. Moreover, DOD’s position is contrary to information we obtained from the Air Combat Command and Air Force headquarters concerning this issue. Officials at both levels expressed concern that the 14-day packages were insufficient to meet requirements and that the decision to fund only the 14-day package was budget driven. DOD’s fiscal year 1997 FYDP includes about $17 billion to operate, sustain, and modernize the planned bomber force for 1996 through 2001. 
As shown in table 4.1, $6.3 billion (37 percent) reflects investment costs, while $10.7 billion (63 percent) reflects amounts planned to operate and support the bombers. Operations and support spending is expected to increase significantly after 2001, once the Air Force has established two new squadrons of B-1Bs and has completed the B-2 program. Cost estimates developed by IDA for the 1995 Heavy Bomber Force Study show that the B-1B force will account for the largest portion of future bomber operation and support costs but that the B-2 will be by far the most costly bomber to operate on a per aircraft basis, costing over three times as much as the B-1B and over four times as much as the B-52H. The total cost to modernize DOD’s heavy bomber force is likely to exceed $7 billion by 2008. In addition to spending over $6 billion between fiscal years 1996 and 2001 to modernize the bomber force, the Air Force expects to spend almost $800 million beyond 2001 to complete modifications to the B-1B. Moreover, the Air Force is studying options to upgrade the B-2 force beyond the block 30 configuration, which, if approved, would result in additional investment costs beyond those programmed in the fiscal year 1997 FYDP. Operations and support costs included in the fiscal year 1997 FYDP support a smaller number of operational bombers during the initial years, then grow to support a larger force once the Air Force establishes two new B-1B squadrons and additional B-2s enter the inventory. For example, in fiscal year 1996, the fiscal year 1997 FYDP reflects funding for only 60 operational B-1Bs because the Air Force has placed 27 B-1Bs in reconstitution reserve and categorizes the remaining aircraft as test or backup assets. Operations and support costs for 2001 reflect funding for 82 operational B-1Bs. In addition, the Air Force expects to have 16 operational B-2s by 2000 versus 9 B-2s in fiscal year 1996.
As more B-2 and B-1B aircraft become operational, costs for personnel, fuel, general and system support, and depot-level maintenance will increase. According to an analysis conducted by IDA as part of the 1995 DOD Heavy Bomber Force Study, annual operations costs for DOD’s planned bomber force will continue to increase beyond 2001, until the planned bomber force reaches its steady state in the year 2007 (when bomber modifications are nearly completed). The Air Force does not have as much experience operating the B-1B and the B-2 as it does operating the B-52. Thus, B-1B and B-2 long-term operations and maintenance costs are somewhat difficult to predict. However, costs to maintain the B-1B and B-2 force, particularly for items such as software maintenance, are expected to increase once these aircraft are upgraded for the conventional role and gain the capability to deliver a wider range of unguided and precision-guided weapons. As part of the 1995 DOD Heavy Bomber Force Study, IDA estimated steady state operations and support costs for each of the bombers. Figure 4.1 compares the average annual operations and support costs for each of the bombers reflected in DOD’s fiscal year 1997 FYDP with IDA’s estimate of annual steady state costs to operate and maintain each of the bombers. The planned bomber program will cost about $337 million more annually than the average annual costs in fiscal year 1997 FYDP, or about $2 billion more over a 6-year period. This represents an increase in costs of 20 percent. As shown in figure 4.1, the total B-1B force will cost more than either the B-52H or the B-2 force to operate and sustain both in the near term and the more distant future. This is because DOD plans to maintain a larger B-1B force compared with the B-52H and the B-2 forces. As shown in figure 4.2, each B-2 is over three times as expensive as a B-1B and over four times as expensive as a B-52H. 
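The cost figures above can be cross-checked with simple arithmetic. The sketch below is illustrative only; the dollar amounts come from the report, and the comparison of the $337 million annual delta against the FYDP’s average annual operations and support cost is our own hedged reading of the report’s 20-percent figure, not a calculation the report itself lays out:

```python
# Figures from the fiscal year 1997 FYDP (FY 1996-2001), in billions of dollars.
total_fydp = 17.0
investment = 6.3
operations_support = 10.7

# Shares of the $17 billion total (report: 37 percent and 63 percent).
inv_share = investment / total_fydp
os_share = operations_support / total_fydp

# IDA steady-state estimate: about $337 million more per year than the FYDP average,
# or about $2 billion over the 6-year period.
extra_per_year = 0.337  # billions
extra_over_six_years = extra_per_year * 6

print(f"Investment share: {inv_share:.0%}; O&S share: {os_share:.0%}")
print(f"Added cost over six years: about ${extra_over_six_years:.1f} billion")
```

Note that $337 million per year is roughly a fifth of the FYDP’s average annual operations and support funding ($10.7 billion over 6 years, or about $1.8 billion per year), which is consistent with the 20-percent increase the report cites.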
The total cost to modernize DOD’s bomber force will be at least $7 billion through 2008. The fiscal year 1997 FYDP includes about $6.3 billion to modernize the heavy bomber force. About 95 percent of these funds will be used to upgrade the conventional capabilities of the B-1B and complete the B-2 program. Modifications to the B-52H to enhance its conventional capabilities and improve safety and reliability will cost only about $300 million. DOD plans to spend almost an additional $800 million beyond 2001 to complete the B-1B conventional upgrade. The costs to modernize the B-1B force between fiscal years 1996 and 2008 will exceed $2.8 billion. The Air Force plans to spend about $2.3 billion to improve the B-1B’s conventional capabilities and about $0.5 billion to improve the B-1B’s engine, power system, and flight safety. The estimated B-1B investment cost is shown in table 4.2. The fiscal year 1997 FYDP includes about $4.1 billion in research and development and procurement funds to complete 21 B-2s. The 1994 Defense Authorization Act limited B-2 program acquisition costs to $28.968 billion, expressed in fiscal year 1981 constant dollars. In August 1995, we reported that an Air Force cost estimate indicated the final cost for 20 operational aircraft will be about $28.820 billion in fiscal year 1981 dollars, or about $44.4 billion in then-year dollars. Although the legislative cost cap for the first 20 aircraft no longer applies as a result of language included in the fiscal year 1996 Defense Authorization Act, the Air Force still plans to complete the first 20 B-2s for about $44.4 billion. The Air Force plans to use $493 million in additional B-2 funds made available by the Congress in fiscal year 1996 to convert a test aircraft, known as AV-1, into the 21st operational B-2. The Air Force is studying several options to upgrade the B-2’s capabilities beyond those included in block 30 that could result in additional B-2 investments. 
In 1994, the Air Force began to explore options for a B-2 Multi-Stage Improvement Program by contracting with the B-2 prime contractor to study potential enhancements to the B-2. The contractor developed four options to improve the B-2’s conventional capabilities and reduce operations and support costs. The Air Force will further assess the options to determine their cost-effectiveness. Also, as part of the 1995 DOD Heavy Bomber Force Study, IDA identified several additional enhancements to the B-2 for DOD to consider. The fiscal year 1997 FYDP does not include funding for any of these options. Over the next decade, DOD plans to spend billions of dollars to operate, sustain, and modernize the bomber force. In constant dollars, the costs to operate and sustain the bomber force will increase as the Air Force funds more bombers for operations and the bomber force reaches a steady state around 2007. While the B-1B will cost more in total operations and support costs on an annual basis than the other bombers because of its larger numbers, the B-2 will be by far the most expensive bomber to operate and sustain on a per aircraft basis, costing over three times as much as the B-1B and over four times as much as the B-52H. On the basis of our analysis of DOD’s requirements for bombers and planned force structure, we identified four options for reducing and restructuring DOD’s bomber force that would achieve cost savings while retaining extensive aggregate airpower capabilities. The first two alternatives—retiring all or a portion of the B-1B fleet—would result in a smaller bomber force than DOD currently plans. Retiring or reducing the B-1B force would not result in a significant decrease in DOD’s existing capabilities given that the B-1B currently lacks an effective defensive avionics system and is capable of delivering few types of conventional weapons. 
Retiring or reducing the B-1B force after the conventional upgrades are completed would reduce the CINCs’ ability to attack some targets as quickly as desired and would reduce DOD’s long-range capability. However, DOD would retain sufficient airpower capabilities in the aggregate to destroy ground targets associated with two major regional conflicts. The third and fourth options—increasing the number of B-1Bs in the Air National Guard and reducing the number of planned B-1B bases—offer lower cost savings because they do not reduce the number of bombers in the planned force. The options we developed, even those that call for a smaller bomber force, assume that DOD will maintain its planned force of 21 B-2s and 71 B-52Hs. These aircraft will continue to be needed for the nuclear role and therefore appear to be less suitable candidates for retirement or downsizing than the B-1B. Although both DOD and the Congress have considered the need for additional B-2s in recent years, substantial future costs could be avoided if the size of the B-2 force is capped at 21 aircraft as DOD currently plans. Procuring additional B-2s would hinder DOD’s efforts to develop an affordable long-term recapitalization plan unless offsetting cuts in other programs were realized. According to DOD officials, DOD must identify funds for recapitalization if it is to ensure a modern, ready force for the future. For example, many of the tactical aircraft purchased during the defense buildup in the 1980s will reach their projected retirement age over the next 10 or more years. DOD’s tactical aircraft procurement plans call for significantly greater resources in the outyears than are currently programmed. By the year 2001, DOD expects procurement funding to increase to $60 billion—over 40 percent higher than the administration’s fiscal year 1997 budget request. 
This plan assumes that (1) the defense budget top line will stop its decline in fiscal year 1997 and begin to rise again, (2) DOD will achieve significant savings from infrastructure reductions, and (3) DOD will achieve significant savings through acquisition reform. Within the past few years, defense experts have questioned the realism of DOD’s plan for achieving a balanced, modernized force that assumes no further reductions from force levels established by BUR. For example, our analysis of DOD’s planned funding for infrastructure, issued in April 1996, states that DOD will realize no significant net infrastructure savings between fiscal years 1996 and 2001 that can be applied to modernization. Moreover, DOD has not quantified the savings it expects to achieve from acquisition reform. In recent months, DOD’s leadership has recognized that DOD may need to identify other sources of funding from within DOD’s budget for high-priority modernization efforts. Among the options being considered by DOD are reducing infrastructure below levels assumed in DOD’s fiscal year 1997 FYDP, transferring additional missions to the reserve component, and identifying opportunities for eliminating systems that provide redundant capabilities. DOD’s Deep Attack Weapons Mix Study, which will examine the contributions of each of the services’ airpower assets compared with other assets in DOD’s current and projected inventory, is one such effort that may identify opportunities for reducing or eliminating redundant airpower capabilities, according to DOD officials. The four options we developed differ in terms of their potential for achieving cost savings and their effects on DOD’s aggregate airpower capabilities. The Congressional Budget Office estimated the potential budget savings associated with the four options, using DOD’s fiscal year 1996 plan as its baseline. As shown in table 5.1, option one would yield the greatest cost savings; option four the least savings. 
Options two through four are not mutually exclusive. Various combinations of them would save DOD more money. The first two options would reduce somewhat DOD’s aggregate capability to attack some ground targets and would reduce DOD’s inventory of long-range assets that can attack targets at significant distances without refueling. However, because significant redundancy exists in the services’ ability to destroy ground targets, the United States would still have sufficient airpower capabilities to destroy ground targets associated with two major regional conflicts. The last two options would keep 95 B-1Bs in the force and therefore would have negligible impact on DOD’s conventional capabilities. Because the B-1B will be taken out of the nuclear role in the near future, none of the options will have an effect on DOD’s planned nuclear force, even if START II is not ratified. As discussed in chapter 2, DOD’s principal studies of bomber requirements have significant limitations in their methodology and in some cases include questionable assumptions that may overstate DOD’s need for bombers in conventional conflicts. Moreover, our 1996 review of DOD’s air power capabilities and the Commission on Roles and Missions concluded that DOD appears to have more than ample capability to destroy ground targets. In October 1995, the Chairman of the Joint Chiefs of Staff stated that he will challenge the Joint Requirements Oversight Council to propose innovative recommendations to maintain U.S. war-fighting capability without necessarily maintaining the same number of systems. The Chairman’s report further stated that DOD cannot afford all of the validated requirements in the queue and that tough decisions must be made on which modernization programs to go ahead with and which to cancel so that DOD can develop and implement a long-term, sustainable recapitalization plan. 
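Because options two through four can be combined, their estimated savings can be summed for a rough upper bound. The sketch below uses the Congressional Budget Office five-year figures reported later in this chapter and assumes the estimates are simply additive, which the report itself does not claim.

```python
# Five-year (FY 1997-2001) budget authority savings for the four options,
# in billions of dollars, as estimated by the Congressional Budget Office.
savings = {
    "retire all 95 B-1Bs": 5.9,
    "retire 27 reconstitution reserve B-1Bs": 0.45,
    "place 24 more B-1Bs in the Air National Guard": 0.07,
    "cancel the Mountain Home move": 0.04,
}

# Option one stands alone; options two through four are not mutually exclusive.
# Treating their estimates as additive (an assumption) gives a rough combined figure.
combined_2_through_4 = sum(v for k, v in savings.items() if "95" not in k)
print(f"Options 2-4 combined: about ${combined_2_through_4:.2f} billion")
```

The combined figure is well below option one's $5.9 billion, which is why the first option dominates the savings comparison in table 5.1.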
Retiring the B-1B is one option that would somewhat reduce DOD’s aggregate conventional airpower capabilities and result in significant cost savings—about $5.9 billion in budget authority for fiscal years 1997-2001. Eliminating the B-1B force would decrease DOD’s inventory of long-range airpower assets and increase U.S. forces’ dependency on other capabilities and, therefore, the risk that some targets might not be hit as quickly as desired. However, it is plausible to expect that the targets could be hit by other U.S. military assets. B-2s and B-52Hs would still be available for missions requiring long-range and large payload capabilities. Our analysis of Air Force modeling of the air campaign for two major regional conflicts in the 2001-2005 time frame showed there are no unique B-1B targets. Table 5.2 shows that DOD has numerous ways to attack the target the B-1B would strike most frequently during the first 7 days of a conflict. In May 1995, DOD’s Heavy Bomber Force Study concluded that retiring the existing 95 B-1Bs would save $20 billion over 25 years but would not be cost-effective because it would reduce force effectiveness appreciably. However, the DOD Heavy Bomber Force Study focused on comparing the relative cost-effectiveness of alternative bomber forces. It did not attempt to evaluate cost-effectiveness trade-offs between bombers and other force alternatives, such as carrier battle groups or Air Force tactical aircraft. Air Force officials and documents cite several advantages to keeping B-1Bs in the force. For example, near-supersonic airspeed and maneuverability give the B-1 the ability to fly with Air Force fighter aircraft in force packages much like the F-111 did in the Gulf War—but instead of four 2000-pound weapons, the B-1 can carry as many as 24. 
Another advantage of using bombers in conventional conflicts is that they can be based outside the theater of operations and attack targets at greater ranges than fighter aircraft that require refueling. Retiring the B-1B could increase the CINCs’ need to rely on refueling assets in planning an air campaign. However, DOD plans to improve its refueling capabilities through greater use of multi-point refueling, and the most likely theaters are small enough that, with available refueling support, all types of aircraft can reach most targets. The loss of long-range capability associated with retiring the B-1B would have the greatest impact in scenarios in which tactical aircraft are assumed to have no access or limited access to bases in theater. However, the United States has agreements with many nations to facilitate access to overseas bases in times of crisis. Another advantage to keeping the B-1B is that it provides mass—the ability to drop large quantities of weapons to achieve widespread destruction and, as the B-52 demonstrated during Desert Storm, a psychological effect. However, even if the B-1Bs were retired, DOD would still have B-52Hs and B-2s available for this purpose in numbers comparable to those used during Desert Storm. Retiring the B-1B would not degrade U.S. military capabilities in mission areas other than ground attack. Unlike multi-mission platforms such as F-16s and F/A-18s, which would be assigned many of the same types of targets as B-1Bs during a conventional conflict, the B-1B does not have an air-to-air capability. In addition, as noted in chapter 3, the B-1B bomber—unlike many other ground-attack assets in DOD’s current inventory—has not yet demonstrated critical capabilities needed to be effective in conventional operations. Retiring the B-1B force also would have no adverse effect on DOD’s nuclear mission. Unlike the B-52H and the B-2, the B-1B will no longer have a nuclear mission once B-2s enter the force. 
DOD officials stated that even if START II is not ratified and the United States decides to maintain a larger nuclear force than the Nuclear Posture Review recommended, DOD would not reassign B-1Bs a nuclear role. Once the B-1B’s computers are modified so that the B-1B can deliver precision conventional weapons, the B-1B will no longer have the software needed to deliver nuclear weapons. DOD could modify B-1B software and recertify personnel for the nuclear mission. However, this would require at least 18 months and would be very costly, according to DOD officials. Instead, DOD evaluated several other options for maintaining a larger force structure in the event that START II implementation is delayed, such as keeping more TRIDENT submarines than if the treaty is implemented. Retiring the B-1B force would save about $5.9 billion in budget authority and about $5.3 billion in budget outlays for fiscal years 1997-2001. Table 5.3 identifies the annual savings for this option. In estimating the cost savings of this option, the Congressional Budget Office assumed that the B-1B force would be retired over a 1-year period beginning immediately, resulting in smaller savings for fiscal year 1997. The Air Force currently has 27 aircraft in reconstitution reserve that lack aircrews and funding for operations. Beginning in fiscal year 1997, the Air Force will begin to reduce the number of unfunded reconstitution reserve aircraft and will establish two new operational B-1B squadrons by using the aircraft that are currently in reconstitution reserve and funding additional aircrews and flying hours. The Air Force has included the cost of upgrading reconstitution reserve aircraft in the B-1B Conventional Munitions Upgrade Program estimated to cost $2.3 billion from fiscal years 1996 through 2008. 
If DOD perceives that the risks to retire the entire B-1B fleet outweigh the savings that could be realized, it could choose to retire 27 reconstitution reserve B-1Bs and keep 68 B-1Bs in the force, 60 of which would be funded for combat operations or training. Retiring 27 of DOD’s 95 B-1Bs would mean that DOD would have to accept some decrease in long-range capability and may not be able to strike some of the ground targets DOD planners have identified for two major regional conflicts as quickly as it could with a larger bomber force. However, this option would not result in as much of a loss in capability as retiring the entire B-1B fleet. If 27 B-1Bs were retired, DOD would still have numerous other combinations of platforms and weapons to attack the types of targets that the B-1B is planned to destroy, and DOD would retain the ability to attack ground targets associated with two major regional conflicts. In comparison with retiring all 95 B-1Bs, this option would provide the CINCs with more flexibility in planning air campaigns and basing aircraft in theater, since B-1Bs would be based somewhat farther away from the theater of operations and would not require refueling during a typical wartime mission, unless operating from the United States. This option would also provide some B-1Bs that could fly with tactical aircraft to provide massive firepower during the early phase of an air campaign. Retiring 27 B-1Bs would have no impact on DOD’s ability to fulfill its nuclear mission. Retiring the 27 B-1Bs in reconstitution reserve would save about $450 million in budget authority for fiscal years 1997-2001, according to the Congressional Budget Office. Table 5.4 identifies the annual savings for this option. Recognizing that reconstitution reserve aircraft place an increased maintenance workload on the squadron, the Air Force has authorized and funded four additional maintenance personnel per reconstitution reserve aircraft. 
Savings in the near term reflect the immediate termination of these positions. Savings increase significantly in 2000 because DOD would not establish two additional operational squadrons and could eliminate the personnel and flying-hour costs associated with these aircraft. Retiring 27 B-1Bs also would save procurement funds since DOD would upgrade only 68 B-1Bs for the conventional mission instead of 95 B-1Bs. However, the Congressional Budget Office did not include these savings in its estimate because the upgrades will occur beyond 2001. Placing more B-1Bs in the Air National Guard is an option that could reduce the cost to maintain DOD’s bomber force while preserving the war-fighting capability of DOD’s planned bomber force. By fiscal year 1998, the Air Force will have 18 B-1Bs in the Air National Guard fully trained in the conventional role and able to deploy for wartime operations. B-1Bs will no longer have a nuclear role in the near future, thus making the transfer of B-1Bs to the Air National Guard somewhat easier than transferring B-52s to the Air Force Reserve. According to DOD, the Air Force Reserve and Air National Guard have successfully met the challenges of operating fighter, transport, and tanker aircraft and should be able to readily adapt to the bomber mission. Placing 24 more B-1Bs in the Air National Guard would save about $70 million in budget authority for fiscal years 1997 to 2001. We examined placing 24 more B-1Bs in the Air National Guard because it would achieve a 50/50 active/reserve ratio when attrition and backup aircraft are excluded, and because the Air Force has placed 50 percent or more of some refueling and air mobility assets in the reserve component. Greater cost savings could be achieved by placing a higher percentage of the B-1B force in the Air National Guard. 
However, active Air Force and Air National Guard officials stated that placing the entire B-1B force in the National Guard would not be advisable because the reserve component relies on active-duty units to develop tactics and provide a pool of trained labor. For example, more than 98 percent of the reserve components’ pilots and over 70 percent of their maintenance specialists have prior active service experience, according to a RAND study on reserves. On the basis of our review of DOD analyses and other studies that have examined the active/reserve mix, we believe that transferring additional B-1Bs to the Air National Guard is not likely to degrade combat effectiveness. In 1993, DOD reported to the Congress that placing B-1Bs in the Air National Guard would result in no loss of war-fighting capability. Moreover, according to RAND, air reserve combat units appear to have readiness similar to active-duty units. For example, during Desert Storm, no post-mobilization validation or significant additional training was required prior to deploying reserve component tactical fighter units. Also, many air reserve units are required to be ready to deploy within the same time as active units based in the continental United States. Air Force officials cited the Air National Guard’s limited experience with the B-1B mission as one of the key reasons the Air Force decided to place only 18 B-1B bombers in the Air National Guard instead of assigning a larger percentage of the force to the Guard. Also, one Air Force official stated that one disadvantage of placing more B-1Bs in the Air National Guard is the risk that presidential call-up of the reserves could be delayed. According to this official, this concern has led CINCs to plan on deploying active combat aircraft units before reserve units, even though reserve units are often required to maintain a capability to mobilize within the same number of days as active units. 
For example, during Desert Storm, the Air Force met most of its requirements for combat aircraft first with active units, then with reserve units. A major benefit of transferring bombers to the reserve component is that reserve units have traditionally been less expensive to operate than their active duty counterparts. The decision to assign B-1B bombers to the Air National Guard was supported by cost model comparisons and cost-benefit analyses. DOD’s analysis, which was completed in 1993, showed that a B-1B Air National Guard squadron consisting of 10 aircraft would cost less to operate than a comparable active squadron. These savings are attributable to two factors. First, DOD expects that an Air National Guard squadron will require fewer flying hours than an active squadron because Air National Guard units are able to recruit more experienced pilots who require less frequent training to maintain their proficiency. Second, in comparison with active squadrons that consist primarily of active military personnel, Air National Guard units rely heavily on less-costly civilians and part-time guard personnel, so their personnel costs are lower. Placing an additional 24 B-1Bs in the Air National Guard, thereby achieving a 50/50 active/reserve ratio when attrition and backup aircraft are excluded, would result in a cost savings of about $70 million in budget authority for fiscal years 1997-2001, according to the Congressional Budget Office. Table 5.5 identifies the annual savings associated with this option. In developing its estimate, the Congressional Budget Office assumed that one additional Air National Guard unit consisting of eight aircraft would be started in fiscal year 2000 and two additional units would be started in 2001. Savings shown for 2001 would recur annually beyond the years shown. 
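The 50/50 ratio described above can be reconstructed from the aircraft counts in the text. In the sketch below, the 18 aircraft already programmed for the Guard and the 24 additional transfers are report figures; the 84-aircraft fleet remaining after attrition and backup aircraft are excluded is our inference from the stated ratio, not a number given in the report.

```python
# Active/reserve mix implied by this option. The 18 aircraft already in the
# Guard and the 24 additional transfers come from the text; the 84-aircraft
# counted fleet is an inference from the stated 50/50 ratio.
total_b1b = 95
guard_now = 18
additional_transfers = 24
guard_after = guard_now + additional_transfers        # 42 aircraft in the Guard

counted_fleet = 2 * guard_after                       # 84, if the ratio is exactly 50/50
attrition_and_backup = total_b1b - counted_fleet      # 11 aircraft excluded
print(guard_after, counted_fleet, attrition_and_backup)
```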
Although there would be some costs associated with starting up new Air National Guard units, these costs could be kept to a minimum if the units are located at the same bases as active duty bomber units, as DOD suggested in its 1993 report to the Congress on transferring bombers to the reserve component. This has occurred at Barksdale Air Force Base in Louisiana where the Air Force has located a B-52H Air Force Reserve squadron alongside active B-52H units. The Air Force plans to move a detachment of six B-1Bs currently located at Ellsworth Air Force Base in South Dakota to Mountain Home Air Force Base in Idaho so that the detachment will be collocated with the 366th Wing, one of the Air Force’s three composite wings. Keeping these six aircraft at Ellsworth would result in no measurable loss of capability and would enable DOD to save about $40 million. Leaving these six B-1Bs at Ellsworth also would eliminate potential difficulties in operating from Mountain Home that could occur over the next few years if the Air Force moves the aircraft as planned before construction of permanent facilities has begun. Force projection composite wings are a significant change from the Air Force’s traditional peacetime basing and wartime employment of aircraft. Traditionally, the Air Force has based one type of aircraft in a wing to achieve economies of specialization. In wartime, the Air Force assembles the needed mix of aircraft as a composite force package en route to a target. By permanently collocating different types of aircraft under one commander, the Air Force intends that force projection composite wings can deploy rapidly and fight autonomously, if necessary. According to the Air Force, moving the B-1Bs to Mountain Home Air Force Base will improve the operational readiness of the 366th Wing by providing more opportunities for B-1B crews to train with other wing assets, including F-15s and F-16s. 
However, the Air Force has not demonstrated that composite wings provide significant benefits over traditional basing schemes. In 1993, we reported that the Air Force did not conduct sufficient analysis before deciding to build force projection composite wings in the United States and that evidence does not exist that these wings will achieve significant advantages when compared with traditional peacetime basing concepts. The Air Force’s experience in establishing a wartime composite wing at Incirlik Air Base, Turkey, during the Gulf War demonstrated that the advantages attributed to force projection composite wings can be achieved without permanent collocation of aircraft. In addition, the three force projection composite wings the Air Force has established still need to train and deploy with specialized aircraft gained from different bases and commanders. Finally, opportunities for composite training by force projection wings could be limited by competing priorities and range restrictions. The Air Force acknowledges that the Mountain Home Air Force Base training range is incapable of supporting large-scale composite force training. Larger ranges are available in Utah and Nevada that can accommodate these exercises; however, using these ranges requires additional flying time and fuel. The Air Force plans to move the B-1Bs to Mountain Home during fiscal years 1996 and 1997, before funds to construct permanent facilities are approved. The unit will be housed in temporary facilities until permanent facilities are completed several years later. During the intervening years prior to the completion of permanent facilities, the B-1B squadron at Mountain Home will be dependent on maintenance and munitions support from Ellsworth Air Force Base. Turnaround times for replacement or repairs of spare parts could increase due to the need to transport reparables between the two locations. 
In addition, the unit at Mountain Home Air Force Base will have very limited combat munitions loading capability until sometime after the year 2000 when munitions storage facilities are completed. If tasked with a wartime mission during this period, B-1Bs based at Mountain Home would either deploy to an in-theater forward operating location without munitions or fly to Ellsworth to be loaded with munitions before deploying to theater. The Air Force estimates that temporary and permanent facilities at Mountain Home will cost about $40 million to construct. The Air Force has programmed about $6 million in operations and maintenance funds to provide temporary facilities in fiscal year 1996 and plans to obligate these funds shortly. In addition, the Air Force funded $34 million in the fiscal year 1997 budget for military construction of permanent facilities for maintenance, operations, and housing. It does not expect construction of these facilities to be complete until sometime after the year 2000. Table 5.6 identifies the annual savings for this option. Although funding for additional B-2s is not included in DOD’s plan, DOD and the Congress have in recent years considered the need for B-2s beyond DOD’s planned force of 21. Proponents of buying additional B-2 bombers argue that DOD needs more than the 187 bombers it plans to keep in the force because BUR stated that the United States may need 100 bombers for a major regional conflict and DOD may need to swing bombers from one theater to another if a second major regional conflict arose. However, on the basis of the analysis conducted during the 1995 DOD Heavy Bomber Force Study and affordability concerns, DOD determined in May 1995 that it should not procure additional B-2s. In early 1996, the President directed that the issue of more B-2s be reexamined. DOD will examine the potential contribution of B-2s further as part of its Deep Attack Weapons Mix Study, scheduled for completion in early 1997. 
While our options for retiring or reducing the B-1B force would achieve significant savings, these savings would be eliminated if DOD procured additional B-2s. Substantial future costs could be avoided if the current B-2 force were capped at 21 as DOD currently plans. Moreover, additional B-2 procurements would make it more difficult for DOD to develop and implement a long-term recapitalization plan. In October 1995, the Chairman of the Joint Chiefs of Staff stated that he, along with the CINCs and Joint Chiefs, continues to strongly recommend against congressional action to provide additional funding for more B-2s because the military has much higher priorities on which to spend limited procurement dollars. As shown in figure 5.1, life-cycle cost estimates for 20 additional B-2s developed by government agencies, IDA, and Northrop Grumman range from $18.7 billion to $26.8 billion. Our analysis of DOD’s airpower capabilities suggests that DOD may be able to eliminate some of its planned capabilities, rather than carry through with all of the planned upgrades or expand beyond its existing plans by procuring additional systems such as more B-2s. For example, our report on interdiction concluded that DOD has ample capability today to destroy interdiction targets associated with two major regional conflicts and questioned the need for some planned improvements to DOD’s interdiction capability given the amount of redundancy that exists today. Some B-2 advocates also argue that procuring 20 more B-2s will save money because B-2s will be able to penetrate defenses and use low-cost, short-range attack weapons rather than expensive standoff weapons. However, in 1995, the Congressional Budget Office found that additional B-2s would reduce the cost of weapons expended by the bomber force by less than $2 billion during the first 2 weeks of a conflict, when the Air Force envisions bombers would make their greatest contribution. 
This is a small fraction of the $26.8-billion life cycle cost that the Congressional Budget Office projects that an additional 20 B-2s would cost. Within the past few years, several studies sponsored by industry, independent think tanks, and federally funded research and development centers have analyzed the need for more B-2s. Many of the studies that advocate procuring more B-2s assume that the B-2 will be a highly stealthy aircraft that will be able to find mobile targets and react quickly to changes in air defenses. However, as discussed in chapter 3, the B-2 has not yet demonstrated some of its essential mission capabilities, including the extent to which it will be able to evade detection by enemy radar. Moreover, unless upgraded beyond the block 30 configuration, B-2s would have to rely on other sensors to tell them where to look and would have trouble adjusting to rapid changes in threat. Many of these studies also assume that conflicts would happen without warning and, therefore, tactical aircraft will not be available in large numbers. In contrast, DOD’s Heavy Bomber Force Study, which concluded that procuring additional B-2s would not be cost-effective compared with the planned bomber forces, assumed that significant numbers of tactical aircraft would be available at the outset of a conflict, thereby reducing the potential contribution of B-2s. In conducting the Heavy Bomber Force Study, IDA reviewed a number of studies that advocate procuring more B-2s and concluded that the differences in the studies are due primarily to differences in assumptions, particularly those regarding warning time and the availability of tactical aircraft. The assumptions used by IDA are generally consistent with those used in DOD’s BUR, the Defense Planning Guidance, and the Joint Staff’s Nimble Dancer wargame. 
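The "small fraction" characterization above can be made concrete using only the dollar figures quoted in the text.

```python
# How the CBO's weapons-cost offset compares with the life-cycle cost of
# 20 additional B-2s (billions of dollars, figures from the text).
weapons_savings_max = 2.0        # "less than $2 billion" in cheaper weapons expended
life_cycle_cost_cbo = 26.8       # CBO's life-cycle estimate for 20 more B-2s
life_cycle_cost_low = 18.7       # low end of the estimates shown in figure 5.1

upper_bound_fraction = weapons_savings_max / life_cycle_cost_cbo
print(f"Offset: about {upper_bound_fraction:.1%} of the CBO life-cycle estimate")
# Even against the lowest life-cycle estimate, the offset stays under 11 percent.
print(f"Offset: about {weapons_savings_max / life_cycle_cost_low:.1%} of the lowest estimate")
```

Even taking the full $2 billion and the lowest life-cycle estimate, the weapons-cost offset covers roughly a tenth of the cost of 20 additional aircraft.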
In addition, DOD has concluded that additional B-2s are not needed to meet future nuclear war-fighting requirements, particularly in view of the warhead limits included in START II. DOD’s Nuclear Posture Review, completed in 1994, concluded that 66 B-52Hs and 20 B-2 bombers would provide sufficient capability for the nuclear leg of the strategic triad, assuming implementation of START I and II agreements by 2003. START II, once implemented, will limit the United States to 3,500 nuclear warheads, of which about 1,320 are planned for the bomber force. Even with DOD’s planned force of 21 B-2s and 71 B-52Hs, the Air Force will be required to modify some B-52Hs so that they can carry fewer warheads to stay within the 1,320 limit allocated to the bomber force. More specifically, some B-52H bombers may be modified so that they can carry only 12 nuclear weapons under the wings instead of the maximum of 20 (12 under the wings and 8 inside the bomb bay). If START II is implemented, procuring 20 additional B-2s would require further changes in the B-52H force, which could be achieved either by reducing the size of the force or modifying more B-52Hs so that they can carry fewer weapons. Considering the extensive and improving ground-attack capabilities of U.S. forces, the numerous other options that DOD has to attack most targets that the B-1B is likely to be assigned in future conflicts, and DOD’s awareness that it may need to reduce the number of systems currently planned to ensure a stable, modernized force for the future, we believe that retiring the B-1B force is an option that merits consideration in the context of DOD’s ongoing assessment of its future airpower needs. Retiring the B-1B force would leave DOD with a bomber force of 71 B-52s and 21 B-2s that seems small by Cold War standards. 
However, DOD’s decision about what forces to keep in the post-Cold War era should be based on keeping the most cost-effective combination of weapon systems needed for a particular mission rather than on a separate examination of requirements for each type of platform in the services’ inventory. When compared with the B-52H and B-2 bombers (which will continue to have a nuclear role in the future) and tactical aircraft that contribute both ground-attack and air-to-air capability, the B-1B appears to be a logical candidate for retirement. Its role will be limited to adding to DOD’s already formidable ground-attack capabilities. For these reasons, it seems questionable whether upgrading the B-1B’s capabilities at a cost of about $2.8 billion and spending close to $1 billion per year to maintain the B-1B in the force will have a significant payoff. If DOD were to retire the B-1B force, it would not be necessary to procure additional B-2s to offset the loss of the B-1B’s capabilities. Doing so would only exacerbate DOD’s difficulties in achieving a long-term balance between near-term readiness and recapitalization. If DOD and the Congress determine that the B-1B should not be retired, other options exist for reducing the costs of the bomber force that would preserve much or all of DOD’s current bomber force capabilities. Retiring the 27 B-1Bs currently classified as reconstitution reserve aircraft, placing more B-1Bs in the Air National Guard, or canceling the planned move of six B-1Bs to Mountain Home Air Force Base would result in savings while enabling DOD to preserve the CINCs’ capability to draw on a wide range of assets in planning wartime operations. In particular, placing more B-1Bs in the Air National Guard would save significant operations and support costs but would have little impact on DOD’s overall bomber capabilities. 
Moreover, at a time when DOD is seeking to reduce its infrastructure costs, reversing the Air Force's decision to expand the number of B-1B bases would avoid the need for $40 million in military construction. DOD's ongoing Deep Attack Weapons Mix Study is designed to determine the most cost-effective mix of systems needed for the deep attack mission. Given the challenges of long-term recapitalization of the force, we recommend that the Secretary of Defense consider options to retire or reduce the B-1B force as part of this study. Regarding the other two B-1B options, we recommend that the Secretary of the Air Force assess the potential to place more bombers in the reserve component and reexamine the decision to relocate six B-1B bombers to Mountain Home Air Force Base. In written comments on a draft of this report, DOD partially concurred with one recommendation and did not concur with the other one. DOD partially concurred with our recommendation to include options to retire or reduce the B-1B force in the Deep Attack Weapons Mix Study but disagreed with some of our analysis supporting the recommendation. DOD also stated that it plans to consider a number of force structure options as part of its analysis, including retiring the B-1Bs. DOD stated that we used the Nimble Dancer wargame to support a number of conclusions about bomber effectiveness but that the wargame was never intended to provide specific information about the effectiveness of selected weapons systems across a broad range of scenarios. We agree that the Nimble Dancer wargame was not designed to provide a cost-effectiveness comparison of weapon systems and we did not use it in that manner. We used Air Force modeling of the air campaign for two major regional conflicts, which was provided to the Joint Staff as input to the Nimble Dancer wargame, to show that targets assigned to the B-1B were not unique to the B-1B.
Results from the modeling were only one factor we considered in reaching our conclusions. We point out in the report that DOD has numerous and overlapping capabilities to strike ground targets and has not adequately supported its stated requirements for bombers. Given that DOD has stated that it cannot afford all of its planned modernization efforts and that the B-1B will require billions of modernization dollars, we believe that options to retire or reduce the B-1B force should be included in the Deep Attack Weapons Mix Study. DOD also stated that the draft report implied that the next generation of precision-guided munitions will be such a large force multiplier that they provide justification for retiring the B-1B now and that there is insufficient evidence to support this assertion. DOD acknowledges, however, that precision munitions are a fundamental enhancement to combat effectiveness. We noted that completion of bomber modifications and fielding of many new precision weapons for use by all attack aircraft should greatly improve bomber and fighter effectiveness, potentially reducing the number of bombers and fighters needed to fight two major regional conflicts. The February 1996 Presidential redirection of the Deep Attack Weapons Mix Study also highlights the potential of future precision munitions. The redirection states that part two of the study will focus on the potential that the growing inventory and increasing capabilities of weapons could allow some consolidation of the ships, aircraft, and missiles that will deliver these weapons. It also states that the potential reduction in sorties required for deep attack missions could produce opportunities for appropriate force structure and platform tradeoffs. DOD has recognized that it cannot afford all of the modernization programs currently planned and must make difficult decisions on which programs to terminate or reduce. The Deep Attack Weapons Mix Study should help DOD with these decisions.
Inclusion of B-1B options will provide DOD with the opportunity to assess the cost effectiveness of the B-1B prior to committing billions of dollars to upgrade the aircraft. Although DOD's written comments state that B-1B options are already included in the Deep Attack Weapons Mix Study, DOD officials stated in an exit conference that the list of options has not been finalized. They also told us that time constraints may limit the number of options that will be considered in the study and therefore some will probably be eliminated. Therefore, we still recommend that the B-1B options be included in the study. DOD did not agree with the recommendation that the Secretary of the Air Force assess the potential to place more bombers in the reserve component and reexamine the decision to relocate six B-1Bs to Mountain Home Air Force Base. DOD said that it evaluates the active/reserve mix annually during the budgetary process and believes it has the right bomber mix in place. DOD noted that the majority of the bomber force will most likely be required to strike targets on the first days of a conflict and that the call-up and mobilization requirements for reserves may stress reserve units' capacity to respond within time constraints. RAND reported in 1993 that the Air Force reserve components train to readiness requirements similar to those of their active counterparts. Additionally, in responding to the congressional inquiries concerning the initial transfers of bombers to the reserves, the Air Force stated that such transfers would not adversely impact war-fighting capability. DOD already relies heavily on the reserve components to provide time-critical airlift and refueling aircraft. The reserve component operates over 50 percent of some types of these aircraft. Given the potential cost savings that could accrue, we continue to believe that DOD should reassess the potential to place more bombers in the reserve component.
With respect to relocating B-1Bs to Mountain Home Air Force Base, DOD stated that the move would eliminate lost training opportunities, additional flying hours, and temporary duty expenses incurred with the bombers stationed at Ellsworth Air Force Base. We still believe that the Air Force should reexamine the decision to move B-1Bs to Mountain Home Air Force Base. We previously reported that DOD has not demonstrated that the benefits associated with the composite wing concept outweigh the additional cost to maintain very small numbers of dissimilar aircraft at the same location compared with the traditional basing concept. Also, for several years after the move, the B-1B unit will be housed in temporary facilities until construction of permanent facilities is completed; remain dependent on maintenance support from Ellsworth Air Force Base; incur additional temporary duty and freight costs to accommodate maintenance; and remain dependent on other locations for wartime bomb loading support in the event deployments are necessary.
Pursuant to a congressional request, GAO assessed the: (1) basis for the Department of Defense's (DOD) bomber force requirements; (2) Air Force's progress in implementing the new conventional concept of operations for using bombers; and (3) costs to keep bombers in the force and enhance their conventional capabilities. GAO found that: (1) DOD based its decision to retain and upgrade 187 bombers on three studies that had significant limitations in their methodology, used questionable assumptions, and failed to examine less costly alternatives; (2) service commanders in chief, who expected to use fewer aircraft than recommended by the three studies, did not express concern that a smaller number of bombers would adversely affect their abilities in future conflicts; (3) the Air Force's bomber modernization program has experienced testing delays, has yet to demonstrate that bombers meet some of the most important mission requirements, and has not fully detailed bomber upgrades; (4) the total cost to modernize DOD's heavy bomber force is likely to exceed $7 billion by 2008; and (5) options that would help DOD to reduce bomber costs while maintaining extensive conventional ground-attack capability include retiring the B-1B force, retiring the 27 B-1Bs in the reconstitution reserve, placing additional B-1Bs in the Air National Guard, and consolidating basing for active B-1Bs.
Under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), when state capabilities and resources are overwhelmed, the governor of an affected state can request a presidential declaration of a major disaster or emergency and, once the President of the United States issues the declaration, can receive assistance from the federal government. Additionally, under the Economy Act, a federal agency may request the support of another federal agency, including DOD, without a presidential declaration of a major disaster or an emergency. The federal government's response to major disasters and emergencies in the United States is guided by the National Response Framework, a national-level guide on how local, state, and federal governments respond to major disasters and emergencies. The DHS interim National Cyber Incident Response Plan outlines domestic cyber-incident response coordination and execution among federal, state and territorial, and local governments, and the private sector. Overall coordination of federal incident-management activities is generally the responsibility of DHS. DOD supports the lead federal agency in the federal response to a major disaster or emergency. When the appropriate DOD official approves a lead federal agency's request to provide support to civil authorities for domestic disasters or emergencies, DOD may provide capabilities and resources, including those drawn from the National Guard. DOD provides these capabilities and resources through DSCA, which the department defines as support provided by U.S. federal military forces, DOD civilians, DOD contract personnel, DOD component assets, and National Guard forces (when the Secretary of Defense, in coordination with the governors of the affected states, elects and requests to use those forces in Title 32 status) in response to requests for assistance from civil authorities for domestic emergencies, law enforcement support, and other domestic activities, or from qualifying entities for special events.
The National Guard, which comprises Army and Air National Guard units, is located in the 50 states, three U.S. territories, and the District of Columbia. The National Guard has both federal and state-level missions, making it unique among U.S. military organizations. Its federal mission, which is executed under the control of the President of the United States and the Secretary of Defense, includes maintaining well-trained and well-equipped units that are ready to be mobilized and, when mobilized, to execute military missions in support of the full spectrum of DOD missions, including, but not limited to, warfighting, contingency operations, defense security cooperation activities, and DSCA during national emergencies, major disasters, insurrections, and civil disturbances. Its state-level mission, which is executed under the control of state and territorial governors (or the President for the District of Columbia), is to protect life and property and preserve peace, order, and public safety. This mission involves providing emergency relief support during local or statewide emergencies, such as riots, earthquakes, floods, or terrorist attacks. National Guard unit personnel may operate in a Title 10 status, a Title 32 status, or a state active-duty status. Personnel in a Title 10 status are federally funded and under the command and control of the President. Personnel in a Title 32 status are federally funded but under the command and control of the governor.
National Guard personnel could support DOD's DSCA mission while in a Title 10 or Title 32 status. The Secretary of Defense, in coordination with respective state governors, determines the most appropriate duty status for National Guard personnel when providing federal support during disasters and emergencies, including cyber support. Separately, National Guard personnel could also support the state's civil authorities in a state active-duty status. Personnel in a state active-duty status are under the command and control of the governor and are state funded. Under state active-duty status, the National Guard can be used for state purposes in accordance with the state constitution and statutes, and the respective state is responsible for National Guard expenses. The National Guard Bureau is a joint organization of DOD that, by law, is the channel of communications on all matters pertaining to the National Guard between (a) the Departments of the Army and the Air Force and (b) the states. In addition, according to DOD Directive 5105.77, National Guard Bureau (NGB), the bureau is the focal point at the strategic level for non-federalized National Guard matters that are not the responsibility in law or DOD policy of the Secretary of the Army, the Secretary of the Air Force, or the Chairman of the Joint Chiefs of Staff. The directive also states that the bureau supports force employment matters pertaining to homeland defense and DSCA missions by advising the Chairman of the Joint Chiefs of Staff on the activities of the National Guard as they relate to those missions. Specifically, according to the directive, the bureau prescribes training requirements; plans, programs, and administers the budget; and implements guidance on the structure of the Army National Guard of the United States and the Air National Guard of the United States.
In its 2014 cyber mission analysis report, DOD reported that the National Guard is well-positioned to offer its expertise and support to states in traditional missions like natural disasters as well as less traditional missions in cyberspace. Further, the Chief of the National Guard Bureau in his 2017 National Guard Bureau Posture Statement reported that the National Guard is uniquely postured to provide cyber capabilities and that its cyber capacity will play an integral role in coordinating with state and federal cyber professionals. In May 2016, DOD issued a Deputy Secretary of Defense policy memorandum that provides guidance on (a) coordinating, training, advising, and assisting cybersecurity support and services that DOD—including National Guard units—could provide to civil authorities incidental to military training, and (b) a state’s use of DOD networks, hardware, and software for state cybersecurity activities. Exercises are training events that, according to the 2008 National Response Framework, can play an instrumental role in preparing organizations to respond to an incident by providing opportunities to test response plans, evaluate response capabilities, assess the clarity of established roles and responsibilities, and improve proficiency in a simulated, risk-free environment. Short of performance in actual operations, exercises provide the best means to assess the effectiveness of organizations in achieving mission preparedness. Exercises provide an ideal opportunity to collect, develop, implement, and disseminate lessons learned and to verify corrective action taken to resolve previously identified issues. Sharing positive experiences reinforces positive behaviors, doctrine, tactics, techniques, and procedures, while disseminating negative experiences highlights potential challenges in unique situations or environments or identifies issues that need to be resolved. 
According to the 2008 National Response Framework, well-designed exercises improve interagency coordination and communications, highlight capability gaps, and identify opportunities for improvement. There are various types of exercises ranging from tabletop exercises that involve key personnel discussing simulated scenarios in informal settings to full-scale response exercises that include many agencies, jurisdictions, and disciplines. In addition to different types of exercises, there are different complexities or focus areas for exercises, such as tiers of exercises identified by numbers 1 through 4. For example, DOD units are to conduct tier 4 training to focus on unit policy and joint and service doctrine linked to unit mission-essential tasks. However, for more complex training situations, DOD is to conduct tier 1 exercises that are designed to prepare national-level organizations and combatant commanders and staffs at the strategic and operational level to integrate interagency, non-governmental, and multinational partners in highly complex environments. Also, the goal of tier 1 exercises is to integrate a diverse audience in a joint training environment and identify core competencies, procedural disconnects, and common ground to achieve U.S. unity of effort. The National Guard in the 50 states, three territories, and the District of Columbia have capabilities that could be used—if requested and approved—to perform DOD or state missions to support civil authorities in a cyber incident. National Guard cyber capabilities, according to DOD officials, vary among states, territories, and the District of Columbia based on their differences in funding and prioritization.
National Guard officials told us that National Guard units are in a unique position to recruit and retain individuals who have significant cyber expertise based on their full-time positions outside of the military and can coordinate with state authorities and critical infrastructure owners within their respective states. Based on our review of DOD reports, National Guard guidance documents, and our interviews with National Guard officials, we found three types of cyber capabilities that exist within the National Guard:

Communications directorates: The National Guard has a communications directorate within each state, territory, and the District of Columbia that operates and maintains that state's part of the National Guard information network called GuardNet. In this capacity, the directorate conducts information assurance, information operations, and internal defensive activities. The size of each National Guard unit's communications directorate varies by state, territory, and federal district. For example, Nevada National Guard officials told us that there were 26 full-time Army National Guard personnel staffed to their communications directorate in fiscal year 2016. Also, Maryland National Guard officials told us that there were 30 full-time personnel staffed to their Army National Guard communications directorate in fiscal year 2016. According to National Guard officials, personnel who work within a communications directorate, if requested and approved, could support a DSCA mission in a cyber incident. For example, Washington National Guard officials told us that their communications directorate's cyber personnel can conduct vulnerability assessments, support cyber recovery efforts, provide cyber incident response support, and provide cyber and communication capabilities during cyber-related emergencies.
Further, National Guard officials told us that the Georgia, Washington, and Maryland National Guard units have developed partnerships with state agencies and local governments to provide cybersecurity support.

Computer network defense teams: The National Guard has computer network defense teams in each state, three territories, and the District of Columbia with a mission to protect National Guard information systems against cyber threats within the respective state, territory, or federal district. According to the 2015 Concept of Operations Army National Guard Computer Network Defense Team (CND-T), the teams could serve as first responders for states for cyber emergencies and may provide surge capacity to national capabilities. For example, Colorado National Guard officials told us that their computer network defense team—if requested—could provide cyber capabilities to support civil authorities. In preparation for such a request, the team developed a planning document that identified specific cyber capabilities—such as cyber analysis, threat assessment, and incident response—that the team could provide to civil authorities for a cyber-related emergency or incident. Georgia National Guard officials also told us that their computer network defense team's primary mission is to provide direct cybersecurity support to the network enterprise center, and the team also conducts cybersecurity assessments, incident response, network analysis, and forensic support. As of October 2015, 50 states, three territories, and the District of Columbia had computer network defense teams that ranged from 1 to 23 personnel.

National Guard cyber units: The Army and Air Force are in the process of setting up National Guard units with cyber capabilities to support U.S. Cyber Command's missions. For example, the Army and the Air Force are planning to establish National Guard cyber protection teams to conduct defensive cyberspace operations.
National Guard officials stated that while the Army National Guard and Air National Guard are approaching the organization of the teams differently, both sets of teams will have capabilities that could support civil authorities in a domestic cyber incident. Specifically, within the Army National Guard, the Army has 1 full-time cyber protection team in place and is developing 10 part-time cyber protection teams that would conduct defensive cyberspace operations and could support DSCA missions if called upon. Also, within the Air National Guard, the Air Force plans to develop 2 full-time cyber protection teams that will be filled by 12 Air National Guard units on a rotational basis that would support U.S. Cyber Command to defend DOD networks and would be available as surge capacity in a cyber incident. According to the 2017 National Guard Bureau Posture Statement, the National Guard will activate the cyber teams by the end of fiscal year 2019. According to National Guard officials, the cyber protection teams are authorized to have 39 personnel each, and Georgia National Guard officials told us that in fiscal year 2016 their cyber protection team had 34 assigned personnel. In addition to the cyber protection teams, the Air National Guard has 3 additional cyber units whose mission—as members of U.S. Cyber Command's national mission teams—is to stop cyber attacks and malicious cyber activity of significant consequence against the United States. The Air National Guard also has 7 cyber intelligence, surveillance, and reconnaissance units whose mission is to produce tailored all-source intelligence products that enable cyberspace operations. In addition, according to DOD's cyber mission analysis report, the Virginia National Guard Data Processing Unit, when activated, conducts cyberspace operations in support of U.S. Cyber Command and other organizations.
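The three capability types described above could, for illustration, be captured in a simple inventory record of the kind a capability database might hold. Everything below (field names, service labels, the query) is hypothetical and not drawn from any DOD system; only the personnel counts come from figures officials cited:

```python
from dataclasses import dataclass

# Hypothetical record layout for a National Guard cyber-capability
# inventory; field names and service labels are illustrative only.
@dataclass
class CyberCapability:
    state: str
    unit_type: str      # e.g., "communications directorate",
                        # "computer network defense team",
                        # "cyber protection team"
    personnel: int
    services: list      # capabilities the unit could offer civil authorities

# Example entries using personnel figures cited in the report text.
inventory = [
    CyberCapability("Nevada", "communications directorate", 26,
                    ["information assurance", "internal defense"]),
    CyberCapability("Maryland", "communications directorate", 30,
                    ["information assurance", "internal defense"]),
    CyberCapability("Georgia", "cyber protection team", 34,
                    ["defensive cyberspace operations", "incident response"]),
]

# A query of the kind a responder might need during an incident:
responders = [c.state for c in inventory
              if "incident response" in c.services]
print(responders)
```

A structure along these lines, kept current, would let DOD filter quickly for units offering a needed capability rather than querying states one at a time.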
DOD has not identified and does not have full visibility into National Guard cyber capabilities that could support civil authorities during a cyber incident. As noted in DOD’s 2013 Strategy for Homeland Defense and Defense Support of Civil Authorities, DOD is often expected to play a prominent supporting role in responding to a disaster and to rapidly and effectively harness resources to respond to civil-support requests in the homeland. According to the strategy, an effective response will require, among other things, better linking of established federal and state capabilities. DOD does not have visibility into all National Guard units’ cyber capabilities because the department has not maintained a database that identifies National Guard cyber capabilities that could support civil authorities during a cyber incident. Section 1406 of the John Warner National Defense Authorization Act for Fiscal Year 2007 requires that DOD identify National Guard emergency response capabilities. Specifically, the section requires that the Secretary of Defense maintain a database of emergency response capabilities that includes the following: (1) the types of emergency response capabilities that each state’s National Guard, as reported by the states, may be able to provide in response to a domestic natural or manmade disaster, both to their home states and under state-to-state mutual assistance agreements; and (2) the types of emergency response capabilities that DOD may be able to provide in support of the National Response Plan’s emergency support functions, and identification of the units that provide these capabilities. Initially during our review, National Guard Bureau officials identified two systems that the bureau traditionally uses to identify some National Guard capabilities—the Defense Readiness Reporting System and the Joint Information Exchange Environment. 
However, National Guard officials acknowledged that neither of these systems fully or quickly identified National Guard cyber capabilities that could be used to support civil authorities in a cyber incident. For example, according to National Guard Bureau officials, the Defense Readiness Reporting System was designed to identify the capabilities associated with National Guard units' federal missions; however, since some National Guard capabilities, such as computer network defense teams, were established to support state and local governments and do not have a federal mission, the Defense Readiness Reporting System will not report or identify these capabilities. Additionally, National Guard Bureau officials told us that they have used the Joint Information Exchange Environment system to query National Guard units for specific capabilities; however, the officials acknowledged that the query approach takes time that might not be available during a cyber incident. National Guard Bureau officials also told us that these systems were not designed to identify National Guard unit cyber capabilities and that neither of the systems was established or designed for the purposes described in section 1406. DOD officials, including National Guard Bureau officials and officials from two state National Guard units we interviewed, acknowledged that DOD has not maintained a database that would allow the department to fully and quickly identify existing cyber capabilities of all National Guard cyber units. Without such a database, DOD may not have timely access to these capabilities when requested by civil authorities during a cyber incident. From fiscal years 2013 through 2015, DOD conducted or participated in 9 exercises that were designed to explore the application of policies for supporting civil authorities or to test the response to simulated attacks on cyber infrastructure owned by civil authorities.
Of these 9 exercises, DOD conducted 7 exercises and participated in 2 non-DOD hosted exercises. Table 1 shows the 7 exercises that DOD components conducted during the time period of our review—fiscal years 2013 through 2015. The exercises explored how the department would provide assistance to civil authorities during or after a cyber incident. For example:

U.S. Cyber Command's Cyber Guard—The command conducted exercises in fiscal years 2013, 2014, and 2015 to explore the ability of DOD, other federal agencies, and the private sector to respond in cyberspace to a destructive or disruptive attack of significant consequence on U.S. critical infrastructure. In the 2015 Cyber Guard exercise, DOD participants supported DHS network defense as part of a simulated DSCA response. National Guard teams also conducted activities to coordinate, train, advise, and assist civil authorities in a state active-duty status. The exercises also included legal and policy tabletop review sessions to explore legal and policy issues related to a national response to significant domestic cyberspace incidents.

Army National Guard's Cyber Shield—The Army National Guard conducted Cyber Shield exercises in fiscal years 2013, 2014, and 2015 to train computer network defense teams on the detection, analysis, identification, reporting, and mitigation of cyber threats. The Army National Guard focused the fiscal year 2013 and 2014 exercises on defense of the GuardNet and, in the fiscal year 2015 exercise, changed the focus to support for civil authorities. Specifically, the Cyber Shield 2015 exercise included a scenario in which industrial-control systems for electric grid infrastructure and hydroelectric dams were under cyber threat. According to Army National Guard officials, this change to focus on civil support was in response to requests from states for an exercise that would involve National Guard support for information technology infrastructure in the states.
Vista Host II—In this May 2015 tabletop exercise, North American Aerospace Defense Command and U.S. Northern Command focused on examining planning assumptions, potential resource requirements, and the roles and responsibilities for cyber-related defense support of civil authorities. Specifically, the exercise scenario involved civil support to an electric power generator in responding to a disaster caused by a cyber attack on the generator's industrial-control systems that controlled hydroelectric and nuclear power generation systems. According to U.S. Northern Command officials, the exercise showed that there was a lack of clarity on roles and responsibilities for supporting civil authorities during a cyber incident.

In addition to conducting the 7 exercises, from fiscal years 2013 through 2015, DOD components participated in 2 exercises conducted by non-DOD organizations.

DHS's Cyber Storm IV—The department's Cyber Storm IV exercises, which consisted of a series of 15 exercises focused on cybersecurity preparedness and response capabilities, ran from fiscal year 2011 through fiscal year 2014. The exercise series was designed to, among other things, improve the processes, procedures, interactions, and information-sharing mechanisms that exist or should exist under the interim National Cyber Incident Response Plan. According to DHS officials involved in planning the exercise series, DOD officials assisted in designing the exercise scenarios and also participated in multiple exercises that included a tabletop exercise component designed to examine policy issues. U.S. Cyber Command officials also noted that Cyber Storm IV helped participants better understand federal cyber capabilities.

North American Electric Reliability Corporation's GridEx II—The not-for-profit international organization conducted the GridEx II exercise in fiscal year 2014 on responding to cyber attacks on electric grid components.
The exercise included both executive decision making in a tabletop exercise and a response to simulated cyber attacks on electric grid networks. According to a North American Electric Reliability Corporation official, DOD components such as U.S. Northern Command, U.S. Cyber Command, and two state National Guard units participated in GridEx II. We identified three types of challenges with DOD's exercises that could limit the extent to which DOD is prepared to support civil authorities in a cyber incident; DOD has not addressed these challenges. The DOD Cyber Strategy states that DOD will exercise its DSCA capabilities in support of DHS and other agencies and with state and local authorities to help defend the federal government and the private sector, if directed, in an emergency. Similarly, the Strategy for Homeland Defense and Defense Support of Civil Authorities states that DOD will deepen and facilitate rigorous federal, regional, and state-level planning, training, and exercises through coordination and liaison arrangements that support civil authorities at all levels. Although DOD has developed the two guidance documents, we found challenges that could limit the effectiveness of DOD's exercises. Specifically:

Limited access because of classified exercise environments: According to documents we reviewed and officials we interviewed, DOD's tendency to exercise in a classified environment limited the ability of other federal agencies and critical infrastructure owners to participate in DSCA exercises. In one example, Washington National Guard officials told us that utility personnel who had flown across the country to participate in a civil support exercise at the National Guard unit's invitation were not admitted into the classified exercise environment.
According to DHS’s Cyber Guard 15 after-action report, DOD’s requirement for the exercise environment to be closed and classified prohibited more active participation by industry partners and DHS components, including the National Cybersecurity and Communications Integration Center. According to the same report, the exercise has experienced this challenge since 2013. Similarly, according to U.S. Cyber Command’s after-action report for its February 2016 Cyber Guard 16 tabletop exercise, the exercise experienced issues because officials of state and local governments and the private sector did not have security clearances, which hindered information sharing. Limited inclusion of other federal agencies and critical infrastructure owners: Some of the exercises DOD conducted included key federal agencies such as DHS and critical infrastructure owners such as power providers. However, the exercises DOD conducted did not include other key federal agencies (e.g., the State and Treasury departments) or other critical infrastructure owners (e.g., bank owners). According to an official from ODASD for Cyber Policy, DOD recognizes that such organizations potentially would be involved in a cyber incident. Similarly, according to the DOD Cyber Strategy, the private sector owns and operates over 90 percent of all of the networks and infrastructure of cyberspace and is thus the first line of defense. In Vista Host II, DOD officials reportedly learned that the critical infrastructure owner would contact its security vendors first because of their familiarity with the critical infrastructure’s industrial-control systems; however, none of the DOD exercises we reviewed included such vendors. 
Inadequate incorporation of joint physical-cyber scenarios: The 7 DOD-conducted exercises we reviewed did not fully explore a scenario in which multiple DOD components and commanders would be responding to a cyber incident that causes an emergency or disaster with physical effects or occurs during such an emergency. The Joint Action Plan for State-Federal Unity of Effort on Cybersecurity, which was approved by DOD, recognizes the possibility of a cyber incident with physical effects as well as a physical incident with cyber implications. DOD recognizes that a cyber incident could cause physical effects, including cascading failures of multiple, interdependent, critical, life-sustaining infrastructure sectors. Similarly, Washington National Guard officials told us that bad actors may take advantage of a disaster or emergency to conduct cyber attacks on information and communications systems in that geographic area. In its planning, DOD has recognized that this is an area that needs to be addressed. Specifically, a planning document that the ODASD for Cyber Policy, the National Guard Bureau, U.S. Northern Command, and U.S. Cyber Command developed to implement the DOD Cyber Strategy states that the department should conduct an exercise that will incorporate cybersecurity as part of broader exercise scenarios. DOD officials acknowledged that DOD exercises to date, such as the Cyber Guard exercises, have not been ideal for a nationwide exercise that addresses multiple complexities of cyber incidents and physical consequences. In addition to these challenges, we also observed that DOD has not conducted a tier 1 exercise involving various partners in highly complex environments, and thus has not addressed its goals. Specifically, while DOD conducted 7 exercises that evaluated, in some part, civil support for a cyber incident, these exercises ranged from tabletop exercises to others that do not meet the Joint Staff’s tier 1 exercise criteria. 
DOD’s Cyber Strategy exercise planning document states that DOD needs to conduct a tier 1 exercise to achieve the DOD Cyber Strategy goal of exercising its DSCA capabilities in support of DHS and other agencies, including state and local authorities, to help defend the federal government and the private sector, if directed, in an emergency. Similarly, U.S. Northern Command and ODASD for Cyber Policy officials told us that the department needs to conduct a tier 1 exercise to explore a disaster with physical and cyber effects. DOD’s Cyber Strategy planning document states, and officials agree, that the department needs to conduct such an exercise to prepare its forces to support civil authorities during or after a cyber incident. However, DOD has not conducted a tier 1 exercise that would prepare DOD forces and enable the department to achieve one of the goals in the DOD Cyber Strategy because the department has not identified an exercise to do so. According to U.S. Northern Command officials, the command wanted to incorporate a cyber civil-support scenario in its 2016 Ardent Sentry exercise, which is a tier 1 exercise. However, the command cancelled its plans after U.S. Cyber Command—a DOD component that would potentially provide critical capabilities in supporting civil authorities—stated that the command had to focus its exercise resources on the Cyber Guard exercise to certify DOD’s cyber protection teams. Until DOD identifies and conducts a tier 1 exercise, DOD will miss an opportunity to fully test response plans, evaluate response capabilities, assess the clarity of established roles and responsibilities, and improve proficiency in supporting DHS, other federal agencies, and state and local authorities, if directed, in an emergency. In addition, identifying and conducting a tier 1 exercise would provide DOD an opportunity to address the challenges the department has experienced in previous exercises. 
For example, the tier 1 exercise could be conducted in part on an open network, include additional federal agencies and other critical infrastructure owners that would be involved in a response, and incorporate scenarios where both cyber threats and physical effects were involved. DOD has a key role in preparing to defend the homeland and support civil authorities in all domains—including cyberspace—and plays a crucial role in supporting a national effort to confront cyber threats to critical infrastructure. The National Guard has cyber capabilities that could be used—if requested and approved—to support civil authorities in a cyber incident. During an emergency, it is necessary for decision makers to have visibility into the full capabilities that National Guard units possess to support civil authorities. Unless DOD develops or specifies a database to provide full and quick identification of all National Guard units’ cyber capabilities, DOD may not have timely visibility into, and access to, needed capabilities when requested by civil authorities during a cyber incident. Similarly, DOD has conducted and participated in exercises to prepare the department to support civil authorities in a cyber incident. Unless DOD conducts a tier 1 exercise that involves various partners in highly complex environments, DOD risks calling upon forces that are unprepared to support civil authorities during or after a disaster with physical and cyber effects, and will miss a key opportunity to address the challenges we have identified with its previous exercises. 
To ensure that decision makers have immediate visibility into all capabilities of the National Guard that could support civil authorities in a cyber incident, we recommend that the Secretary of Defense maintain a database that can fully and quickly identify the cyber capabilities that the National Guard in the 50 states, three territories, and the District of Columbia have and could be used—if requested and approved—to support civil authorities in a cyber incident. To better prepare DOD to support civil authorities in a cyber incident, we recommend that the Secretary of Defense direct the Deputy Assistant Secretary of Defense for Cyber Policy, the Chief of the National Guard Bureau, the Commander of U.S. Northern Command, and the Commander of U.S. Cyber Command to conduct a tier 1 exercise that will improve DOD’s planning efforts to support civil authorities in a cyber incident. Such an exercise should also address challenges from prior exercises, such as limited participant access to the exercise environment, inclusion of other federal agencies and private-sector cybersecurity vendors, and incorporation of emergency or disaster scenarios concurrent to cyber incidents. We provided a draft of this report to DOD and DHS for their review and comment. In its written comments, DOD partially concurred with our two recommendations. DOD’s comments are summarized below and are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. DHS provided technical comments, which we incorporated as appropriate. DOD partially concurred with our recommendation that the Secretary of Defense maintain a database that can fully and quickly identify the cyber capabilities that the National Guard in the 50 states, three territories, and the District of Columbia have and could be used—if requested and approved—to support civil authorities in a cyber incident. 
In its response, DOD stated that it already tracks capability and readiness across the entire force. Specifically, DOD stated that National Guard units assigned to and performing Title 10, U.S. Code, missions report readiness through the Defense Readiness Reporting System, and that units assigned to perform Title 32, U.S. Code, missions report to their state’s adjutant general. However, as we reported—and DOD’s comments reflect—the Defense Readiness Reporting System does not identify National Guard capabilities that could provide cyber support in a cyber incident. While this system could track some National Guard capabilities, such as cyber protection teams assigned to U.S. Cyber Command, this system alone will not provide DOD leaders complete information about capabilities they could employ to assist civil authorities. For example, while National Guard computer network defense teams could serve as first responders for states for cyber emergencies and may provide surge capacity to national capabilities, the readiness system will not include these teams. In its comments, DOD also made reference to an annual report that state adjutants general are to provide to the Chief of the National Guard Bureau regarding the readiness of their respective state National Guards. During our engagement, we reviewed the National Guard Bureau’s submission to the July-September 2015 Quarterly Readiness Report to the Congress, which the bureau uses to meet its requirement to provide DOD leaders a status on the readiness of the National Guard to conduct DSCA activities. We found that the report identifies the readiness of state National Guard units to conduct certain DSCA missions—such as hurricane response. However, the National Guard has not incorporated other DSCA missions—including cyber civil support—in the Quarterly Readiness Report to the Congress. Consequently, as prepared now, this report does not help DOD leaders identify assets that could be used in a cyber crisis scenario. 
However, if the National Guard Bureau modifies the report to include the readiness level of National Guard units to provide civil support in a cyber incident, DOD leaders will potentially have more visibility into cyber capabilities that exist within the National Guard across each state. Because the Defense Readiness Reporting System and the National Guard report do not currently enable DOD leaders to identify National Guard cyber capabilities that could facilitate a quick response in a cyber incident, we continue to believe that DOD should maintain a database—as required by law—that can fully and quickly identify the cyber capabilities that the National Guard possesses. In response to DOD’s comments, we clarified the recommendation that was initially in the report. Specifically, we modified the recommendation from stating that the database should include cyber capabilities that “all National Guard units possess” to cyber capabilities that “the National Guard in the 50 states, three territories, and the District of Columbia have and could be used.” This modification is consistent with the requirement identified in Section 1406 of the John Warner National Defense Authorization Act for Fiscal Year 2007, which states that the database should include emergency response capabilities that each state’s National Guard may be able to provide in response to a natural or manmade domestic disaster. We discussed this modification with DOD officials and they agreed that the modified recommendation provided them the necessary flexibility to address the report’s finding and recommendation. DOD partially concurred with our recommendation that the Secretary of Defense direct the Deputy Assistant Secretary of Defense for Cyber Policy, the Chief of the National Guard Bureau, the Commander of U.S. Northern Command, and the Commander of U.S. Cyber Command to conduct a tier 1 exercise that will improve DOD’s planning efforts to support civil authorities in a cyber incident. 
DOD concurred in the need to exercise a whole range of challenges associated with responding to a cyber incident but stated that it believes that the Cyber Guard exercise meets the intent of the recommendation. DOD stated that Cyber Guard is designed to address a whole-of-government, whole-of-nation response to a significant cyber attack and included participants from across DOD, the National Guard, DHS, the Federal Bureau of Investigation, the intelligence community, and the private sector. Based on our review of after-action reports and discussions with DOD officials, we believe that the Cyber Guard exercise provides DOD components with an opportunity to evaluate aspects of the department’s DSCA mission—such as Cyber Guard 15’s test of DOD participation in a response to a cyber attack of significant consequence against U.S. critical infrastructure. However, these after-action reports and DOD officials at various levels also identified a number of issues that keep Cyber Guard in its current form from being a tier 1 exercise that would enable the department to achieve its DOD Cyber Strategy goal of exercising its DSCA capabilities in support of DHS and other agencies, including state and local authorities. Specifically, officials from the ODASD for Cyber Policy, U.S. Northern Command, U.S. Cyber Command, and National Guard units told us that Cyber Guard, in its current form, does not meet the intentions of a tier 1 exercise. For example, according to DOD officials, one of the primary purposes of Cyber Guard is to use the exercise as a forum to certify cyber protection teams as being operationally ready. Consequently, according to DOD officials, this does not provide DOD flexibility to address training requirements that are not part of the certification requirements. DOD has also conducted the exercise in a classified forum, which consistently limits public and private sector participation. 
DOD stated that it strives for greater inclusion of public and private entities in its exercises to increase realism and enhance its understanding of domestic response requirements; however, the exercises are typically classified because they can reveal capabilities, readiness, or plans for military forces that must be protected. DOD’s approach does not recognize that while some DOD components may support civil authorities using classified means, other DOD components—including the National Guard—may be coordinating, training, advising, or assisting civil authorities on unclassified networks. Other cyber civil support exercises, such as the Army National Guard’s Cyber Shield exercise and the North American Electric Reliability Corporation’s GridEx exercise, demonstrate that training in unclassified forums is both possible and beneficial. If DOD modifies Cyber Guard to address the challenges we have highlighted—such as limited participant access to the exercise environment, inclusion of other federal agencies and private-sector cybersecurity vendors, and incorporation of emergency or disaster scenarios concurrent to cyber incidents—it could improve DOD’s planning efforts to support civil authorities in a cyber incident. Otherwise, we still believe that DOD should conduct a tier 1 exercise such as a modified Ardent Sentry that includes a DOD response to civil authorities for a cyber incident. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or KirschbaumJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To examine the extent to which the National Guard has developed cyber capabilities that could support civil authorities in response to a cyber incident and the Department of Defense (DOD) has visibility over those capabilities, we reviewed DOD policies and guidance to identify the National Guard’s role in providing Defense Support of Civil Authorities, National Guard cyber capabilities, and the mechanisms used to identify National Guard capabilities. Specifically, we reviewed Joint Publication 3-28, Defense Support of Civil Authorities; DOD Directive 3025.18, Defense Support of Civil Authorities (DSCA); DOD Instruction 3025.22, The Use of the National Guard for Defense Support of Civil Authorities; and DOD Directive 7730.65, Department of Defense Readiness Reporting System (DRRS). To identify National Guard cyber capabilities, we reviewed DOD’s and the National Guard’s cyber mission analysis reports. Additionally, we discussed National Guard unit cyber capabilities and capability identification mechanisms with officials from DOD involved in DSCA from the National Guard Bureau, the Army National Guard, the Air National Guard, and the Office of the Deputy Assistant Secretary of Defense (ODASD) for Cyber Policy. We also spoke with Timothy Lowenberg, a recognized expert on the National Guard’s role in cyber incidents who is a retired U.S. Air Force Major General; he also has served as an advisor to the Council of Governors and the National Governors Association. We compared the DOD guidance documents listed above and the information we received in our interviews to the requirement for identifying National Guard emergency response capabilities listed in the United States Code. Based on these discussions and relevant DOD documentation, we categorized National Guard units with cyber capabilities. 
After pre-testing our interview questions with officials from the Maryland National Guard and meeting with the Colorado National Guard, we conducted structured interviews with a non-generalizable sample of officials from state National Guard cyber units from Georgia, Nevada, and Washington to discuss their cyber civil-support roles and responsibilities, cyber capabilities, and capability tracking mechanisms. We judgmentally selected these states based on the type and number of cybersecurity teams in the state, participation of teams in cyber civil-support exercises, and the relative level of information-sector employment in the state based on 2014 Bureau of Labor Statistics sector-level data. We found the Bureau of Labor Statistics information-sector activity data sufficiently reliable for the purpose of this selection. Our findings regarding the capabilities identified during our three sets of interviews with these National Guard units are not generalizable to all state National Guard cyber units and do not reflect an exhaustive list of National Guard cyber capabilities. While some of the National Guard capabilities could be used to support their respective state missions, our focus was on National Guard capabilities that could be used in DOD’s DSCA mission. To assess the extent to which DOD has conducted and participated in exercises to support civil authorities in cyber incidents and any challenges it faced in doing so, we reviewed the DOD Cyber Strategy, Strategy for Homeland Defense and Defense Support of Civil Authorities, and Joint Publication 3-28 for DSCA. We also reviewed these documents to determine the types of exercises in which DOD should be conducting or participating. We identified a non-generalizable sample of relevant exercises by reviewing exercise planning documentation and through interviews with knowledgeable officials. 
Specifically, we reviewed an exercise planning document that DOD developed in response to the DOD Cyber Strategy and interviewed DOD and DHS officials to identify exercises that DOD components conducted or participated in from fiscal years 2013 through 2015. We chose this timeframe because it allowed us to identify a range of exercises for review and to identify any trends over time. We selected exercises—to include tabletop or simulated network defense exercises—that addressed computer network defense and involved support to civil authorities. We excluded exercises that focused solely on defense of DOD networks. We confirmed these exercises met our selection criteria through reviewing exercise after-action reports. To examine DOD planning for conducting future exercises related to civil support for cyber incidents, we reviewed the DOD Cyber Strategy exercise planning document. We also reviewed DOD guidance for such exercises, such as the DOD Cyber Strategy; Joint Publication 3-28; DOD Directive 3025.18; and Chairman of the Joint Chiefs of Staff Instruction 3500.01H, Joint Training Policy for the Armed Forces of the United States. We compared DOD plans for exercises for supporting civil authorities in cyber incidents to these documents. We observed the Cyber Guard 2016 Legal/Policy Tabletop Exercise held in February 2016 in Laurel, Maryland. To learn about DOD challenges in conducting exercises and planning for exercises of civil support in cyber incidents over the next few years, we interviewed officials from ODASD for Cyber Policy, National Guard Bureau, U.S. Northern Command, and U.S. Cyber Command. We conducted this performance audit from June 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, key contributors to this report included Tommy Baril (Assistant Director), Tracy Barnes, David Beardwood, Kevin Copping, Patricia Farrell Donahue, Jamilah Moon, and Richard Powelson.
Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016.
Civil Support: DOD is Taking Action to Strengthen Support of Civil Authorities. GAO-15-686T. Washington, D.C.: June 10, 2015.
Cybersecurity: Actions Needed to Address Challenges Facing Federal Systems. GAO-15-573T. Washington, D.C.: April 22, 2015.
Information Security: Agencies Need to Improve Cyber Incident Response Practices. GAO-14-354. Washington, D.C.: April 30, 2014.
Information Security: Federal Agencies Need to Enhance Responses to Data Breaches. GAO-14-487T. Washington, D.C.: April 2, 2014.
Civil Support: Actions Are Needed to Improve DOD’s Planning for a Complex Catastrophe. GAO-13-763. Washington, D.C.: September 30, 2013.
Homeland Defense: DOD Needs to Address Gaps in Homeland Defense and Civil Support Guidance. GAO-13-128. Washington, D.C.: October 24, 2012.
Defense Cyber Efforts: Management Improvements Needed to Enhance Programs Protecting the Defense Industrial Base from Cyber Threats. GAO-12-762SU. Washington, D.C.: August 3, 2012.
Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012.
Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012.
The DOD 2015 Cyber Strategy reported that a cyber attack could present a significant risk to U.S. national security. House Report 114-102 included a provision that GAO assess DOD's plans for providing support to civil authorities for a domestic cyber incident. This report assesses whether (1) the National Guard has developed and DOD has visibility over capabilities that could support civil authorities in a cyber incident; and (2) DOD has conducted and participated in exercises to support civil authorities in cyber incidents and any challenges it faced. To conduct this review, GAO examined DOD and National Guard reports, policies, and guidance and interviewed officials about the National Guard's capabilities in defense support to civil authorities. GAO also reviewed after-action reports and interviewed DOD officials about exercise planning. National Guard units have developed capabilities that could be used, if requested and approved, to support civil authorities in a cyber incident; however, the Department of Defense (DOD) does not have visibility of all National Guard units' capabilities for this support. GAO found three types of cyber capabilities that exist in National Guard units:
Communications directorates: These organizations operate and maintain the National Guard's information network.
Computer network defense teams: These teams protect National Guard information systems, could serve as first responders for states' cyber emergencies, and provide surge capacity to national capabilities.
Cyber units: These teams are to conduct cyberspace operations.
However, DOD does not have visibility of all National Guard units' cyber capabilities because the department has not maintained a database that identifies the National Guard units' cyber-related emergency response capabilities, as required by law. 
Without such a database to fully and quickly identify National Guard cyber capabilities, DOD may not have timely access to these capabilities when requested by civil authorities during a cyber incident. DOD has conducted or participated in exercises to support civil authorities in a cyber incident or to test the responses to simulated attacks on cyber infrastructure owned by civil authorities, but has experienced several challenges that it has not addressed. These challenges include limited participant access because of a classified exercise environment, limited inclusion of other federal agencies and critical infrastructure owners, and inadequate incorporation of joint physical-cyber scenarios. In addition to these challenges, DOD has not identified and conducted a “tier 1” exercise—an exercise involving national-level organizations and combatant commanders and staff in highly complex environments. A DOD cyber strategy planning document states, and DOD officials agreed, that such an exercise is needed to help prepare forces in the event of a disaster with physical and cyber effects. Until DOD identifies and conducts a tier 1 exercise, DOD will miss an opportunity to fully test response plans, evaluate response capabilities, assess the clarity of established roles and responsibilities, and address the challenges DOD has experienced in prior exercises. The table below shows selected DOD-conducted exercises. GAO recommends that DOD maintain a database that identifies National Guard cyber capabilities, conduct a tier 1 exercise to prepare its forces in the event of a disaster with cyber effects, and address challenges from prior exercises. DOD partially concurred with the recommendations, stating that current mechanisms and exercises are sufficient to address the issues highlighted in the report. 
GAO believes that the mechanisms and exercises, in their current formats, are not sufficient and continues to believe the recommendations are valid, as described in the report.
In February 2005, Marine Corps combatant commanders identified an urgent operational need for armored tactical vehicles to increase crew protection and mobility of Marines operating in hazardous fire areas against improvised explosive devices, rocket-propelled grenades, and small arms fire. In response, the Marine Corps identified the solution as the up-armored high-mobility multi-purpose wheeled vehicle. Over the next 18 months, however, combatant commanders continued to identify a requirement for more robust mine-protected vehicles. According to the acquisition plan, in November 2006, the Marine Corps awarded a sole source indefinite delivery, indefinite quantity (IDIQ) contract and subsequently placed orders for the first 144 vehicles to respond to the urgent requirement while it conducted a competitive acquisition for the balance of the vehicles. In February 2007, the Assistant Secretary of the Navy for Research, Development, and Acquisition approved the MRAP’s entry into production as a rapid acquisition capability. In September 2007, the Undersecretary of Defense for Acquisition, Technology, and Logistics designated MRAP as a major defense acquisition program with the Marine Corps Systems Command as the Joint Program Executive Officer. Quantities to be fielded quickly grew from the initial 1,169 vehicles for the Marine Corps identified in the 2005 urgent need statement to the current approved requirement of over 16,000 vehicles split among the Army, Marine Corps, Navy, Air Force, and Special Operations Command, plus others for ballistic testing. Three versions of the MRAP vehicle were acquired for different missions: Category I, the smallest version of MRAP, is primarily intended for operations in the urban combat environment, and can carry up to 7 personnel. 
Category II is a multi-mission platform capable of supporting security, convoy escort, troop or cargo transport, medical, explosive ordnance disposal, or combat engineer operations, and carries up to 11 personnel. Category III, the largest of the MRAP family, is primarily intended for the role of mine and IED clearance operations, and carries up to 13 personnel. MRAP vehicles were purchased without mission equipment—such as communications and situational awareness subsystems—that must be added before the vehicles can be fielded to the user. The military services buy the subsystems for their vehicles and provide them as government furnished equipment to be installed at a government integration facility located at the Space and Naval Warfare Systems Command in Charleston, South Carolina. DOD used a tailored acquisition approach to rapidly acquire and field MRAP vehicles. The program established minimal operational requirements, decided to rely on only proven technologies, and relied heavily on commercially available products. The program also undertook a concurrent approach to producing, testing, and fielding the most survivable vehicles as quickly as possible. To expand limited existing production capacity, the department expanded competition by awarding IDIQ contracts to nine commercial sources. To evaluate design, performance, producibility, and sustainability, DOD committed to buy at least 4 vehicles from each vendor. According to program officials, subsequent delivery orders were based on a phased testing approach with progressively more advanced vehicle test results and other assessments. To expedite the fielding of the vehicles, the government retained the responsibility for final integration of mission equipment packages including radios and other equipment into the vehicles after they were purchased. 
DOD also designated the MRAP program as DOD’s highest priority acquisition, which helped contractors and other industry partners to more rapidly respond to the urgent need and meet production requirements. Finally, some of the contractors involved in the acquisition responded to the urgency communicated by the department by investing their own capital early to purchase needed steel and other critical components in advance of orders. The decision on the part of the contractors to purchase components in advance of orders was not required under their contracts and was done at their own risk. DOD leadership took several steps to communicate the importance of producing survivable vehicles as quickly as possible. For example: In May 2007, the Secretary of Defense designated MRAP as DOD’s single most important acquisition program and established the MRAP Task Force to integrate planning, analysis, and actions to accelerate MRAP acquisition. The Secretary also approved a special designation for MRAP—a DX rating—that requires related contracts to be accepted and performed on a priority basis over other contracts without this rating. The Secretary of the Army waived a restriction on armor plate steel, which expanded the countries from which DOD could procure steel. DOD allocated funds to increase steel and tire production capacity for MRAP vehicles as these materials were identified as potential limiting factors for the MRAP industrial base. DOD recognized that no single vendor could provide all of the vehicles needed to meet requirements quickly enough and invited vendors to offer their non-developmental solutions. The request for proposal made clear that the government planned to award one or more IDIQ contracts to those vendors that were determined to be the best value to the government. 
The Marine Corps awarded IDIQ contracts to nine vendors and issued the first delivery orders in early 2007 for 4 vehicles from each vendor for initial limited ballistic and automotive testing. One vendor could not deliver test articles in the time required, and the Marine Corps terminated that contract at no cost to the government. According to program officials, vehicles from another vendor did not meet minimum requirements, and the Marine Corps terminated that contract for convenience. Conventional DOD acquisition policy dictates that weapons be fully tested before they are fielded to the user. However, the need to begin fielding survivable vehicles as quickly as possible resulted in a phased approach designed to quickly identify vehicles that met the requirement for crew protection so they could be rapidly fielded. This approach resulted in a high degree of overlap between testing and fielding of the MRAP vehicles; orders for thousands of vehicles were placed before operational testing began, and orders for thousands more were placed before it was completed. Figure 1 shows the concurrent nature of the overall test plan. The Director, Operational Test & Evaluation, approved the MRAP Test and Evaluation Master Plan in 2007. Candidate vehicles underwent ballistic and automotive testing beginning in March 2007. The test plan included three phases of developmental tests (DT) of increasing scope as well as initial operational test and evaluation (IOT&E). Phase I included a limited evaluation by users. Phase II further evaluated vehicles at the desired level of performance against the ballistic threat, added more endurance miles to the automotive portion of the test, and included mission equipment such as radios and other electronic systems.
Phase III raised the bar for ballistic performance to the emerging threat and assessed non-ballistic protection, including near-lightning strikes, high-altitude electromagnetic pulse, and nuclear, biological, and chemical decontamination tests. The automotive portion of the test increased endurance to 12,000 miles per vehicle. Developmental and operational tests were conducted from March 2007 through June 2008. Each of the six MRAP variants has also undergone IOT&E. All vehicles were rated operationally survivable and operationally effective with limitations by the Army Evaluation Center; the limitations involved vehicle size, weight, mobility, and weapon dead space. All vehicles were also rated operationally suitable with limitations, due to logistic shortfalls, payload restrictions, and restricted fields of view. Schedule and performance results for the MRAP have been very good overall. At the time of our review in July 2008, nearly all of the developmental and operational testing had been completed; the Marine Corps, the buying command for the MRAP, had placed orders for 14,173 MRAPs; and, as of May 2008, a little more than a year after the first contracts were awarded, 9,121 vehicles had been delivered. As of July 2009, 16,204 vehicles have been produced and 13,848 vehicles have been fielded in two theaters of operation. Total procurement funding for the MRAP vehicles, mostly through supplemental appropriations, was about $22.7 billion. According to DOD officials, the MRAP is providing safe, sustainable, and survivable transport for troops in the theater. DOD recognizes that MRAPs have limitations, particularly in the area of off-road mobility and transportability. Nonetheless, MRAPs are considered outstanding vehicles for specific missions.
Twenty-one months elapsed from the time the need was first identified in February 2005 until the sole-source IDIQ contract was awarded and subsequent orders were placed for the first 144 vehicles in November 2006. Three months elapsed between the award of the sole-source contract and the delivery of vehicles under the orders placed pursuant to the contract in February 2007—about the same time the IDIQ contracts were awarded to multiple vendors for more vehicles. Testing of vehicles delivered under orders placed pursuant to the newly awarded contracts began one month later, in March 2007. Initial operational capability was accomplished in October 2007, about 33 months after the need was first identified. Ultimately, MRAP vendors successfully increased their production rates to meet the delivery requirement (see fig. 2). Production began in February 2007 with one vendor producing 10 vehicles. By March 2008—a little more than a year after the contracts were awarded—6,935 vehicles had been produced. According to DOD officials, the MRAP provides survivable, safe, and sustainable vehicles for troops in theater. It is recognized that MRAPs have limitations, particularly in the areas of off-road mobility and transportability. Nevertheless, MRAPs met minimum requirements for specific missions. Based on an earlier survey of over 300 soldiers interviewed in the field, warfighters were satisfied with MRAP overall, which offers a significant improvement in survivability. MRAP vehicles were seen as well suited for combat logistics patrols, route clearance missions, raids, quick reaction forces, and other missions requiring a large dismounted force. MRAP vehicles were seen as not well suited for mounted patrols in constrained urban areas or for extensive off-road operations. As with any acquisition of this nature, there are lessons to be learned.
On the positive side, it appears that quick action by the Secretary of Defense to declare the MRAP program DOD's highest priority and give it a DX rating allowed the government and the contractors access to more critical materials than otherwise would have been available. The availability of funding, mostly through supplemental appropriations, was essential. In addition, the decisions to 1) use only proven technologies, 2) keep requirements to a minimum, 3) infuse significant competition into the contracting strategy, and 4) keep final integration responsibility with the government are all practices that led to positive outcomes. Challenges remain in the form of reliability, mobility, and safety, which have required some modifying of the designs, postproduction fixes, and adapting how vehicles were to be used. Also, long-term sustainment costs for MRAP are not yet well understood, and the services are only now deciding how MRAP will fit into their longer-term organizations. This combination of actions to address the urgent need and accelerate the delivery of MRAP vehicles to theater was innovative and effective. Major vendors and key subcontractors responded to the urgency communicated by the department. According to vendor officials from four of the companies, they collectively invested a substantial amount of their own capital in anticipation of MRAP work. For example, some vendors purchased steel and other critical components in advance of delivery orders for MRAP vehicles in order to meet projected timelines. In other cases, vendors purchased or developed new facilities for MRAP production. Multiple vendors also formed teaming arrangements to meet the increase in vehicle delivery demands. As stated above, these actions on the part of the contractors were not required under their contracts and were done at their own risk.
On the down side, because of unique designs, operating procedures, and maintenance requirements for multiple vehicles from multiple vendors, vehicle maintenance and support has been somewhat complicated. To ease maintenance and support concerns in the near term, the MRAP program office established a centralized training entity where maintainers would be cross-trained on multiple vendors' vehicles. Longer term, a key challenge for DOD will be to effectively manage maintenance personnel and vehicle repair parts without sacrificing vehicle operational availability. Also, long-term sustainment costs for MRAP are not yet projected and budgeted, and the services are only now deciding how to fit MRAP vehicles into their organizational structures. Another lesson, based on operational use of the MRAP vehicles, concerned their limited maneuverability and off-road capability. As a result, DOD is in the process of acquiring an all-terrain version of the MRAP to address the more difficult terrain and road conditions in Afghanistan. While most of the vehicles met ballistic requirements, other issues were identified (reliability, mobility and handling, and safety). These issues required some modifying of the designs, postproduction fixes, or adapting how vehicles were to be used. Testing of proposed solutions to more advanced threats continues. The program office continues to enhance MRAP vehicle system performance through capability insertion initiatives executed via engineering change proposals. Such changes are verified through testing, which will be an ongoing process as additional upgrades are applied. What were the keys to DOD meeting the urgent requirement for fielding MRAP in a timely manner? First, DOD kept the requirements simple, clear, and flexible and did not dictate a single acceptable solution. Second, DOD made sure that only mature technologies and stable designs were used by setting a very short and inflexible schedule.
DOD's decision to act as integrator of government-furnished equipment after initial delivery also eliminated some risk and uncertainty. Third, MRAP was given the highest possible acquisition priority, and the participating contractors responded in positive ways to meet the need. Fourth, full and timely funding for the acquisition was a definite plus. The question is, can this formula be applied to all of DOD's major acquisitions and the broader acquisition process? The first two keys—simple requirements and mature technologies—certainly can be and, in fact, recent changes to the department's acquisition policies and acquisition reform legislation passed by the Congress should enable these practices to be implemented more easily than in the past. However, the MRAP program also owes its success to the third and fourth key practices—a DX rating as the highest priority acquisition in the department and nearly unlimited funding to meet the urgent need—which are not scalable to the broader acquisition process. Not every program can be the highest priority, and acquisition funds are constrained. While the MRAP acquisition benefited from all of the practices mentioned above, the biggest differentiator between that rapid acquisition and other more common acquisitions in DOD was that it established requirements that could be achieved with existing technologies. Recent studies by the Defense Science Board (DSB), the Defense Acquisition Performance Assessment Panel (DAPA), and GAO all indicate that the department can and should acquire weapon systems that fulfill urgent warfighter needs and deliver them to the field much more quickly. The DSB study recommends a dual acquisition path that allows for a "rapid" acquisition process for urgent needs and a "deliberate" acquisition process for others.
It recommends a new agency, proposed as the Rapid Acquisition and Fielding Agency, that would focus on speed, use of existing technologies, and acquisition flexibility to achieve the "75 percent solution" quickly. The DAPA Panel report, among other things, recommended that the acquisition process should never exceed 6 years from its beginning to initial operational capability of the acquired weapon system. It stated that mature technologies and achievable requirements are critical to the success of such time-certain development efforts. GAO has issued multiple reports under our "best practices" body of work that underscore the need for faster development cycles, mature technologies, well-understood requirements, systems engineering knowledge, and incremental delivery of capabilities to enable quicker deliveries. As early as 1999, we concluded that successful product developments separated technology development from product development, invested time and money in ensuring that their technology base was vibrant and cutting edge, and eliminated technology risk from acquisitions. We noted that DOD's science and technology (S&T) organization would need to be organized and structured differently, provided more funding to take new technologies to higher levels of maturity, and would have to coordinate better with the department's acquisition community to achieve the synergies necessary to reduce cycle times. We made recommendations along those lines. We believe that the "game changer" today in achieving rapid acquisition is the technology base. Finally, a broader lesson learned is that it may be time to invest the time, money, and management skills in the S&T community to enable the effectiveness we expect from the acquisition community. Mr. Chairman, that concludes my prepared statement. I will be happy to answer any of your questions.
As of July 2008, about 75 percent of casualties in combat operations in Iraq and Afghanistan were attributed to improvised explosive devices. To mitigate the threat from these weapons, the Department of Defense (DOD) initiated the Mine Resistant Ambush Protected (MRAP) program in February 2007, which used a tailored acquisition approach to rapidly acquire and field the vehicles. In May 2007, the Secretary of Defense affirmed MRAP as DOD's most important acquisition program. To date, about $22.7 billion has been appropriated for the procurement of more than 16,000 MRAP vehicles. This testimony describes the MRAP acquisition process, the results to date, lessons learned from that acquisition, and potential implications for improving the standard acquisition process. It is based largely on the work we have conducted over the past few years on the MRAP program. Most prominently, in 2008, we reported on the processes followed by DOD for the acquisition of MRAP vehicles and identified challenges remaining in the program. To describe DOD's approach for and progress in implementing its strategy for rapidly acquiring and fielding MRAP vehicles, we reviewed DOD's plans to buy, test, and field the vehicles and discussed the plans with cognizant department and contractor officials. To identify the remaining challenges for the program, we reviewed the results of testing and DOD's plans to upgrade and sustain the vehicles. DOD's use of a tailored acquisition approach to rapidly acquire and field MRAP vehicles was successful. The program relied only on proven technologies and commercially available products; established minimal operational requirements; and undertook a concurrent approach to producing, testing, and fielding the vehicles. To expand limited production capacity, indefinite delivery, indefinite quantity (IDIQ) contracts were awarded to nine commercial sources, with DOD agreeing to buy at least 4 vehicles from each.
Subsequent orders were based on a concurrent testing approach with progressively more advanced vehicle test results and other assessments. To expedite fielding of the vehicles, the government retained the responsibility for final integration of mission equipment packages, including radios and other equipment, into them. DOD also made MRAP its highest priority acquisition, which helped contractors and others respond more rapidly to the need and meet production requirements, in part by investing their own capital early to purchase steel and other critical components in advance of orders. Schedule and performance results for MRAP were very good overall. In July 2008, nearly all testing was completed; the Marine Corps had placed orders for 14,173 MRAPs; and, as of May 2008, 9,121 vehicles had been delivered. As of July 2009, 16,204 vehicles have been produced and 13,848 vehicles fielded in two theaters of operation. Total MRAP procurement funding was about $22.7 billion, mostly through supplemental appropriations. In terms of lessons learned, MRAP's success was driven by several factors, including quick action to declare its acquisition DOD's highest priority and to give it a DX rating, which allowed access to more critical materials than were otherwise available. The availability of supplemental appropriations was also essential. However, while neither of these factors is practically transferable to other programs, the decisions to 1) use only proven technologies, 2) keep requirements to a minimum, 3) infuse significant competition into contracting, and 4) keep final integration responsibility with the government all led to positive outcomes and may be transferable. Challenges to MRAP remain in its reliability, mobility, and safety, which required some modifying of designs, postproduction fixes, and adapting how vehicles were used. Also, long-term sustainment costs are not yet well understood, and the services are only now deciding how MRAP fits into their organizations over the longer term.
GAO's multiple best practices reports have underscored the need for the use of mature technologies, well understood requirements, systems engineering knowledge, and incremental delivery of capabilities to enable quicker deliveries. Finally, a broader lesson learned is that it is time to invest the time, money, and management skills in the science and technology community to enable the effectiveness we expect from the acquisition community.
SSA operates the Disability Insurance (DI) and Supplemental Security Income (SSI) programs—the two largest programs providing cash benefits to people with disabilities. The law defines disability for both programs as the inability to engage in any substantial gainful activity by reason of a severe physical or mental impairment that is medically determinable and is expected to last at least 12 months or result in death. The programs have grown substantially, from 10.7 million beneficiaries and $61 billion in benefits in 1995 to 11.4 million beneficiaries and $91 billion in federal benefits to individuals with disabilities in 2003. While disability benefits account for only 15 percent of SSA's total benefit payments for its Old-Age, Survivors and Disability Insurance (OASDI) programs, administering the disability benefits accounted for 45 percent of the agency's annual administrative expenses. The relatively high cost of administering the DI program reflects the complex and demanding nature of making disability decisions. SSA estimates that the cost of the disability programs will rise substantially in the near future as the baby boom generation reaches its disability-prone years.

The disability determination process begins at a field office, where an SSA representative determines whether a claimant meets the programs' non-medical eligibility criteria. Claims meeting these criteria are forwarded to the state DDS to determine if a claimant meets the agency's definition of disability. At the DDS, the disability examiner takes the lead, or works as a team with the medical or psychological consultants, to analyze a claimant's documentation, gather additional evidence as appropriate, and approve or deny the claim. A denied claimant may ask the DDS to reconsider its finding, at which point a different DDS team reviews the claim.
If the claim is denied again, the claimant may appeal the determination to SSA’s Office of Hearings and Appeals (OHA), where it will be reviewed by an ALJ. The ALJ usually conducts a hearing in which the claimant and others may testify and present new evidence. In making the disability decision, the ALJ uses information from the hearing and from the state DDS, including the findings of the DDS medical consultant. A claimant whose appeal is denied may request a review by SSA’s Appeals Council and, if denied again, may file suit in federal court. Figure 1 provides an overview of SSA’s disability decision-making process and outcomes for 2003. SSA uses a sequential evaluation process when determining disability. First, SSA field office representatives determine whether a claimant is performing substantial gainful work. If not, DDS or ALJ adjudicators will assess the severity of a claimant’s medical condition(s) to determine whether it meets or equals the criteria in SSA’s regulations (commonly referred to as the medical listings). For a claimant whose conditions do not meet or equal the listings, adjudicators then focus on the functional consequences of the claimant’s medically determined impairments—that is, whether the claimant can perform work he or she has done in the past, and, if not, whether the claimant can perform other work in the national economy. Concerns about the rate of appeals for hearings, ALJs’ allowance rates, and the accuracy and consistency of ALJ decisions led the Congress to direct SSA to conduct a study in 1980 to determine the extent to which hearings decisions conformed to legal requirements and binding SSA policy. Since the allowance rates at the hearings level could be influenced by many factors, such as the introduction of new evidence, the purpose of the 1980 study was to present the same evidence on cases to different reviewers representing different adjudication levels. 
In determining the extent to which decision makers agreed on whether to allow or deny benefits, the study concluded that different levels of decision makers had significantly different allowance rates. Specifically, the ALJs decided to allow 64 percent of the cases, whereas the SSA’s central office quality assurance reviewers, comprising medical consultants and disability examiners, decided that only 13 percent of cases should be allowed. The study identified several possible causes of the disparity, including inconsistency in the standards and procedures, interpretation of the standards, and weight given to the evidence. The study also found that disability decisions are complex and necessarily involve some degree of subjectivity by adjudicators. To help address concerns raised by this and other studies, SSA began its process unification efforts to ensure that both levels more consistently interpreted and applied SSA’s policy guidance. SSA’s plans for its process unification initiative were part of SSA’s larger effort to redesign its disability claims process and were modified over time. SSA’s process unification plans included six major efforts, as described in table 1. In 1997, we reported on the possible reasons for the inconsistency of decisions between the initial and hearings levels. Our report found that differences in state DDSs’ and ALJs’ views on the claimants’ functional abilities was a key factor in explaining why ALJs allowed benefits on appealed cases. We also reported that poorly documented state DDS evaluations of the claims were of limited use to ALJs and SSA quality reviews did not focus on identifying inconsistency in decisions. To support SSA’s process unification efforts, the report recommended that SSA, using available systems and data collected so far, move quickly ahead to implement its quality assurance initiative to provide consistent feedback to DDS and ALJ adjudicators as soon as possible. 
In addition, we recommended that SSA expand its effort to return cases to a DDS for review when new evidence is introduced on appeal. Last, we recommended that SSA set goals for measuring the effectiveness of process unification in reducing inconsistent decisions. More recently, the Social Security Advisory Board issued a 2001 report that identified many factors that could potentially affect the overall consistency of disability decision making between adjudication levels. Some of the factors the board suggested as potentially affecting consistency included: the fact that most claims are decided based on a paper review of case evidence, without face-to-face contact with an adjudicator until a claimant has an ALJ hearing; the involvement of attorneys and other claimant representatives at the ALJ hearing; the fact that claimants are allowed to introduce new evidence and allegations at each stage of the appeals process; differences in quality assurance procedures applied to initial- and hearings-level decisions; differences in the training given to ALJs and state examiners; and a lack of clear and unified policy guidance from SSA. Despite SSA's process unification efforts and related studies to improve the consistency of decisions, recent ALJ allowance rates—which declined after process unification began, but started increasing in 1999 to reach 61 percent in fiscal year 2003—still raise questions as to whether initial- and hearings-level decision makers are consistently applying the agency's guidance. In addition to inconsistent application of SSA's policy guidance, there are several other reasons why a large number of ALJ allowances are made. For example, some ALJ allowances should be expected because, by law, cases can remain open throughout the hearings process, allowing new evidence to be submitted that may not have been available to the state adjudicators. Such new evidence could show that the claimant's condition has worsened and prohibits work.
Also, SSA’s decision-making criteria require that a great deal of professional judgment be applied. As a result, some allowances at the hearings level could simply reflect the differing judgments of two adjudicators reviewing a case. While a claimant’s deteriorating health, changes in the characteristics of a claim over time, and the complexity of disability decisions may help to explain some of the ALJ allowances, studies have not sufficiently explained why consistently over half the cases appealed to the hearings level are allowed. Instead, studies indicate that systemic differences in the assessment of claims at both adjudication levels are contributing to the ALJ allowance rate. For example, our 1997 report noted a difference in state DDSs’ and ALJs’ views on the claimant’s functional abilities was a key factor in explaining why ALJs allowed cases on appeal. Inconsistency in decisions may create several problems. High hearings allowance rates may create the perception that the hearings level is applying SSA’s criteria less strictly than the initial level and create an incentive for claimants to appeal to an ALJ for a more favorable decision. If deserving claimants must appeal to the hearings level for benefits, this situation increases the burden on claimants, who must wait, on average, almost a year for a hearing decision and frequently incur extra costs to pay for legal representation. In addition, to the extent that the ALJ allowance rates include inappropriate allowances, SSA could be incurring unwarranted program costs. Although SSA has tried to address these problems, its inability to resolve them has contributed to our decision to include federal disability programs on our list of high-risk government programs. Renewing its effort to address long-standing and critical problems with the disability programs, SSA’s Commissioner recently announced a new proposal to improve these programs. (See app. 
I for an excerpt of the announcement that describes the newly proposed decision-making process.) In addition to proposing demonstration projects that provide work incentives and supports to help people with disabilities return to work, SSA has proposed significant changes to both the process of adjudicating disability claims and the structure and management of the agency’s quality management system to improve the timeliness, accuracy, and consistency of the disability decision-making process. The agency believes that several of these changes will help to improve consistency between DDS and ALJ decisions. For example, SSA plans to provide more centralized end-of-line quality reviews. According to SSA, the proposed quality reviews should help to hold adjudicators more accountable for their decisions and ensure that they consistently apply SSA’s policies as well as help the agency detect and amend those policy areas leading to inconsistent decisions. Table 2 provides a description of SSA’s proposed changes to improve the disability decision-making process. SSA does not plan to implement its proposed changes before it has successfully implemented its Accelerated Electronic Disability (AeDib) system. This major initiative should allow adjudication staff in states and throughout the agency, regardless of geographic location, to access case information electronically through the use of an electronic disability folder. The initiative is intended to reduce delays that result from mailing, locating, and organizing paper folders. SSA also expects this new system to provide critical management information for analyzing and reducing inconsistencies in disability decisions. SSA is implementing the new system and plans to give adjudicators time to adjust to this change before implementing its new proposal. SSA’s implementation of the new proposal will therefore be no earlier than October 2005. 
In the meantime, SSA continues to discuss the proposal with stakeholders and plans to further refine it before implementation. SSA has partially implemented its process unification initiative. Although the agency initially made improvements in its policies and training intended to improve the consistency of decisions between adjudication levels, it has not continued to actively pursue these efforts. As part of the initiative, the agency also implemented a review of ALJs’ allowance decisions to identify additional ways to improve training and policies, but no new changes were made as a result of findings from the review. Finally, the agency also began two tests of process changes to help improve the consistency of decisions, but one ongoing test with design problems is not likely to lead to any conclusive results and the other test has been abandoned. While SSA initially made progress carrying out efforts to improve policies and training to better ensure the consistency of decisions, the agency has not continued to actively pursue these efforts. SSA quickly accomplished most of its planned efforts to clarify policy guidance. In 1996, SSA issued nine process unification rulings to clarify policy areas it found to be contributing to inconsistent decisions. For example, one ruling provided all adjudicators with guidance on how to weigh and document their evaluation of the treating physician’s opinions when making a disability decision. SSA successfully went through the regulatory process several years later and published three new regulations to strengthen its process unification rulings, but was unable to agree on a fourth regulation regarding the weight to be given to the treating physician’s opinion when evaluating a claim. SSA planned to develop a single presentation of policy guidance to replace the different sources used by each level, but has since abandoned full implementation of these plans in favor of a more limited approach. 
DDS adjudicators currently follow a detailed set of policy and procedural guidelines, whereas ALJs rely directly on statutes, regulations, and rulings for guidance in making disability decisions. To help ensure that inconsistent guidance was not contributing to inconsistent DDS and ALJ decisions, SSA began issuing guidance in the same wording to all adjudicators in 1996. Although SSA had also planned to address differences in policy guidance issued before 1996 and to eventually combine existing adjudication policy documents into a single document, it ultimately decided not to take these additional steps. According to SSA, further efforts to unify the policy guidance used by both levels would be a massive undertaking and not worth the cost because the guidance issued since 1996 had already addressed important policy areas that were leading to inconsistent decisions. While some stakeholder groups representing adjudicators tended to agree with SSA’s position, the Social Security Advisory Board and other groups still believe the agency should take additional steps to provide a unified policy guide to all adjudicators. Instead of creating one policy manual for all adjudicators, SSA told us that it plans to undertake a comprehensive effort to evaluate and improve its disability policies to make them less susceptible to differing interpretations and to ensure they are up to date. A more comprehensive approach could address key weaknesses in SSA’s disability program that we previously highlighted in our performance and accountability series, and thereby help to modernize federal disability programs to better meet the needs of Americans with disabilities. Early on, SSA also provided extensive cross-training of DDS and ALJ adjudicators, although the scope of its efforts has since diminished. 
To help all adjudicators understand how to appropriately apply process unification rulings, SSA provided extensive and mandatory training in 1996 and 1997 to 15,000 disability adjudicators (including DDS examiners, physicians, ALJs, and quality assurance staff). The training was provided to adjudicators at all levels of the process in three of the most complex disability areas—assessment of symptoms, treatment of expert opinions, and assessment of claimants' remaining capacity to work (i.e., residual functional capacity). While this training was intended to be ongoing, SSA's training efforts have diminished significantly since 1997. Stakeholder groups representing DDS adjudicators told us that SSA's training does not sufficiently cover process unification issues. In addition, our review of DDS and OHA video training records revealed inconsistent participation by adjudicators. To provide ongoing training to both adjudication levels and other components involved in the claims process, SSA has used interactive video technology. Almost all the state DDS sites and about 85 percent of OHA offices have this technology. However, in reviewing participation for two recent courses, we found that, among sites with this technology, only 31 percent of DDS sites and 16 percent of OHA sites logged on for a course on the role of consultative examinations, and 18 percent of DDS sites and 4 percent of OHA sites logged on for a monthly disability hour training class. According to SSA, neither DDS nor OHA adjudicators are generally required to attend courses. In line with these findings, our recent report on the human capital challenges facing DDSs found gaps in the key knowledge and skills of their adjudicators in the same areas SSA had earlier identified as critical to making consistent decisions, and we recommended that SSA work with DDSs to close these gaps. 
Despite SSA’s early efforts to improve policy guidance and provide training, stakeholder groups representing state adjudicators told us that many states are not performing the additional development and documentation of decisions required by the process unification rulings. They also told us that the rulings have added significantly to the time, complexity, and subjectivity of the decision-making process, while insufficient resources have limited their ability to fully implement the rulings’ requirements. In addition, claimant lawsuits against three state DDSs have alleged that DDS adjudicators were not following SSA’s rulings or other decision-making guidance. In settling these lawsuits, SSA agreed to have these states fully develop and document cases. However, according to DDS stakeholder groups, SSA has not ensured that states have sufficient resources to meet ruling requirements, which they believe may lead to inconsistency in decisions among states. Furthermore, SSA’s quality assurance process does not help ensure compliance because reviewers of DDS decisions are not required to identify and return to the DDSs cases that are not fully documented in accordance with the rulings. SSA’s procedures require only that the reviewers return cases that have a deficiency that could result in an incorrect decision. As part of its initiative, the agency has also implemented a quality review of ALJ decisions, but the review has not proved useful for identifying any new changes to SSA’s policies or training that would help to address the inconsistency of decisions. This review—referred to as the ALJ Pre-Effectuation Review—involves a sequential review by SSA’s OQA and the Appeals Council of certain ALJ allowances that have not yet been finalized (i.e., the claimant has not yet been awarded benefits). In selecting allowances for review, OQA uses an error-prone profile developed from its analysis of errors detected when reviewing DDS allowances. 
SSA began testing the new review of ALJs’ decisions in 1996 and implemented it as an annual review in 1998. From fiscal years 1998 through 2002, OQA reviewed 27,148 ALJ allowances and of these, OQA found fault with about 35 percent and referred them to the Appeals Council. The Appeals Council screens the allowances for its own review and selects those in which the prior actions may not have been proper, fair, or in accordance with the law or the ALJ’s decision was not supported by substantial evidence. If the council finds fault with the ALJ’s decision, it will deny the claimant benefits or return the claim to the ALJ to have the identified problems corrected. If the council does not find fault with the ALJ’s decision, the claimant will be awarded benefits. In addition to identifying inappropriate ALJ allowances, SSA intended to use the new quality review to identify areas of inconsistency between adjudication levels and ways to improve policies and training to address those inconsistencies. Specifically, OQA identified cases where it found fault with the ALJ decision, but the Appeals Council, after screening them, did not accept them for review. OQA then forwarded these cases to a panel of staff from the various components involved in SSA’s claims process to determine whether the inconsistent assessment of these cases by OQA and the Appeals Council indicated the need to clarify policies, issue new policies, or provide training to improve the consistency of decisions. However, according to an SSA official, this review did not identify any new areas of inconsistency that required improvements to policy and training. Weaknesses in the design of the review may have contributed to SSA’s inability to identify new policy areas contributing to inconsistency. For example, rather than reviewing a random sample of all ALJ decisions, this review focused on allowances. 
Further, the review looked only at ALJ allowances that were selected using a DDS error-prone profile, i.e., a profile that is based upon cases in which quality reviewers did not agree with the DDS adjudicators’ decisions. As a result, SSA selected and reviewed nonrandom allowance decisions with case characteristics that the agency may have already suspected were associated with inconsistent decisions. In 1999, the panel was disbanded because members had other priorities needing attention. OQA told us that it continued to perform a limited review of cases viewed differently by OQA and the Appeals Council. More recently, OQA began an effort to summarize the results of its review and expected to issue a report of its findings in April 2004. As of April 2004, this report had not been issued. SSA began two tests of potential changes to the process to help improve the consistency of decisions, but neither test was successfully completed. The changes tested were (1) more fully developing and documenting decisions made at the initial level and (2) sending appealed cases that involve new medical information back to the initial level to be reevaluated. SSA wanted to test having DDSs more fully develop and document decisions because it believed that DDS decisions, especially denials, are often not well documented. SSA wanted to test whether better explanations of why benefits were denied would improve the accuracy of DDS decisions and consistency of decisions between adjudication levels. SSA first implemented a pilot of this change to explore alternatives for developing and documenting decisions. Then SSA tested this change, along with other process changes, in a larger test, called the prototype initiative. Concurrently, SSA tested other process changes, such as the elimination of a reconsideration step and a predecision DDS interview with the claimant. The prototype test had limitations for predicting the impact of documented decisions. 
For example, SSA’s decision to test several changes together left the agency without clear information on what impact fully developed decisions would have on the decision-making process without the other process changes. SSA’s test design also did not build in an ALJ feedback mechanism to provide sufficient information on the usefulness of more fully documented decisions. SSA continues to test this change along with other changes and, despite limited information on the best approach for and impact of this change, currently plans to implement more fully documented decisions as part of the Commissioner’s new proposal to improve SSA’s disability programs. SSA also began, but ultimately abandoned, a test in which appealed cases with new medical information submitted prior to the hearing were to be sent back to the initial level so that the evidence could be evaluated by medical consultants residing at the DDSs. Since medical expertise resides in the DDS and not at the hearings level, SSA decided to test whether “remanding,” or sending cases to the DDS for evaluation, might result in a more consistent review of medical evidence. SSA believed that this change, in turn, could help improve the consistency of decisions because the new medical information might be contributing to ALJ allowances. However, the change also had the potential to increase the time claimants with remanded claims would have to wait for final decisions because claims that were not allowed by the DDSs had to be returned to OHA for hearings. SSA began remanding cases in July 1997, with a 1-year goal of remanding 100,000 cases, but after 10 months, it had remanded fewer than 9,000. In implementing this test, SSA encountered several difficulties. For example, it had difficulty identifying the claims to be remanded and ensuring the ALJs, who had authority over the claims, would remand the claims to the DDSs. 
The ALJs’ resistance to remanding claims to the DDSs may be due in part to concerns that remanding would not lead to many allowances by the DDSs and would result in many claims being returned to OHA, thereby increasing the time many claimants would have to wait for a final decision from OHA. Realizing that it would not be able to reach its remanding goal, the agency discontinued the test. SSA’s assessments have not provided the agency with a clear understanding of the extent and causes of possible inconsistencies in decisions between adjudication levels. The two measures SSA uses to monitor changes in the extent of inconsistency of decisions have weaknesses and therefore do not provide a true picture of the changes in consistency. In addition, SSA has not sufficiently assessed the causes of possible inconsistency. The agency conducted an analysis in 1994 that identified some potential areas of inconsistency. However, although SSA continues to collect information that would support this analysis, it has not repeated this initial effort, nor has it expanded on it by employing more sophisticated assessment techniques. SSA has made some efforts to monitor changes in the extent of inconsistency between the initial and hearings levels, including tracking trends in allowance rates at different levels and conducting special reviews of ALJ decisions. Together, according to SSA, these measures and assessments suggest that the consistency between levels has improved since the agency began implementing its process unification initiative. However, because of methodological weaknesses, these measures provide, at best, a partial picture of trends in the consistency of decisions between adjudication levels. SSA tracks trends in the proportion of all allowances decided at each level to assess the consistency of decisions between levels. 
The agency collects information on the number of allowances granted to claimants at each level of the process, tracks the proportion of claims allowed at the initial level relative to the hearings level, and looks at trends in these proportions over a period of several years. According to data from SSA, the proportion of overall allowances that occurred at the initial level has increased since process unification was implemented. As shown in figure 2, in fiscal year 1996, 72 percent of all allowances were granted at the initial level. This proportion increased in most subsequent years, and by fiscal year 2003, 77 percent of all allowances were granted at the initial level. Officials from OQA, the office responsible for reviewing, evaluating, and assessing the integrity and quality of the administration of SSA’s programs, view the relative shift toward earlier allowances as an indicator that consistency between adjudication levels has improved, and they believe that process unification efforts have contributed to these results. However, SSA’s measure of tracking yearly changes in the proportion of allowances at each level is a simplistic and inconclusive indicator of trends in the consistency of decisions because it does not control for the multitude of factors that can affect allowance rates at either adjudication level in any given year and over time. For example, SSA uses “snapshot” data in looking at the proportion of allowances granted at each level, meaning that it looks at the number of claimants and allowances at each level during a given year, rather than following a 1-year cohort of initial claimants through the entire process and capturing the proportion of allowances for that cohort decided at each level. 
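To make the snapshot-versus-cohort distinction concrete, the sketch below uses invented numbers rather than SSA data (only the roughly 60 percent ALJ allowance rate for appealed denials cited elsewhere in this report informed one input). It shows how the share of allowances granted at the initial level can look quite different depending on whether one counts a single year's decisions at each level or follows one cohort of filers through the entire process.

```python
# Hypothetical illustration (invented numbers, not SSA data) of why a
# "snapshot" measure can diverge from a cohort-based one.

def snapshot_share(initial_allowances, hearings_allowances):
    """Share of one year's allowances granted at the initial level,
    counting whatever decisions each level happened to issue that year."""
    return initial_allowances / (initial_allowances + hearings_allowances)

def cohort_share(filers, initial_allow_rate, appeal_rate, alj_allow_rate):
    """Share of a single filing cohort's eventual allowances decided at
    the initial level, following the cohort through the whole process."""
    initial_allows = filers * initial_allow_rate
    denied = filers - initial_allows
    alj_allows = denied * appeal_rate * alj_allow_rate
    return initial_allows / (initial_allows + alj_allows)

# Snapshot: the hearings level may be deciding appeals filed in earlier
# years, so the two counts reflect different claimant pools.
snapshot = snapshot_share(initial_allowances=720_000,
                          hearings_allowances=280_000)

# Cohort: hypothetical rates; the 0.60 ALJ allowance rate mirrors the
# roughly 60 percent figure for appealed denials cited in this report.
cohort = cohort_share(filers=2_000_000, initial_allow_rate=0.36,
                      appeal_rate=0.70, alj_allow_rate=0.60)

print(f"snapshot initial-level share: {snapshot:.2f}")  # 0.72
print(f"cohort initial-level share:   {cohort:.2f}")    # 0.57
```

Under these invented inputs the snapshot suggests that 72 percent of allowances occur at the initial level, while following the cohort yields only 57 percent, illustrating why the two measures are not interchangeable.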
Because SSA uses data that illustrate allowance rates at a given moment in time, it captures a different pool of claimants in the process at each level, and the resulting allowance rates are subject to a different set of demographic and case characteristics. Over time, the pool of claimants may change because of factors such as a downturn in the economy, which can cause more people with less severe impairments to claim benefits or appeal initial denial decisions. In addition, snapshot data may be significantly affected by fluctuations in productivity at either adjudication level caused by process changes that are unrelated to process unification and that affect only one level. SSA has collected other data to further assess trends in the consistency of decisions. Since 1993, the agency has conducted a biennial case review as part of its Disability Hearing Quality Review process. This review consists of medical consultants and disability examiners in SSA’s central office evaluating a sample of ALJs’ decisions plus supporting documentation to determine whether the ALJ has adequately supported his or her decision. In evaluating the ALJ decisions, these medical consultants and disability examiners use the same standards as those used by initial-level adjudicators to adjudicate claims, which are from the official SSA program policy and operations guidance found in POMS. To some degree, therefore, the medical examiners and disability reviewers serve as a proxy for initial-level adjudicators, and their decisions are representative of how initial-level examiners should be deciding claims. While unpublished results from the biennial case reviews indicate an increase in supportable ALJ allowances, such findings focus on the ALJ level and therefore provide only a partial picture of trends in consistency. 
The reviews indicated that medical consultants and disability examiners have found that supportable ALJ allowances increased from 36 percent in fiscal year 1993-94 to 57 percent in fiscal year 1999-2000. OQA officials told us that this increase suggests an improvement in consistency between adjudication levels because it indicates that disability examiners using initial-level standards and ALJs increasingly agree on how like cases should be decided. However, SSA’s assessment provides only a partial picture because it does not reflect trend information on the extent to which ALJs have found DDS decisions to be supportable, to ensure that both levels are making more consistent decisions. Although the 1994 report of findings from the initial biennial case review included the results of a special probe in which ALJs reviewed 165 DDS reconsideration denial decisions, the sample was not representative, and therefore results could not serve as a baseline for developing trend information. In 2003, SSA began another probe, in which ALJs reviewed 400 DDS reconsideration denial determinations, but the agency does not plan to release its findings until summer 2004. Although SSA has limited information on how ALJs view DDS decisions, other information collected by the agency suggests that consistency of decision making at the initial level might not be improving. For example, OQA reviewers routinely assess the accuracy and supportability of DDS decisions. A recent SSA study of these data shows that the accuracy of DDS denial decisions—those decisions most likely to be appealed to the hearings level—has declined by 4 percentage points over a 1-year period. Another review of DDS decisions by OQA reviewers also suggests a lack of improvement at the initial level. 
Specifically, the proportion of appealed DDS reconsideration denials that quality reviewers found to be supported declined from 71 percent in fiscal year 1993-94 to 68 percent in fiscal year 1999-2000. Despite some efforts to assess inconsistency in decisions, shortcomings in SSA’s analyses also limit its ability to identify areas and causes of possible inconsistency. Most notably, over the last 10 years, SSA has not updated its prior analyses of information from its initial biennial case review that helped identify problem areas. In addition, SSA has not improved on its case review and analysis by ensuring that reviewers assess all relevant case evidence used to make decisions, or performed more sophisticated analysis to identify the areas and causes of inconsistency in decisions. Other efforts—including the review of ALJ allowances and a probe of DDS reconsideration denials—have yet to yield useful information. In 1994, for its initial biennial case review report, the agency took its first step in identifying areas of possible inconsistency by identifying two characteristics about the claimants and their cases over which initial-level reviewers tended to disagree with ALJs. Specifically, the 1994 report concluded that teams of reviewing medical consultants and disability examiners sometimes viewed cases involving mental impairments differently than the reviewing ALJs. In addition, these two sets of reviewers tended to have different views on the severity of claimants’ impairments and their resulting capacity to work. According to the official responsible for overseeing the review, the findings in this initial report provided important support for SSA’s process unification efforts as well as the agency’s efforts to redesign the disability claims process. SSA continues to conduct the biennial case reviews; however, the agency has not continued to analyze and identify areas that are viewed differently by different adjudication levels. 
Specifically, SSA no longer identifies the particular case characteristics over which reviewers from the two levels tend to disagree. As a result, SSA does not know whether previously identified problem areas are still present. Moreover, SSA no longer publishes any information from the medical consultant and disability examiner biennial case reviews, even though it has performed some limited analysis of the supportability of decisions made by adjudicators. By not continuing to publish its analysis and findings, the agency makes it difficult to ensure the reliability of its methods and results, and leaves stakeholders outside the agency, including disability groups, without a means for understanding SSA’s assessment efforts and progress in improving the consistency of decisions. The SSA office conducting the study has told us that, because of downsizing and competing priorities, it has no current plans to further analyze and publish these data. Further, in its ongoing biennial case reviews, SSA does not make full use of available case information that would be useful in identifying areas and causes of inconsistency. Specifically, medical consultants and disability reviewers do not listen to tapes of the hearings and therefore do not review the entire case as presented to the original ALJ. Although reviewing medical consultants and disability examiners read the ALJs’ explanations for their original decisions, which should include the most important factors behind the ALJs’ decisions, the reviewers do not evaluate the oral evidence independently. An SSA official with whom we spoke indicated that some evidence entered by witnesses at the hearing might not be accompanied by other hard copy sources of the same information. Therefore, reviewers would not consider information potentially relevant to the ALJ’s decision that could be used to identify areas and causes of inconsistency. 
SSA also does not make full use of the information it collects because it has not employed analytical tools that would improve its ability to identify areas and causes of inconsistency. For example, SSA’s biennial case reviews provide a rich dataset that lends itself to regression analysis to identify areas and possible causes of inconsistency between levels. Regression analysis would allow the agency to better pinpoint any significant case characteristics affecting decisions and to more clearly identify the underlying causes of inconsistency. Specifically, among the data collected in this review are such variables as the types of impairments the claimant has, the types of relevant medical evidence, and additional impairments presented at the hearing. Multivariate analysis, such as a multiple regression model, could allow SSA to assess how these and many other factors, relative to one another, contribute to whether a case results in a similar outcome at both levels. However, SSA has not employed this more sophisticated multivariate technique, citing resource constraints and competing priorities. We recognize the methodological complexities of analyzing disability decisions, and we previously recommended that SSA establish an advisory panel of external experts from a range of disciplines to provide leadership, oversight, and technical assistance to the agency. Otherwise, in forgoing such analysis, the agency will continue to miss an opportunity to better pinpoint areas and some possible causes of inconsistency in decisions between the two adjudication levels, and to lay the foundation for further investigation. Another tool SSA has not sufficiently employed for identifying areas and causes of inconsistency is in-depth case studies involving both levels of adjudication. 
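As a rough illustration of the kind of multivariate analysis described above, the sketch below fits a logistic regression (by plain stochastic gradient descent, to avoid external dependencies) relating two invented binary case characteristics to whether the two adjudication levels disagreed on a case. The feature names, the generative rule, and all numbers are hypothetical; an actual analysis would use the biennial case review variables the report mentions, such as impairment type and medical evidence available.

```python
import math
import random

def logistic_fit(X, y, lr=0.1, epochs=200):
    """Fit a logistic regression by stochastic gradient descent.

    Returns weights [intercept, w1, w2, ...]; positive weights mark
    characteristics associated with higher odds of the outcome.
    """
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(levels disagree)
            err = p - yi                      # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

random.seed(0)
X, y = [], []
for _ in range(500):
    # Two hypothetical binary case characteristics.
    mental = 1.0 if random.random() < 0.4 else 0.0
    new_evidence = 1.0 if random.random() < 0.3 else 0.0
    # Invented generative rule: either factor raises P(levels disagree).
    p_disagree = 0.15 + 0.35 * mental + 0.25 * new_evidence
    X.append([mental, new_evidence])
    y.append(1.0 if random.random() < p_disagree else 0.0)

w = logistic_fit(X, y)
print(f"intercept={w[0]:.2f}, mental={w[1]:.2f}, new_evidence={w[2]:.2f}")
```

In this synthetic run both fitted coefficients come out positive, flagging the invented factors as associated with cross-level disagreement; in practice, the characteristics with the largest coefficients would be natural candidates for targeted in-depth case studies.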
Case studies, in which different adjudicators review the same test case, can be a means for unearthing causes for inconsistency by getting adjudicators from both levels to acknowledge and address discrepancies in the ways they view cases. SSA has performed case studies in the past to ascertain differences in policy interpretation between DDS examiners and quality reviewers. However, SSA does not routinely have both DDS examiners and ALJs perform in-depth review of the same sample of cases, despite this method’s potential for helping identify causes of inconsistency between the two adjudication levels. OQA officials told us that case studies are a very resource-intensive tool because they need a sufficient number of cases from which to generalize. Therefore, the agency is reluctant to use this approach to help it understand the causes of inconsistency between adjudication levels. However, using multivariate analyses of the biennial case review data could help the agency to more effectively target its in-depth case studies on those areas found to be leading to inconsistent decisions and thereby increase its success at identifying the causes of inconsistency. SSA conducts other analyses of inconsistency between levels, but to date these efforts have yielded limited information concerning areas and possible causes of inconsistency. For example, as part of SSA’s ALJ Pre-Effectuation Review, two different levels of reviewers have evaluated thousands of cases. However, limitations in the review methodology, such as not using a random sample of ALJ decisions, do not allow the agency to use this review to identify the leading causes of inconsistency. SSA recently began an evaluation of this effort and plans to publish its findings and recommendations in April 2004. Another analysis currently under way, a special 400-case review, might help identify areas of inconsistency at the initial level, but it has yet to be completed. 
Begun in 2003, this review by ALJs of DDS reconsideration denial determinations is expressly aimed at assessing inconsistency between adjudication levels. SSA expects to gain some understanding of why about 60 percent of cases denied by the initial level and appealed to the hearings level are allowed. The agency plans to publish its findings in summer 2004. Some changes included in SSA’s new proposal to overhaul its disability claims process may improve the consistency of DDS and ALJ decisions, but challenges may hinder the implementation of the proposal. The new proposal includes several changes to the disability claims process that the agency and stakeholder groups representing adjudicators and claimant representatives believe offer promise for improving the consistency of DDS and ALJ decisions. However, past difficulties in improving the process, as well as stakeholder concerns about limited resources and other obstacles, indicate that some difficulties may arise in the development and implementation of SSA’s new proposal. SSA told us that several aspects of the new proposal may improve the consistency of decisions, and although opinions varied among stakeholder groups, most thought the following four proposed changes have the potential to improve the consistency of decisions between adjudication levels: (1) requiring state adjudicators to more fully develop and document their decisions, (2) centralizing the agency’s approach to quality control, (3) providing both adjudication levels with equal access to more centralized medical expertise, and (4) requiring ALJs to address agency reports that either recommend denying the claim or outline the evidence needed to fully support the claim. Representatives from most stakeholder groups with whom we spoke told us that having state adjudicators more fully develop and document their decisions may help to improve the consistency of DDS and ALJ decisions. 
Specifically, stakeholders said that more developed decisions may provide ALJs with a better understanding of the DDS decision and enable them to more fully consider this information when evaluating a case. According to the agency and stakeholders, this change may contribute to a more consistent interpretation and application of SSA’s decision-making criteria. They also mentioned that well-developed decisions by DDS examiners could assist SSA in holding adjudicators accountable for case development and decisions, such as enabling quality reviewers to more effectively assess the appropriateness of the DDSs’ decisions. Unlike SSA’s earlier attempt at more fully developing decisions as part of process unification, SSA plans to incorporate a reviewing official into the process whose assessment of all appealed DDS decisions can provide feedback on the extent to which cases are being fully developed. In addition, the agency and many stakeholders told us that they believe centralizing the agency’s quality control system may help resolve some problems contributing to inconsistent decisions between the two levels. For example, they believed that it may help ensure a more consistent review of cases across the country and between adjudication levels. According to both stakeholders and other experts within and outside of SSA (including SSA’s Deputy Commissioner of Disability and Income Security and a consulting group that reviewed SSA’s quality assurance system), the current quality control and case review process encourages adjudicators at the initial level to inappropriately deny cases, while encouraging adjudicators at the hearings level to inappropriately allow cases. Specifically, by overemphasizing a review of DDS allowances to help control the cost of benefits, the agency has unintentionally encouraged DDS examiners to deny cases. 
Conversely, SSA’s review of ALJ decisions consists mostly of SSA’s Appeals Council reviewing cases denied by ALJs, thereby providing an incentive for ALJs to allow cases. By centralizing the quality control system and making other changes to the process, SSA believes that it can remove the current incentives that contribute to inconsistency. The third proposed change that the agency and most stakeholder groups believe may improve consistency is SSA’s plan to provide both adjudication levels with equal access to more centralized medical experts, organized by clinical specialty. Although located in the regions, these experts should be able to review cases from across the country with the successful completion of SSA’s AeDib initiative—an electronic folder initiative for exchanging case information currently being implemented by SSA. By making experts in a range of specialties available to assist both levels of adjudicators in their decision making, SSA and stakeholders believe that adjudicators could more consistently apply SSA’s decision-making criteria, in addition to acquiring better medical evidence. Finally, the agency and most stakeholder groups told us that the requirement to have an ALJ’s decision address the recommendations from a reviewing official’s report to either deny or more fully develop the claim may increase consistency between levels. Under the new proposal, SSA plans to introduce a reviewing official into the process to evaluate all appealed DDS claims. The official will allow claims that meet SSA’s definition of disability and, for the remaining claims, will develop a report that either (1) contains reasons for denying the claim or (2) outlines the evidence needed to fully support the claim. The ALJ’s decision must address issues raised in the reviewing official’s report. 
Stakeholders believed that this change could, as intended by SSA, hold adjudicators more accountable for their decisions and provide adjudicators with feedback on the reasons decisions tend to differ between levels to improve the quality and consistency of their decisions. Although there was less agreement among stakeholder groups on the potential effect that other aspects of the new proposal may have on the consistency of decisions, some groups thought that other changes could result in improved consistency between DDS and ALJ decisions. For example, the Social Security Advisory Board and two groups representing the DDSs thought that the proposed in-line quality control, if implemented effectively at all levels, could have a positive impact on consistency by ensuring that adjudicators adhere to the rulings and regulations throughout the decision-making process. One stakeholder group added that in-line quality control could also help the agency identify problem areas, including areas in which policy is applied inconsistently or where more training is needed. According to stakeholder groups—and based on SSA’s prior experience with making significant changes to its claims process—insufficient resources and other obstacles may prove to be major challenges for the agency in developing and implementing aspects of its new proposal. For example, experience with the process unification initiative has shown that limited state resources have hindered the agency’s ability to have state adjudicators fully document decisions. To address this issue, SSA plans to reduce the states’ workloads by decreasing the number of claims to be decided by the DDSs. Specifically, SSA expects that establishing regional expert review units to make quick decisions for claimants who are obviously disabled will substantially decrease the states’ workloads. 
However, SSA has not developed and provided stakeholders with estimates of the administrative cost for more fully documenting decisions and other planned changes, and stakeholder groups were not convinced that the reduction in claims was sufficient to offset resources needed to fully document their decisions. Although the agency has had some recent success in increasing its 2004 administrative budget, and is confident that it will be successful in acquiring the resources it needs to implement the proposal, the significance of stakeholders’ concerns about funding cannot be assessed until SSA fully develops its proposal and associated cost estimates. Experience has also shown that another proposed change, developing a centralized quality control system for both adjudication levels, could be a major challenge for the agency. In 1994, SSA began efforts to create a unified and comprehensive quality control system as part of its redesign efforts, but made little progress, in part because of considerable disagreement among internal and external stakeholders on how to accomplish this difficult objective. To get external assistance in developing an effective quality assurance system, SSA contracted with an independent consulting firm to assess SSA’s quality assurance practices used in the disability claims process. In 2001, concluding that SSA could achieve its quality objectives for the disability program only by adopting a broad, modern view of quality management, the consulting firm recommended SSA abandon its current system and design a new quality management system focused on building quality into the process. The agency agreed that it was appropriate to transform the existing quality assurance system and established an executive work group to decide a future course of action. The agency is working with another consulting group to further develop the changes recently proposed by the Commissioner. 
However, after 10 years of efforts to develop a more unified quality review system, SSA has not yet formulated such changes beyond the brief and general descriptions provided in the Commissioner’s new proposal. Other obstacles also add to the complexity and difficulty of implementing the proposal. For example, stakeholder groups have raised concerns about SSA’s ability to successfully implement its proposed change to provide equal access for all adjudicators to more centralized medical expertise by removing medical expertise from the state DDSs and providing it in regional offices instead. Stakeholder groups were concerned that SSA would not be able to attract and retain sufficient medical experts to meet the agency’s needs. They told us that states are currently experiencing problems attracting medical experts because SSA’s compensation rates are too low. State adjudicators, who currently work with medical experts directly at DDS offices, were also concerned that removing these experts and placing them in SSA regional offices would impair the states’ effectiveness and efficiency. With experts placed in regional offices, state disability examiners would no longer have on-site access to the experts who help facilitate the states’ adjudication of claims and provide on-the-job training and mentoring to DDS examiners. Stakeholders have also raised questions about SSA’s ability to ensure that ALJs’ decisions fully respond to the reviewing officials’ reports, and about the ultimate effectiveness of this change. Stakeholder groups representing ALJs and claimant representatives believed that the requirements may impinge on an ALJ’s legal responsibility to ensure a claimant receives a fair hearing and an independent decision. 
Other groups have raised concerns about whether SSA can ensure that ALJs will adequately address recommendations in the reviewing officials’ reports, so that this requirement actually leads to more consistent decisions. Although these concerns have been raised, the Commissioner has clearly stated that the intent of the proposal is to improve service to claimants, including providing fair and accurate decisions, and that changes will not impinge on the independence of ALJs. In addition, several stakeholder groups told us that staffing the new reviewing official positions with attorneys, as SSA intends to do, would be expensive. To the extent that SSA has difficulty filling these positions, the agency could create a slowdown or bottleneck in the process that could increase the time claimants must wait for a decision. Furthermore, according to one stakeholder group, SSA’s new quality assurance process will need to ensure that this new position does not create another source of inconsistent interpretation and application of SSA’s decision-making criteria. Several groups representing hearings-level adjudicators and claimant representatives were also concerned about other aspects of the Commissioner’s new proposal, such as the proposed elimination of the Appeals Council and the claimants’ loss of the right to appeal an ALJ decision to the council. The Appeals Council currently reviews about 100,000 appealed ALJ decisions annually. For these claims, the council provides an additional appellate step for addressing claimants’ objections to the ALJs’ decisions, reviewing new medical information on the claims, and reducing the number of claims appealed directly to the federal courts. According to one stakeholder group, the council also performs other important functions, such as reviewing claims for surviving children or spouses of workers who were insured under the disability insurance and retirement program. 
The council also reviews cases remanded from federal courts. This stakeholder group also told us that as SSA refines its proposal it will need to articulate how all of the council’s functions will be handled under the new process. Adding to uncertainties about the proposal’s success is its dependence on the successful development and implementation of the AeDib system—a highly complex and as yet unproven system using electronic folders to share information with all entities involved in disability determinations. SSA does not plan to implement its newly proposed changes before it has completed a national rollout of its electronic disability system, scheduled to be completed by October 2005. The new electronic disability system represents an important step toward a paperless and more efficient sharing of information by multiple partners involved in the disability claims process, including SSA and state officials, as well as physicians and other members of the medical community who provide needed medical evidence. SSA also expects this new system to provide critical management information for analyzing and reducing inconsistencies in disability decisions. As we previously reported, SSA has made progress developing the new system. However, its approach involves risks that could jeopardize the agency’s successful transition to an electronic disability claims process. For example, SSA recently began a national rollout of the electronic disability system without fully evaluating pilot test results or ensuring the resolution of all critical problems. Skipping such important steps in development and implementation leaves the new system vulnerable to problems in its performance and reliability. In addition, problems with implementation of this system could delay the implementation of SSA’s new proposal. SSA recognizes that transforming its massive and complex disability programs and achieving the benefits envisioned by the Commissioner will be a challenging undertaking. 
The agency is refining its proposal and, as part of this process, is actively seeking input from stakeholder groups. The Commissioner and her staff have met directly with stakeholder groups to understand and begin to address their concerns. As the agency refines its proposal, the significance of both stakeholder concerns and previous problems SSA has experienced improving its programs should become clearer. When SSA’s Commissioner announced her new proposal to overhaul the disability programs, the agency acknowledged the importance of making similar decisions on similar cases and making the right decision as early in the process as possible. SSA has good cause to focus on the consistency of decisions between adjudication levels. Incorrect denials at the initial level that are appealed increase both the time claimants must wait for a decision and the cost of deciding cases. Incorrect denials that are not appealed may leave needy individuals without a financial or medical safety net. Conversely, incorrect allowances at any adjudication level could substantially increase the cost of providing disability benefits. While the agency has made some effort to assess the inconsistency in decisions between levels, its efforts have not provided the agency with a clear understanding of the extent and leading causes of possible inconsistencies in the interpretation and application of disability guidance. For example, SSA’s assessment of ALJ error-prone allowances has not proven to be effective at identifying new areas and causes of inconsistency. SSA also has not updated its more effective approach of analyzing its Disability Hearings Quality Review data to identify problem areas and help improve its understanding of the factors that may be contributing to inconsistency. 
Further, SSA’s analysis lacked sophisticated statistical techniques and in-depth analysis of cases by adjudicators at both levels, which together would have allowed SSA to better identify and address the areas and leading causes of inconsistency. Moreover, by not having examiners and medical consultants perform a complete review of all relevant information before an ALJ, SSA has limited its ability to understand the areas and causes of possible inconsistency. Without better information on the areas and causes of possible inconsistency, the agency cannot ensure that the Commissioner’s new proposal will help to resolve this complex and long-standing concern. By taking immediate actions to improve its understanding of the leading causes of possible inconsistency in decisions, the agency will have information needed to evaluate and possibly refine its new proposal, including its plans to build an effective quality assurance system that can both detect and prevent inconsistencies in decisions. This information will help the agency to target its limited resources and take decisive steps to build a claims process that provides claimants with the accurate, consistent, and timely decisions they deserve, as envisioned in the Commissioner’s proposal. To move successfully forward with agency efforts to make more consistent decisions, including efforts incorporated in the Commissioner’s proposal for an improved disability claims process and quality assurance system, we recommend that SSA quickly expand its assessment of the areas and causes of inconsistency in decisions between adjudication levels. In doing so, SSA should consider making near-term and cost-effective enhancements to its current approach for assessing the consistency of decisions, including: 1. Reestablish ongoing analyses of case characteristics as part of its biennial case review, in line with efforts undertaken for the review report published in 1994. 2. 
Perform more sophisticated multivariate analysis on the biennial case review data in order to pinpoint the most significant case characteristics influencing allowance decisions and to distinguish factors that might be contributing either appropriately or inappropriately to allowance decisions. 3. Expand the biennial case review by requiring disability examiners and medical consultants to review the hearing tapes to ensure that reviewers have the complete case before them (including the types and sources of testimonial evidence provided during the hearings) when evaluating the ALJs’ decisions. 4. Have adjudicators and reviewers from each level study cases in depth to help pinpoint the causes of inconsistency, once potential areas of inconsistency between levels are identified. 5. Publish the methods and findings of all analyses, to keep internal and external stakeholders aware of the agency’s efforts to assess consistency and demonstrate improvement over time. 6. Use the information from these improved analyses to develop a more focused and effective strategy for ensuring uniform application of SSA’s guidance and to improve the consistency of decisions. To accomplish this, SSA should clarify guidance for making disability decisions and develop mandatory training for adjudicators on issues identified as contributing to inconsistency. We provided a draft of this report to SSA for comment. SSA expressed several reservations about the recommendations, findings, and conclusions of our report. Primarily, SSA took issue with: (1) our characterization of the agency’s progress over the past several years in analyzing and reducing the inconsistency of decisions, (2) our recommendation that the agency incorporate multivariate analysis into its assessments, and (3) our finding that the agency has not acted on the results of its reviews of decisions. 
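Our second recommendation above calls for multivariate analysis of biennial case review data to pinpoint which case characteristics most influence allowance decisions. A minimal sketch of what one such analysis could look like, a logistic regression fit by gradient descent, is shown below; the case characteristics, data, and decision rule are invented purely for illustration and are not SSA’s actual review variables.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Fit a multivariate logistic regression by batch gradient descent.

    Returns [intercept, w1, w2, ...]; the sign and magnitude of each
    weight indicate how strongly a case characteristic pushes a
    decision toward allowance (y = 1) or denial (y = 0).
    """
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted allowance probability
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for g, wj in zip(grad, w)]
    return w

# Hypothetical case characteristics (invented for illustration):
#   x1 = documented functional-limitation score, x2 = scaled claimant age.
# In this synthetic data, allowance depends positively on x1, negatively on x2.
X = [[(i % 10) / 10.0, (i % 7) / 7.0] for i in range(140)]
y = [1 if 2 * x1 - x2 > 0.4 else 0 for x1, x2 in X]

w = fit_logistic(X, y)
# After fitting, w[1] is positive and w[2] negative, recovering the
# direction of the data-generating rule.
```

Fitting several characteristics at once, rather than tabulating one variable at a time, is what lets this kind of analysis distinguish factors that contribute appropriately to allowance decisions from those that do not.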
SSA indicated that it would reevaluate our recommendations as the design of its Commissioner’s new approach to disability decision making evolves. However, the agency did agree to pilot one recommendation—that quality reviewers assess hearing tapes when evaluating the ALJs’ decisions—as part of a quality review. One of SSA’s main concerns was that our report did not fully discuss the progress SSA had achieved in analyzing and reducing the inconsistency in decision making between adjudication levels. For example, SSA commented that our report dismissed the 21-percentage-point increase in the quality reviewers’ support rate of ALJ decisions, as measured by SSA’s biennial case reviews over the last 10 years. SSA also pointed to findings from its ALJ peer reviews as additional evidence that the quality and consistency of SSA’s decisions had improved. In addition, SSA asserted that its comparison of the relative proportion of allowances at the DDS and ALJ levels, along with high accuracy rates, indicated that adjudicators were making the right decisions sooner in the process—a goal of both process unification and the Commissioner’s new disability approach. Although our report incorporates results from the analyses cited by SSA, our conclusion about the improvement in consistency between levels is not as optimistic as SSA’s because of weaknesses in SSA’s assessments. As we reported, SSA’s analysis of the quality reviewers’ assessment of ALJ cases has been limited for 10 years to calculating ALJ support rates. SSA has not used available data to determine the potential areas of inconsistency between levels or the extent to which changes in the ALJ support rate are related to improvements in the consistency of decisions between adjudication levels. SSA’s assessment also lacks a reliable method for determining whether DDS decisions are more consistent with ALJ decisions, for example, by having ALJs regularly review a statistical sample of DDS decisions. 
Lastly, as we pointed out, changes in the proportion of overall allowances made by the DDS and ALJ levels cannot serve as a reliable indicator for measuring the consistency of decisions between levels, because many factors can affect these proportions, such as significant fluctuations in the number of decisions made at each adjudication level. SSA also expressed its reservations about the benefits of multivariate analysis in its evaluation of decision making. SSA asserted that its analyses over the past 10 years have provided the agency with a solid understanding of how certain variables influence disability decision making and that the multivariate analyses we recommended would not identify the causes and effects of inconsistent decision making at different levels of this complex process. We agree with SSA that the disability decision-making process is complex and that multivariate analysis alone cannot establish all the causes and effects of inconsistent decision making. However, because multivariate analysis takes into account the influence of a number of relevant variables for each decision, this analytical technique can provide a more accurate understanding of areas and causes of inconsistency in decisions than methods previously employed by SSA. Such analyses, followed by in-depth case studies by adjudicators at both levels, which we also recommended, would bring SSA closer to understanding and resolving the inconsistency of decisions between adjudication levels. Therefore, we continue to believe that by performing the analyses we recommend, the agency will have a better understanding of the extent and causes of inconsistency, and that SSA’s Commissioner should quickly implement our recommendations to ensure that her new approach effectively addresses the consistency of decisions between adjudication levels. Finally, SSA disagreed with our finding that it has not acted on the results of its reviews of decisions. 
SSA noted that it has made changes to address training needs that have been identified by its reviews. Specifically, SSA indicated that it has provided a series of interactive video training (IVT) sessions focusing on problematic areas noted in the ALJ peer review reports. We acknowledge that SSA has conducted ALJ peer reviews and used findings from those reviews to develop and provide training to ALJs. However, we did not include these findings in our report, because our objectives were limited to reporting efforts undertaken by SSA to assess or improve the consistency of decisions between adjudication levels or to implement its process unification initiative. SSA’s ALJ peer review is conducted to identify problems with the quality of the ALJ hearing process and decisions, not to identify inconsistency of decisions between levels. In contrast, our report included information on SSA’s ALJ pre-effectuation review, because it was part of SSA’s process unification initiative. According to information provided to us by SSA during our audit, although this review was intended to help identify policy and training areas that were associated with inconsistent decisions between adjudication levels, it was not effective at identifying any new areas to be pursued by the agency. This finding, along with those provided throughout the report, supports our recommendations to SSA that the agency perform additional analysis to determine the causes of potential inconsistency between adjudication levels and to clarify guidance and provide mandatory training to address any identified causes. In addition, SSA provided several other general and technical comments about the draft report. These additional comments, as well as our response to them, are provided in appendix II. Copies of this report are being sent to the Commissioner of SSA, appropriate congressional committees, and other interested parties. 
The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III. In designing my approach to improve the overall disability determination process, I was guided by three questions the President posed during our first meeting to discuss the disability programs. Why does it take so long to make a disability decision? Why can’t people who are obviously disabled get a decision immediately? Why would anyone want to go back to work after going through such a long process to receive benefits? I realized that designing an approach to fully address the central and important issues raised by the President required a focus on two overarching operational goals: (1) to make the right decision as early in the process as possible; and (2) to foster return to work at all stages of the process. I also decided to focus on improvements that could be effectuated by regulation and to ensure that no SSA employee would be adversely affected by my approach. My reference to SSA employees includes State Disability Determination Service employees and Administrative Law Judges (ALJs). As I developed my approach for improvement, I met with and talked to many people—SSA employees and other interested organizations, individually and in small and large groups—to listen to their concerns about the current process at both the initial and appeals levels and their recommendations for improvement. I became convinced that improvements must be looked at from a system-wide perspective and, to be successful, perspectives from all parts of the system must be considered. I believe an open and collaborative process is critically important to the development of disability process improvements. 
To that end, members of my staff and I visited our regional offices, field offices, hearing offices, State Disability Determination Services, and private disability insurers to identify and discuss possible improvements to the current process. Finally, a number of organizations provided written recommendations for changing the disability process. Most recently, the Social Security Advisory Board issued a report prepared by outside experts making recommendations for process change. My approach for changing the disability process was developed after a careful review of these discussions and written recommendations. As we move ahead, I look forward to working within the Administration and with Congress, as well as interested organizations and advocacy groups. I would now like to highlight some of the major and recurring recommendations made by these various parties. The need for additional resources to eliminate the backlog and reduce the lengthy processing time was a common theme. This important issue is being addressed through my Service Delivery Plan, starting with the President’s FY 2004 budget submission which is currently before Congress. Another important and often heard concern was the necessity of improving the quality of the administrative record. DDSs expressed concerns about receiving incomplete applications from the field office; ALJs expressed concerns about the quality of the adjudicated record they receive and emphasized the extensive pre-hearing work required to thoroughly and adequately present the case for their consideration. In addition, the number of remands by the Appeals Council and the Federal Courts makes clear the need for fully documenting the administrative hearing record. Applying policy consistently was of great concern in three respects: 1) the DDS decision versus the ALJ decision; 2) variations among state DDSs; and 3) variations among individual ALJs. 
Concerns related to the effectiveness of the existing regional quality control reviews and ALJ peer review were also expressed. Staff from the Judicial Conference expressed strong concern that the process assure quality prior to the appeal of cases to the Federal Courts. ALJs and claimant advocacy and claimant representative organizations strongly recommended retaining the de novo hearing before an ALJ. Department of Justice litigators and the Judicial Conference stressed the importance of timely case retrieval, transcription, and transmission. Early screening and analysis of cases to make expedited decisions for clear cases of disability was emphasized time and again, as was the need to remove barriers to returning to work. My approach for disability process improvement is designed to address these concerns. It incorporates some of the significant features of the current disability process. For example, initial claims for disability will continue to be handled by SSA’s field offices. The State Disability Determination Services will continue to adjudicate claims for benefits, and Administrative Law Judges will continue to conduct hearings and issue decisions. My approach envisions some significant differences. I intend to propose a quick decision step at the very earliest stages of the claims process for people who are obviously disabled. Cases will be sorted based on disabling conditions for early identification and expedited action. Examples of such claimants would be those with ALS, aggressive cancers, and end-stage renal disease. Once a disability claim has been completed at an SSA field office, these Quick Decision claims would be adjudicated in Regional Expert Review Units across the country, without going to a State Disability Determination Service. This approach would have the two-fold benefit of allowing the claimant to receive a decision as soon as possible, and allowing the State DDSs to devote resources to more complex claims. 
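The Quick Decision step described above amounts to screening completed claims by disabling condition and routing the obvious allowances to a Regional Expert Review Unit while everything else goes to the state DDS. In spirit, the triage is as simple as the sketch below; the condition list and the function are hypothetical illustrations, not SSA's actual Quick Decision criteria, which would be defined by regulation.

```python
# Hypothetical set of conditions presumed obviously disabling
# (drawn from the examples named in the proposal).
QUICK_DECISION_CONDITIONS = {
    "ALS",
    "aggressive cancer",
    "end-stage renal disease",
}

def route_claim(claim):
    """Route a completed claim either to a Regional Expert Review Unit
    (Quick Decision track) or to the state DDS for full adjudication."""
    if claim["condition"] in QUICK_DECISION_CONDITIONS:
        return "regional expert review unit"
    return "state DDS"

route_claim({"condition": "ALS"})          # -> "regional expert review unit"
route_claim({"condition": "back injury"})  # -> "state DDS"
```

The design point is that the screen runs before any DDS involvement, so the states' examiner time is reserved for the claims that genuinely need development.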
Centralized medical expertise within the Regional Expert Review Units would be available to disability decision makers at all levels, including the DDSs and the Office of Hearings and Appeals (OHA). These units would be organized around clinical specialties such as musculoskeletal, neurological, cardiac, and psychiatric. Most of these units would be established in SSA’s regional offices. The initial claims not adjudicated through the Quick Decision process would be decided by the DDSs. However, I would also propose some changes in the initial claims process that would require changes in the way DDSs are operating. An in-line quality review process managed by the DDSs and a centralized quality control unit would replace the current SSA quality control system. I believe a shift to in-line quality review would provide greater opportunities for identifying problem areas and implementing corrective actions and related training. The Disability Prototype would be terminated and the DDS Reconsideration step would be eliminated. Medical expertise would be provided to the DDSs by the Regional Expert Review units that I described earlier. State DDS examiners would be required to fully document and explain the basis for their determination. More complete documentation should result in more accurate initial decisions. The increased time required to accomplish this would be supported by redirecting DDS resources freed up by the Quick Decision cases being handled by the expert units, the elimination of the Reconsideration step, and the shift in medical expertise responsibilities to the regional units. A Reviewing Official (RO) position would be created to evaluate claims at the next stage of the process. If a claimant files a request for review of the DDS determination, the claim would be reviewed by an SSA Reviewing Official. The RO, who would be an attorney, would be authorized to issue an allowance decision or to concur in the DDS denial of the claim. 
If the claim is not allowed by the RO, the RO will prepare either a Recommended Disallowance or a Pre-Hearing Report. A Recommended Disallowance would be prepared if the RO believes that the evidence in the record shows that the claimant is ineligible for benefits. It would set forth in detail the reasons the claim should be denied. A Pre-Hearing Report would be prepared if the RO believes that the evidence in the record is insufficient to show that the claimant is eligible for benefits but also fails to show that the claimant is ineligible for benefits. The report would outline the evidence needed to fully support the claim. Disparity in decisions at the DDS level has been a long-standing issue and the SSA Reviewing Official and creation of Regional Expert Medical Units would promote consistency of decisions at an earlier stage in the process. If requested by a claimant whose claim has been denied by an RO, an ALJ would conduct a de novo administrative hearing. The record would be closed following the ALJ hearing. If, following the conclusion of the hearing, the ALJ determines that a claim accompanied by a Recommended Disallowance should be allowed, the ALJ would describe in detail in the written opinion the basis for rejecting the RO’s Recommended Disallowance. If, following the conclusion of the hearing, the ALJ determines that a claim accompanied by a Pre-Hearing Report should be allowed, the ALJ would describe the evidence gathered during the hearing that responds to the description of the evidence needed to successfully support the claim contained in the Pre-Hearing Report. Because of the consistent finding that the Appeals Council review adds processing time and generally supports the ALJ decision, the Appeals Council stage of the current process would be eliminated. Quality control for disability claims would be centralized with end-of-line reviews and ALJ oversight. 
If an ALJ decision is not reviewed by the centralized quality control staff, the decision of the ALJ would become a final agency action. If the centralized quality control review disagrees with an allowance or disallowance determination made by an ALJ, the claim would be referred to an Oversight Panel for determination of the claim. The Oversight Panel would consist of two Administrative Law Judges and one Administrative Appeals Judge. If the Oversight Panel affirms the ALJ’s decision, it becomes the final agency action. If the Panel reverses the ALJ’s decision, the Oversight Panel’s decision becomes the final agency action. As is currently the case, claimants would be able to appeal any final agency action to a Federal Court. At the same time these changes are being implemented to improve the process, we plan to conduct several demonstration projects aimed at helping people with disabilities return to work. These projects would support the President’s New Freedom Initiative and provide work incentives and opportunities earlier in the process. Early Intervention demonstration projects will provide medical and cash benefits and employment supports to Disability Insurance (DI) applicants who have impairments reasonably presumed to be disabling and elect to pursue work rather than proceeding through the disability determination process. Temporary Allowance demonstration projects will provide immediate cash and medical benefits for a specified period (12-24 months) to applicants who are highly likely to benefit from aggressive medical care. Interim Medical Benefits demonstration projects will provide health insurance coverage to certain applicants throughout the disability determination process. Eligible applicants will be those without such insurance whose medical condition is likely to improve with medical treatment or where consistent, treating source evidence will be necessary to enable SSA to make a benefit eligibility determination. 
Ongoing Employment Supports to assist beneficiaries to obtain and sustain employment will be tested, including a Benefit Offset demonstration to test the effects of allowing DI beneficiaries to work without total loss of benefits by reducing their monthly benefit $1 for every $2 of earnings above a specified level, and an Ongoing Medical Benefits demonstration to test the effects of providing ongoing health insurance coverage to beneficiaries who wish to work but have no other affordable access to health insurance. I believe these changes and demonstrations will address the major concerns I highlighted earlier. I also believe they offer a number of important improvements: People who are obviously disabled will receive quick decisions. Adjudicative accountability will be reinforced at every step in the process. Processing time will be reduced by at least 25%. Decisional consistency and accuracy will be increased. Barriers for those who can and want to work would be removed. Describing my approach for improving the process is the first step of what I believe must be—and will work to make—a collaborative process. I will work within the Administration, with Congress, the State Disability Determination Services, and interested organizations and advocacy groups before putting pen to paper to write regulations. As I said earlier, and I say again: to be successful, perspectives from all parts of the system must be considered. Later today, I will conduct a briefing for Congressional staff of the Ways and Means and Senate Finance Committees. I will also brief SSA and DDS management. In addition, next week I will provide a videotape of the management briefing describing my approach for improvement to all SSA regional, field, and hearing offices, State Disability Determination Services, and headquarters and regional office employees involved in the disability program. 
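The Benefit Offset demonstration described above would reduce the monthly benefit $1 for every $2 of earnings above a specified level, rather than terminating benefits outright. The arithmetic can be illustrated as follows; the dollar figures are hypothetical, not SSA's actual benefit amounts or earnings thresholds.

```python
def offset_benefit(monthly_benefit, earnings, threshold):
    """Apply a $1-for-$2 benefit offset: reduce the monthly benefit by
    one dollar for every two dollars of earnings above the threshold,
    never below zero."""
    excess = max(0.0, earnings - threshold)
    return max(0.0, monthly_benefit - excess / 2)

# Hypothetical figures: $1,000 monthly benefit, $800 earnings threshold.
offset_benefit(1000, 500, 800)   # earnings below threshold: full benefit kept
offset_benefit(1000, 1200, 800)  # $400 excess -> $200 reduction -> $800 paid
```

Because the benefit phases down gradually instead of stopping at a cliff, a beneficiary always keeps at least half of each additional dollar earned above the threshold, which is the work incentive the demonstration is meant to test.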
Tomorrow, I will be conducting briefings for representatives of SSA employee unions and interested organizations and advocacy groups, and I will schedule meetings to provide an opportunity for those representatives to express their views and provide assistance in working through details, as the final package of process improvements is fully developed. I believe that if we work together, we will create a disability system that responds to the challenge inherent in the President’s questions. We will look beyond the status quo to the possibility of what can be. We will achieve our ultimate goal of providing accurate, timely service for the American people. 1. We maintain that our report fully and fairly describes SSA’s progress in analyzing and addressing the underlying causes of inconsistent decisions between state DDS examiners and ALJs. Our research included an extensive review of agency documentation and interviews with SSA officials, as well as stakeholder groups for adjudicators and claimant representatives, to develop a complete understanding of the agency’s efforts to assess and improve the consistency of decisions between adjudication levels. Also, in agreement with our requestor, we sought to expand the review to include SSA’s new approach to improving its disability programs, so that we could provide the Congress with an understanding of how SSA’s future plans may help to address this issue. 2. We provided information on the various reviews and analyses of disability decisions to assess the consistency of decisions between adjudication levels conducted by SSA over the last 10 years, but none of these reviews have clearly identified the causes of inconsistency in decisions between adjudication levels. 3. Our report has not overlooked the data cited by SSA. Nevertheless, our conclusion about the improvement in consistency between levels indicated by the data is not as optimistic as SSA’s because of weaknesses in SSA’s assessments. 
As we reported, for 10 years SSA’s analysis of the quality reviewers’ assessment of ALJ cases has been limited to calculating ALJ support rates. SSA has not used available data to determine the potential areas of inconsistency between levels or the extent to which changes in the ALJ support rate are related to improvements in consistency of decisions between adjudication levels. SSA’s assessment only provides a general indication of overall changes in consistency at one adjudication level. 4. Our report recognizes that SSA’s disability decision-making process is complex. Because of this complexity, we believe that multivariate analysis is an appropriate assessment tool that would allow SSA to assess the effect of multiple factors. In recommending this sophisticated tool, we were careful not to imply that causes and effects of inconsistent decision making can be established with certainty. However, we believe that such an analysis will help SSA understand the relative importance of the variety of factors that affect its decision-making process. After identifying areas of inconsistency, SSA can target these areas with in-depth case analyses to pinpoint the causes of inconsistency and develop a more effective strategy for addressing inconsistency. On the basis of our review of SSA’s analyses to date, we do not agree with the implications of SSA’s comments that it has a solid understanding of how certain variables influence disability decision making and therefore does not need to conduct additional, more sophisticated analyses. 5. We agree with SSA that the proportion of allowances made at each level can provide some insight into the allowance rate dynamic. However, as we reported, we do not believe that it can serve as a reliable indicator of the agency’s progress in achieving more consistent decisions between the DDS and OHA levels. 
The allowance data provided by SSA simply show that the relative proportion of allowances made at the DDS level increased in comparison with the OHA level, but SSA has not performed any additional analysis to show that these changes have any relationship to improved consistency in decision making between the two adjudication levels. Additional analysis is needed because a myriad of factors, such as changes in the economy, can affect allowance rates. Although SSA claims that over this period of time the economy has been “relatively stable,” without performing any additional analysis it cannot eliminate changes in the economy or in the demographics of claimants as an influence on the allowance rates at each level. In addition, SSA has not analyzed how other factors, such as changes in productivity and in the total number of decisions made at each level, may be influencing the allowance data. 6. The allowance rate data provided by SSA in its comments are very similar to the data provided to us earlier by SSA and included in figure 2 of our report. The figures we reported for the proportion of allowances made at the DDS and OHA levels for fiscal years 1997 and 1998 differ by one percentage point from those provided by SSA. We have not changed the figures in our report because we believe that these slight differences simply reflect that we reported data based upon fiscal, not calendar, years. 7. In our report, our statements that SSA has not made changes as a result of findings from its reviews were specifically related to SSA’s ALJ pre-effectuation review. We included information on this review because it was part of SSA’s process unification initiative and was intended to identify policy and training areas associated with inconsistent decisions between adjudication levels. During our review, we were told by an SSA official that the ALJ pre-effectuation review was not successful at identifying new areas of inconsistency to be addressed by SSA. 
In its comments, SSA cites a review unrelated to assessing the inconsistency of decisions between levels, the ALJ peer review, to assert that it has used reviews to identify training issues to improve the quality of decisions. The lack of success with the ALJ pre-effectuation review—along with other findings showing a limited understanding of the causes of inconsistency—supports our recommendations to SSA to perform additional analysis and to clarify guidance and provide mandatory training to address any identified causes of inconsistency between adjudication levels. 8. We applaud SSA’s plans to use the electronic disability system to capture critical management information to address decisional variance or inconsistency, which could provide a wealth of useful information for the agency. We have adjusted our report’s text to reflect this additional purpose. We continue to believe that SSA should not wait for the development of this system, but should proceed to perform multivariate analysis, using available data from its biennial case reviews, to start identifying areas of potential inconsistency between adjudication levels. 9. We applaud SSA’s deep commitment to improving the disability decision-making process, but believe that additional efforts to understand the causes of potential inconsistencies in decision making would help to inform the design of the Commissioner’s new approach and should, therefore, be undertaken immediately. 10. We generally agree with the technical comments provided and changed the text accordingly. In addition to the individuals mentioned above, the following staff members made major contributions to this report: Michael Morris, Corinna Nicolaou, Walter Vance, and Rebecca Woiwode. Douglas Sloane provided assistance with methodological issues, and Daniel Schwimer provided legal support.
Each year, about 2.5 million people file claims with the Social Security Administration (SSA) for disability benefits. If the claim is denied at the initial level, the claimant may appeal to the hearings level. The hearings level has allowed more than half of all appealed claims, an allowance rate that has raised concerns about the consistency of decisions made at the two levels. To help ensure consistency, SSA began a "process unification" initiative in 1994 and recently announced a new proposal to strengthen its disability programs. This report examines (1) the status of SSA's process unification initiative, (2) SSA's assessments of possible inconsistencies in decisions between adjudication levels, and (3) whether SSA's new proposal incorporates changes to improve consistency in decisions between adjudication levels. SSA has only partially implemented its process unification initiative. Although the agency initially made improvements in its policies and training intended to address inconsistency in decisions made at the two adjudication levels, it has not continued to actively pursue these efforts. Further, as part of this initiative, the agency implemented a review of hearings level decisions to identify ways to improve training and policies, but no new improvements were made as a result of the review. Finally, the agency began tests of two process changes intended to improve the consistency of decision making between the two adjudication levels. One test, which is ongoing, was not well designed and therefore will not provide conclusive results. The other test was abandoned because of implementation difficulties. SSA's assessments have not provided a clear understanding of the extent and causes of possible inconsistencies in decisions between adjudication levels. 
The two measures SSA uses to monitor inconsistency of decisions have weaknesses, such as not accounting for the many factors that can affect decision outcomes, and therefore do not provide a true picture of the changes in consistency. Furthermore, SSA has not sufficiently assessed the causes of possible inconsistency. For example, SSA conducted an analysis in 1994 that identified potential areas of inconsistency, but it did not employ more sophisticated techniques--such as multivariate analyses, followed by in-depth case studies--that would allow the agency to identify and address the key areas and leading causes of possible inconsistency. SSA has yet to repeat or expand upon this 10-year-old study. SSA's new proposal incorporates changes intended to improve consistency in decisions between levels. However, challenges may hinder its implementation. Most stakeholder groups for adjudicators and claimant representatives told us that a number of aspects of the proposal hold promise for improving consistency. These included one change, being tested as part of the process unification initiative, that requires state adjudicators to more fully develop and document their decisions, as well as several new changes, such as providing both adjudication levels with equal access to medical expertise. However, stakeholder groups also told us that insufficient resources and other obstacles might hinder the implementation of some changes. Adding to uncertainties about the proposal's overall success is its dependence on a new electronic folder system that would allow cases to be easily accessed by various adjudicators across the country. However, this technically complex project has not been fully tested.
TAPP was established originally by section 232 of the Small Business Administration Reauthorization and Amendments Act of 1990 (P.L. 101-574). In October 1991, Congress repealed the earlier authorization in section 609 of Public Law 102-140 and replaced it with the current program. Intended from the start to be a pilot program, the law authorized funding for 4 years, not to exceed $5 million a year. In mid-1994, the Congress decided that it would not reauthorize TAPP beyond fiscal year 1995. TAPP was modeled after Minnesota Project Outreach, a state program that provided small businesses with access to computerized databases and technical experts. Services for Project Outreach were provided under contract by Teltech Resource Network Corporation (Teltech), a Minnesota-based, national supplier of technical and business knowledge. The Minnesota program was regarded as a success in providing user-friendly services to small businesses that would not otherwise have the means or the ability to obtain needed technical information. Its success provided the stimulus for the TAPP legislation. The law made three agencies responsible for administering TAPP. The Small Business Administration (SBA) was authorized to make grants to competing Small Business Development Centers (SBDC), which had to obtain matching contributions at least equal to the awards. SBA was to coordinate with the National Institute of Standards and Technology (NIST) and the National Technical Information Service in establishing and managing the program. According to NIST officials, only SBA and NIST took an active role in program administration because the National Technical Information Service is an agency whose primary role is to collect and disseminate scientific, technical, engineering, and business-related information generated by other federal agencies and foreign sources. 
In early 1991, NIST and SBA signed a memorandum of understanding that resulted in NIST’s implementing TAPP on behalf of and in close cooperation with SBA. SBA administers TAPP through its Office of Small Business Development Centers, which is responsible for setting policies, developing new approaches, monitoring compliance, and improving operations for the SBDCs. NIST manages and monitors TAPP through its Manufacturing Extension Partnership (MEP), a network of organizations that helps American manufacturers increase their competitiveness nationally and internationally through the ongoing deployment of technology. The SBDCs, which provide counseling and training to existing and prospective small businesses, were chosen as the local level through which TAPP services would be provided. As of July 1994, there were SBDCs and subcenters at 750 geographically dispersed locations nationwide, as well as in Puerto Rico and the Virgin Islands. Counselors at the SBDCs are knowledgeable about the needs of small businesses and are experienced in working with them. The first TAPP grants were made for fiscal year 1992 and went to SBDCs in Maryland, Missouri, Oregon, Pennsylvania, Texas, and Wisconsin. Oregon dropped out of TAPP after fiscal year 1993 when it was not able to obtain matching funds; however, it has continued to operate on a reduced scale without federal funding. The remaining five centers continued to receive TAPP funds through fiscal year 1995. As shown in appendix II, federal grants to the six TAPP centers for the 4 years of the program totaled $3,537,000. While the centers have differed somewhat in the way they chose to deliver services, the basic model for each center is the same. First, the center offers its clients access to a variety of on-line databases. These databases cover technical areas such as product development, patents, and manufacturing processes as well as nontechnical areas, such as market research and vendor listings. 
Second, the center links clients with experts who can provide specific assistance. Typically, services are provided free or at a nominal charge and may be augmented by other SBDC programs and services. Appendixes IV through IX describe each of the current and former TAPP centers. In our first report on TAPP, we raised concerns about the evaluation methodology for measuring the program’s impact. Although NIST subsequently identified a strategy to address these concerns, this issue is now moot because the program will not be funded past fiscal year 1995. (See app. III.) In our first report, we noted that TAPP had started slowly and that some of the centers, while making progress, were not operating in accordance with the statements of work in their proposals. This is no longer the case. In the program’s fourth and final year, each of the five centers still in the program is fully operational. While the centers differ in some important respects, in many ways they have become more nearly alike in the types of services offered and the methods of delivering them. SBA and NIST have not evaluated the impact on small business productivity and innovation either nationwide or within the individual states where TAPP centers were located. According to the limited responses to client satisfaction surveys, however, the businesses that used TAPP services were pleased with the services they received. Also, TAPP center officials were pleased with the way their individual programs had developed and provided examples of projects that had been successful. At the time of our review, each of the TAPP centers planned to continue its program beyond fiscal year 1995. However, most officials within the centers were uncertain about how they would be organized, what services they would provide, or where they would obtain funding. Currently, the TAPP centers primarily serve clients with a need for new technology, many of whom are just getting started in business. 
Overall, the five TAPP centers still in the pilot program served approximately 1,840 clients in fiscal year 1994, ranging from 230 in Missouri to 445 in Wisconsin. According to Nexus Associates, a NIST consultant, 59 percent were manufacturers, 21 percent were service companies, 14 percent were wholesale and retail companies, and 7 percent represented other segments of the small business community. Forty percent of the clients had not yet established a business, and another 26 percent were involved in new ventures. While there were some “repeat” clients, 89 percent undertook only one project during the period. The five centers responded to 2,843 information requests during fiscal year 1994, ranging from 283 in Missouri to 847 in Pennsylvania. According to Nexus Associates and as shown in table 1, these projects were evenly divided between technical and nontechnical information, although there were differences among the centers. A more detailed breakdown of the services showed an emphasis on product or process information and market research. Database searches, rather than the use of technical experts, represent the primary type of service provided by the TAPP centers. As shown in table 2, for example, 65 percent of the projects in fiscal year 1994 were for literature searches. Only 9 percent of the projects were for expert and/or technical counseling. The impact TAPP has had on business productivity and innovation cannot be measured because there are no substantive data. Moreover, because NIST cancelled its plans for evaluating the program’s impact after funding was discontinued, no such determination will likely be made. NIST continues to collect data on client satisfaction; however, the surveys are of limited value because of the low response rate. For example, in fiscal year 1994 the response rate of the clients surveyed ranged from a low of 9 percent in Pennsylvania to a high of 46 percent in Wisconsin. 
According to an analysis by Nexus Associates, those clients that did respond to the satisfaction survey for fiscal year 1994 indicated a high degree of satisfaction with TAPP services. The vast majority of those responding ranked the services they received as “good” to “excellent” and would recommend TAPP to other companies. Similarly, more than 90 percent of the respondents said that their requests for assistance received prompt attention. More than 80 percent said that the representatives who assisted them possessed the necessary skills. The overwhelming majority of the clients rated as “good” to “excellent” the helpfulness of the representatives and the relevance, currency, and conciseness of the information received. The estimated value of the services provided varied widely among the centers and their clients. The median value, according to the clients’ estimates in their survey responses, ranged from $101 to $150 among the centers; however, 19 percent of the clients responding to the survey placed a value of more than $500 on the services they received. Those clients valuing the services at more than $500 tended to (1) be new businesses, (2) focus on expert searches rather than vendor searches, and (3) request market research information rather than management or vendor information. Two-thirds of the clients responding said that they were unlikely to have been able to obtain the information they received without TAPP. However, the level of satisfaction depended on the type of information requested. For example, while the majority of companies receiving patent information believed they could have received the information elsewhere, the majority of companies receiving management or vendor information believed it was unlikely they could have found this information elsewhere. 
Officials at the five centers still participating in TAPP told us they were satisfied with the programs they had developed and believed that they were providing valuable services to their client businesses. While they could provide no statistics on the overall impact, they did provide examples of projects perceived as successful, such as the following: An environmental services company in Missouri feared it was infringing on an existing U.S. patent for monitoring gasoline contamination of groundwater around service stations and storage tanks. As part of an overall action plan, the TAPP center conducted a search of the technology that predated the patent. The company resolved the issue and was able to continue to market its services to test for leaks from storage tanks. TAPP center personnel also referred the company to other SBDC personnel who were able to assist it in preparing three Small Business Innovation Research project proposals to SBA. A Wisconsin manufacturer risked losing a major customer because the liquid crystal displays it was making were breaking too easily. Through a literature search by the TAPP center, the manufacturer identified a number of new databases and obtained information that it subsequently incorporated into its product improvement process. The company believes that the information helped it save an account worth approximately $2 million over a 2-year period. A Maryland software company specializing in adaptive network systems wanted to expand into markets beyond the airline industry it originally had targeted. The TAPP center performed a literature search for firms that were purchasing or producing financial yield predictive software. The company was then able to identify and begin to market its products to two financial services companies that had advertised in trade journals their need to obtain revenue management tools. 
According to TAPP center officials, there was a learning curve associated with developing their individual programs. They provided the following examples of some of the factors with which they had to deal: Technology must be “pulled by” rather than “pushed upon” the clients. Unlike large corporations, small business owners typically have limited budgets, time, and expertise. Technology is of little benefit to them in the abstract and must have practical applications that can be adapted to the marketplace. Thus, technology is best integrated when a center can provide assistance throughout the various stages of a product’s development or delivery. Promotion is essential because small business owners may not know that they need or can use the technology available. The centers must promote their services through such methods as advertisements in trade publications and seminars. A center’s services must be integrated into those of the SBDC. One of the challenges facing the TAPP centers has been internal promotion (i.e., getting other SBDC staff—whose focus has been toward business planning—to see the advantages of TAPP’s technical assistance services so that they can encourage small business owners to use them). Because officials at each of the five TAPP centers still in the program believed their services were a valuable addition to the types of assistance the SBDCs provide, they said they planned to continue them after federal funding ends in fiscal year 1995. Because they did not know whether or how they would replace the federal funds, however, they were not certain how their programs would be organized or whether they would be able to provide the same level of services. While federal funding for TAPP will be discontinued after fiscal year 1995, the interest in programs providing technical assistance to small businesses continues. Thus, it is possible that the Congress may reconsider the need for similar types of federal programs in the future. 
If so, the lessons learned under the pilot program could be useful. From analyzing 4 years of TAPP funding and operations, we believe the following questions need to be considered prior to funding any future program: What are the program’s specific objectives? Is a separate and distinct federal program necessary to achieve these objectives? How should the program be financed? While the authorizing legislation stated an ultimate goal for TAPP—increasing the innovativeness and competitiveness of small businesses through improved technology—it did not specify what level of increase was desired or how results could be measured. The law did say that the purpose of the program was “increasing access by small businesses to on-line databases that provide technical and business information, and access to technical experts, in a wide range of technologies...” However, it did not define these terms nor did it specify which, if any, segments of the small business community were to be targeted. From the beginning, NIST and the SBDCs differed on the objectives and scope of TAPP. As noted in our earlier report, NIST was concerned that the services provided had too much of a marketing, rather than a technical, orientation and that many TAPP clients were small, local, retail businesses rather than technical or manufacturing concerns. NIST officials had hoped that, while there was no such requirement in the law, eventually 50 percent of the information provided by TAPP centers would be technical in nature. Taking a broader view of technology in the context of TAPP, SBDC officials said that an underlying objective always must be the continued viability of the firms seeking assistance. These officials maintain that it is important not just to disseminate pure technology but also to encourage all businesses to take advantage of whatever technical information is available. This may mean using TAPP databases to obtain marketing information heretofore unavailable to them. 
The issue seems to have resolved itself within the current program. Projects during fiscal year 1994 were evenly divided between technical and nontechnical information, according to Nexus Associates. NIST officials said they were pleased with the progress the centers had made toward giving TAPP a more technical focus. TAPP was not a new idea; technology assistance programs for small businesses have been available for some time. For example, both the Missouri and Pennsylvania SBDCs already had limited programs that were similar to TAPP in place when they received TAPP grants. Other states, such as New Mexico and North Carolina, have developed “technical” SBDCs on their own to promote and enhance technology transfer. Minnesota’s Project Outreach, which was the model for TAPP, has never received federal funding. Teltech is a private company that has provided technical services under contract to other organizations—including Project Outreach and TAPP centers—on a fee-for-service basis. Generally, the SBDCs appear to agree that they should offer technical assistance to their clients and have begun to establish programs. In a 1991 survey of 56 state SBDC directors conducted by the Association of Small Business Development Centers, 42 directors (75 percent) said they were providing “client-assisted access to databases.” About 60 percent of the SBDCs were providing this service themselves, while the rest were referring their clients to some other organization on an informal or contractual basis. Eighty-eight percent of the SBDC directors responding to the survey said they were assisting clients in identifying experts who could respond to technical questions. However, only 23 percent of the SBDCs were providing this service on their own; the remainder referred clients to other organizations on an informal or contractual basis. The survey respondents also noted that they had made a long-term commitment to technical assistance programs. 
Thirty-three states or areas planned to expand their technology transfer and/or development services, including enhanced access to technical databases. Thirty-six states made capital available for research and development, new product development, and access to technology. Technology assistance is also being provided to small businesses under federally sponsored programs other than those administered by the SBDCs. One example is the Manufacturing Technology Centers (MTC) NIST helped establish as a part of its MEP network. MTCs are regionally located and managed centers for transferring manufacturing technology to small and midsized manufacturing companies. MTCs use a wide variety of technology sources, including commercial firms, federal research and development laboratories, universities, and other research-oriented organizations. MTCs differ from the current TAPP centers in that they are regional in nature, focus solely on pure technology, serve only manufacturers, and work with the same clients on an ongoing basis. However, an MTC can provide the same services to a manufacturing client that a TAPP center can provide. In fact, Minnesota’s Project Outreach, which was the model for TAPP, is now a part of an MTC in the state. Federal appropriations for the TAPP program over its 4 years totaled $3.5 million—far less than the $20 million authorized. As shown in appendix II, none of the centers received more than $200,000 in any one year. Actual budgets were larger, of course, because the law required matching funds. SBDC officials agreed with our observation that the TAPP funding allowed them to create and operate dedicated technology-assistance programs that might not have been possible otherwise. One advantage was that the funding covered the start-up costs of the centers. During the first 2 years of the program, there was a considerable learning curve as the centers established their programs, developed a service mix, and promoted themselves to potential users. 
Another advantage was that the funding allowed the centers to provide services at little or no cost to prospective clients. The SBDC officials believed that this gave the centers the capability to offer a wider range of services and to serve more businesses. The TAPP law envisioned technology-assistance centers within the SBDCs that eventually would be at least partially self-sustaining. For example, the law gave as one of the selection criteria “the ability of the applicant to continue providing technology access after the termination of this pilot program.” The law also encouraged the TAPP centers to try to obtain funds from other federal and nonfederal sources. In practice, most of the support came from the TAPP funding itself, the SBDCs, the states, or the educational institutions with which the centers were affiliated. One option for funding a technology-assistance program is for the program to charge businesses a fee for the services they use. This is one reason the Oregon center has been able to operate after TAPP funding ended. During the program’s first 2 years, the Oregon center received a total of $325,000 in TAPP funds plus matching state funds. Since the end of fiscal year 1993, however, the center has relied on donations and client fees to operate. Currently, clients are charged $30 an hour plus on-line expenses. According to Oregon center officials, clients pay an average of approximately $114 per search. During the TAPP years, client fees averaged about $10 per search. In 1994, fees totaled about $7,500, or 19 percent of the Oregon center’s budget of $40,000. Its director believed that, in some ways, the center improved after it began to be self-supporting because clients took it more seriously and were more cautious about the services they requested when they had to pay for them. At the same time, the Oregon center has had to scale down its operations now that it no longer receives federal grants and matching state funds. 
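The Oregon figures above can be checked with simple arithmetic; note that the implied number of fee-paying searches is our own rough inference for illustration, not a figure reported by the center:

```python
# Figures reported for the Oregon center's 1994 operations; the
# implied search count is an illustrative inference, not reported data.
fees = 7_500        # total client fees collected in 1994
budget = 40_000     # total center budget in 1994
avg_fee = 114       # approximate average client payment per search

print(f"{fees / budget:.2%}")   # 18.75%, reported as "about 19 percent"
print(round(fees / avg_fee))    # roughly 66 fee-paying searches implied
```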
While Minnesota’s Project Outreach receives the bulk of its funding from state appropriations, it also charges a fee for services. For example, “client companies,” which can access services directly, must pay an annual fee based on sales as well as a fee for certain services. An expert consultation, literature search, or vendor search costs a client company $35 per use. There is no annual fee for “public access users,” who can obtain services through remote terminals across the state. However, there is a higher charge for services, such as $50 for a consultation, an interactive literature search, or a vendor search. In some cases, such as gaining access to certain information on the University of Minnesota’s databases, there is no charge to either type of user. The five TAPP centers still receiving federal grants in fiscal year 1995 had not generated any significant revenues by charging fees for services. Generally, the services were either offered to clients for free or for a fee well below what they would have cost if purchased from a private vendor. This was intentional because the centers used their free and low-cost services to attract clients who might benefit from their technical assistance. While some centers were considering fee-for-service arrangements as one possibility for funding services after the end of TAPP funding, they had not yet finalized any plans. In its fourth and final year of funding, TAPP is fully operational in the five states still participating in the program. Each of the five states as well as Oregon—which dropped out of the program after fiscal year 1993—plan to continue on some level. However, the states are not certain how the centers will be organized, what services will be provided, or where funding will be obtained. NIST officials are no longer concerned that the TAPP centers are focusing on marketing rather than technical services. 
Data from fiscal year 1994 indicate that about half the services being provided were of a technical nature, which is the ratio NIST envisioned at the program’s inception. Moreover, 59 percent of the users were manufacturing companies. Generally, both the users and the SBDCs were pleased with the services being provided and the results achieved. Because the Congress has decided not to extend TAPP funding past fiscal year 1995, we identified no issues that need to be addressed regarding the current program. If the Congress decides to fund a program similar to TAPP in the future, it may wish to consider some of the lessons learned, or issues that emerged during the pilot program. These include (1) adding more specificity to the objectives and goals of the program; (2) determining whether a separate and distinct federal program is needed and, if so, what type of organization is best suited to manage it; and (3) deciding how the program should be funded, including charging user fees for the services provided. A draft of this report was sent to both SBA and the Department of Commerce for comment. In its written comments, SBA generally concurred with the findings and conclusions in our draft report. (See app. X.) Commerce, whose comments are included in appendix XI, said that the report (1) contained information that incorrectly characterized TAPP, MEP, and the role of NIST in implementing TAPP and (2) did not provide an adequate context from which to determine the lessons learned from TAPP and how those lessons fit into an overall concept of technical assistance. Specific issues related to Commerce’s two concerns are discussed below. Commerce disagreed first with our characterization of the emphasis NIST placed on the technical orientation of the TAPP centers.
For example, Commerce disagreed with our use of the term “scientific information” in describing the types of services NIST wanted to emphasize under TAPP and asked that we use the broader description “technology and technical information.” Commerce also said that NIST officials had never set a 50-percent goal for such services but rather had sought a “balance” in technical and nontechnical services compared to marketing services. We agree with Commerce’s clarification that NIST wanted a technical, and not just scientific, orientation for TAPP and have revised our report accordingly. We disagree, however, that NIST did not set a 50-percent goal for such services, as NIST and TAPP center officials discussed this goal with us during our work on both the interim and current reports. Second, Commerce believed the report mischaracterized NIST’s evaluation efforts regarding TAPP. For example, Commerce disagreed that NIST had “cancelled” its evaluation plans, as we had noted in our report. Instead, Commerce asserted that NIST had revised its evaluation methodology. Commerce also said the report improperly characterized Nexus Associates as a NIST consultant on TAPP when Nexus actually was a subcontractor to the University of Houston’s SBDC. Commerce also believed that the report did not elaborate sufficiently on the problems associated with evaluating TAPP. Commerce pointed out that there are no models that could be used to establish a clear correlation between the information provided by a TAPP center and increased productivity, innovation, and other positive economic indicators. According to Commerce, the key determinant is not the information provided but what is done with that information. Developing proper models would require follow-up over a period of years with clients who are willing to share continuing and potentially sensitive feedback on how the information is being used and what changes it has generated in the clients’ operations.
Furthermore, Commerce said that we had previously agreed to fund and develop a survey that met our impact evaluation needs, as well as those of NIST and the TAPP centers. We disagree with Commerce’s assertion that NIST did not cancel its evaluation plans for TAPP. The discussion of this issue in our report focused on the evaluation of program impact. While NIST has continued to evaluate the program by collecting data from client surveys, we do not believe that these surveys address program impact. We have clarified this issue in our report. We also disagree that we mischaracterized the role of Nexus Associates. While Nexus was funded through the University of Houston’s SBDC, it performed analyses of programwide information, was referred to as a TAPP evaluation consultant by NIST officials, and presented its analyses to NIST. We agree with Commerce’s comments on the problems inherent in evaluating TAPP. We made this point in the interim report when we stated that “the data needed to evaluate the effectiveness of the program are not yet available and may not be available for some time.” We also stressed this point in November 1994 correspondence with the congressional committees when we agreed that the focus of this report should be on the lessons learned from TAPP. Contrary to Commerce’s comments, we did not agree to fund and develop the survey instrument. As a third concern, Commerce said that the report needed to provide a better context for how the lessons learned under TAPP fit into the overall concept of technical assistance. Commerce believed that the most important question that we raised in considering future needs is whether a separate and distinct federal program, such as TAPP, is necessary. Commerce said that the types of services provided by TAPP are not “stand-alone” services and that they must be considered within the broader context of services available under MEP. 
While we agree with Commerce on this point, such an analysis was beyond the scope of this report. Finally, Commerce questioned the report’s characterization of MEP. Commerce noted that MEP supports American manufacturers nationally and internationally through ongoing technological deployment, not through technological development as stated in the report. Similarly, Commerce believed the report did not go far enough when it said that an MTC can provide the same types of services to manufacturers that a TAPP center could provide to SBDC clients. Commerce said that MEP’s manufacturing extension center organizations, of which the MTC is one type, actually can provide more such services. We agree with Commerce’s comments on the role of MEP and revised the report to say that MEP supports manufacturers through technological deployment. Also, we do not question that MEP may be able to provide more services to its clients than a TAPP center. We made no revisions to the report, however, as our point was to show that there are other organizations providing the same types of services as TAPP, rather than to compare the quality or quantity of the services provided. We conducted our work between August 1994 and June 1995 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretary of Commerce; the Administrator of SBA; and the Director, Office of Management and Budget. Major contributors to this report are listed in appendix XII. Please contact me at (202) 512-3841 if you or your staff have any questions. Public Law 102-140, enacted October 28, 1991, required GAO to issue two reports on the Pilot Technology Access Program (TAPP). The first, or interim, report was to discuss the program’s implementation and progress. We issued our first report on March 7, 1994.
The second report was to determine the program’s effectiveness and impact on improving small business productivity and innovation. Prior to our beginning work on the second report, we learned that the Congress did not intend to fund TAPP beyond fiscal year 1995. Therefore, we met with the authorizing committees to determine what work was needed to meet the legislative mandate and to provide the Congress with information it might be able to use on similar programs in the future. We agreed to report on the experiences of and lessons learned by the TAPP centers during the pilot program. To carry out our objectives, we first met with the federal officials responsible for the management and the oversight of the program. These consisted of officials within (1) the Office of Small Business Development Centers (SBDC) in the Small Business Administration (SBA) and (2) the Manufacturing Extension Partnership (MEP) of the National Institute of Standards and Technology (NIST). We reviewed pertinent documents maintained by these agencies, including reports filed by the individual TAPP centers. We also reviewed materials prepared by a NIST contractor, Nexus Associates. We visited each of the five TAPP centers still in the program in fiscal years 1994 and 1995. These centers were located in SBDCs in Maryland, Missouri, Pennsylvania, Texas, and Wisconsin. We also visited the center in Oregon, which dropped out of TAPP after fiscal year 1993. At each location, we reviewed budgets, reports, and other materials and talked with key officials within the TAPP center and the SBDC. We also met with clients to obtain their perspectives on the TAPP services they had received. For comparison purposes, we visited Project Outreach in Minnesota, which was the model for TAPP; a technical SBDC in North Carolina; and a Manufacturing Technology Center in South Carolina. 
At each of these locations, we obtained an overview of the organization and services, met with key officials, and reviewed background documentation. We also talked with other persons who had background information on the technology needs of small businesses. These included the Association of Small Business Development Centers and two national associations that deal with small business issues. We asked both SBA and the Department of Commerce to provide comments on a draft of this report. SBA’s written comments are included in appendix X, and Commerce’s written comments are included in appendix XI. We incorporated their comments where appropriate. Also, we discussed the information included in the appendixes about each TAPP center with appropriate center officials. The law authorizing TAPP required GAO to issue two reports on the program. The first, or “interim,” report was to address the implementation and progress of the program. A “final” report was to evaluate the effectiveness of the program in improving small business productivity and innovation. On March 7, 1994, we issued our first report on TAPP entitled Federal Research: Interim Report on the Pilot Technology Access Program (GAO/RCED-94-75). In this report, we discussed the implementation of the six centers that had been established and concluded that it was too early to determine their impact on small businesses within their states. However, we did raise concerns about the evaluation methodology NIST had developed to measure such effects and the difficulties inherent in trying to link the information being provided with improving productivity. NIST had not attempted to develop an evaluation plan during the program’s first year, when the centers were in the process of getting established. In March 1993, during the second year, NIST asked the centers to conduct a postcard survey similar to one used by the Maryland center. 
This survey asked clients using TAPP services (1) if they had received the information they needed, (2) if they had used the information for making business decisions, (3) what type of information was most useful, (4) if they would use the program in the absence of a subsidy, and (5) what prices they would consider paying for TAPP services. However, this attempt at evaluation had little value because (1) only 60 clients were surveyed in Maryland and only 47 responded; (2) only three other centers conducted surveys; and (3) the other surveys did not ask the same questions, making comparisons among the centers impossible. As a part of the fiscal year 1994 proposal process, NIST encouraged the centers to develop a standard client evaluation methodology. This would include three survey questionnaires of clients. The first would be a questionnaire on client satisfaction that would be distributed to clients immediately after a service was provided. The second questionnaire would ask about the impact of the service 6 months later. The third would ask clients how the service had affected their competitive position in the marketplace a year after receiving the service. In our first report, we raised questions about the reliability of the data that would be obtained through the use of these questionnaires. We said that the questions were not clear or precise, did not make a connection between program impact and increased productivity, and failed to ask basic questions regarding client satisfaction with the program. We concluded that we had little confidence the questionnaires in their current form could be used to measure a center’s effectiveness, particularly considering the anticipated low response rate. In response to our first report, the Secretary of Commerce informed us in May 1994 that NIST planned to change its approach with the evaluation questionnaires.
The changes would consist of (1) improving the initial client-satisfaction questionnaire; (2) eliminating the other two questionnaires to reduce the burden on TAPP clients; (3) replacing the two questionnaires that were dropped with a new survey instrument that better suited the requirements of GAO, NIST, and the TAPP centers; and (4) developing an analytic report of the data already being generated by the program. TAPP funds would be used to hire a consultant to develop the analytic report. After learning that TAPP was not going to be funded past fiscal year 1995, NIST officials decided against pursuing most of the evaluation plans they had set out. Instead, the TAPP centers were instructed to use only the initial client-satisfaction questionnaire. Also, NIST provided the University of Houston with funding for a contract with Nexus Associates, Inc., to develop an analytic report using data the program generated. Nexus Associates already has prepared a presentation using statistics from reports the centers submitted and the results of the client evaluation survey for fiscal year 1994. In addition, NIST plans to have Nexus Associates critique the other two questionnaires originally intended to provide NIST with information it could use to plan evaluations of future programs. The Maryland Technology Expert Network (TEN) is a part of the Manufacturing and Technology SBDC located at and affiliated with the University of Maryland in College Park. TEN offers small business clients both on-line and off-line services in the form of literature searches, intellectual property searches, expert consultations, and document delivery. These services are used to complement other services offered these same clients by the SBDC. While TEN has been a TAPP participant from the beginning, it has evolved over the years into its current configuration. For the first 3 years, services were provided by Teltech Resource Network Corporation (Teltech) under an exclusive contract.
This contract was not continued in fiscal year 1995 because SBDC officials believed they could provide the necessary services in-house at a lower cost and because they were seeking ways to become self-sustaining after the end of TAPP funding. Instead, the SBDC has contracted with the University of Maryland’s College of Library and Information Services (CLIS), which provides essentially the same database services at a reduced cost. More than 90 databases in a variety of subjects are accessible through the university’s library system. The SBDC also has access to experts associated with the university as well as external contacts. TEN focuses on serving small manufacturing firms, technology companies, and technology-related service companies, such as systems integrators and environmental service companies. TEN informs potential clients of its services through (1) personal contact with SBDC clients; (2) newsletters of various trade organizations and state economic development agencies; (3) targeted mailings; and (4) training events, seminars, workshops, and conferences. TEN has two key personnel who are responsible for its operations. The SBDC State Director provides program oversight while other SBDC staff inform clients of TEN services through their own counseling activities. Clients can access the center through any one of 28 locations throughout the state. TEN personnel have developed the TEN Information System (TENIS), an automated management information system to gather and report evaluation data; process client-tracking statistics; and produce monthly reports on clients by access site, counselor, and date. TENIS is also used to control client invoice information to ensure timely collection of fees. TEN personnel are primarily intermediaries between the client and the database vendor. Upon receipt of a client’s request for a database search, the request is entered into TENIS and forwarded to the vendor.
The vendor conducts the search and sends the results to TEN, which delivers them to the client. Search results are typically given in conjunction with business consulting services. Maryland was not among the original states selected for TAPP in fiscal year 1992, the program’s first year. Upon review, NIST and SBA determined that Maryland would be a good site for the program because of a large concentration of high-tech companies and several government research and development locations in the state. Maryland was added to the program at a reduced level of federal funding—$50,400 compared to $200,000 for each of the other centers. TEN subsequently received $50,000 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. TEN has received matching funds from the state, resulting in total state and federal funding of $887,754 over the life of the program. To supplement the funds available for its services, TEN has implemented a client fee structure. Initial searches are free, but the next four searches each require a $25 fee for remote literature, patent, and vendor searches and a $50 fee for expert consultations and literature searches. Clients are charged the market rate for the sixth and subsequent searches. As shown in table IV.1, TEN served many segments of the small business community during fiscal year 1994. The 336 clients served represent an increase of 65 percent over fiscal year 1993. The greatest areas of concentration were in the service and manufacturing segments, which accounted for 82 percent of the clients served. As shown in table IV.2, TEN responded to a total of 627 requests for database information during fiscal year 1994, an increase of 84 percent over 1993. Forty-one percent of these requests were of a technical nature. TEN currently attempts to measure client satisfaction and program impact through a survey mailed to the client after a service has been provided.
This survey requests information on the quality of customer service, the quality of information received, the accessibility of information outside of TEN, the dollar value of information received, and the type of information most critical to the client. The response rate for the fiscal year 1994 survey was 39 percent. Client responses were generally positive. In summary, users found the information from TEN to be very helpful, relevant, and current. Thirty-one percent rated the value of the information at $500 or more, and 96 percent said they would recommend the services to others. TEN uses client interviews as another form of data collection. The interviews are conducted some months after a client’s use of TEN to determine the client’s assessment of the economic impact of TEN’s services. Although few interviews have been conducted to date, TEN plans to begin client interviews on a larger scale in the third quarter of 1995. SBDC officials were pleased with the performance of TEN and planned to continue the program after the termination of TAPP funding. By using services available through CLIS, TEN is transitioning to a state-sponsored program by providing services with in-state resources and some combination of state funding, user fees, and corporate sponsorships. The total amount budgeted for the fiscal year 1995 CLIS contract is $63,636. This figure includes $40,295 to cover such fixed costs as salaries, equipment, and on-line subscriptions; and $23,341 to cover such variable costs as supplies, telecommunications, expert consultations, and on-line searches. According to SBDC officials, the new arrangement will have limitations. First, CLIS does not have a well-established and extensive database of technical experts from which to pull resumes. Thus, while TEN can identify experts through CLIS, its database is not as extensive as Teltech’s. With time, TEN hopes to develop its own database of experts.
Second, interactive searches are not as accessible to staff in the field as they were with Teltech. Interactive searches are now conducted only through the Manufacturing and Technology SBDC in College Park and, to a lesser extent, in Baltimore. The Missouri Technology Access Program (MOTAP) is a part of the Missouri SBDC and is affiliated with the University of Missouri in Columbia, the University of Missouri in Rolla, and Central Missouri State University in Warrensburg. MOTAP offers small business clients both information services and technical assistance in the form of literature searches, patent searches, expert consultations, and document delivery. These services complement other services the SBDC offers these same clients. MOTAP is a coordinated effort among staff located at the three university campuses. The Missouri SBDC, located on the Columbia campus, houses the marketing component of MOTAP. The Technology Search Center in Rolla and the Center for Small Business Technology and Development in Warrensburg house the technical search capabilities. The Missouri SBDC State Director in Columbia provides management oversight for MOTAP. MOTAP targets the manufacturing community. MOTAP informs potential clients of its services through (1) training events, (2) seminars aimed at the manufacturing community, (3) relationships with network partners who inform their clients about MOTAP, and (4) newsletters and targeted mailings. MOTAP also markets the program internally to SBDC counselors to inform them of its services. The Missouri SBDC offered its clients on-line database searches and access to technical experts prior to federal TAPP funding. With TAPP funding, the SBDC hired two additional persons—one to conduct marketing database searches and one to provide technical assistance. TAPP funds increased the capabilities of existing SBDC functions and added the capability to provide marketing assistance.
Six people are involved in the MOTAP marketing information search function in Columbia. A marketing specialist devotes 75 percent of his time to MOTAP and is supported by two research associates who devote 33 and 25 percent of their time to the program, respectively. Three other persons handle programming and administrative functions. Nine people perform the technical support function in Rolla and Warrensburg. Included are a technical project manager and a technology transfer coordinator who devote 76 and 25 percent of their time to the program, respectively. The remainder of the staff includes university faculty, a consulting engineer, and administrative support personnel. Other SBDC staff also provide assistance by informing clients of MOTAP services through their own counseling activities. Clients may access MOTAP through any one of 12 regional SBDC locations, 17 university extension locations, or 2 special service centers. The methods by which MOTAP services are provided may vary depending on the circumstances. Information services range from single answers to specific questions to lengthy “information counseling” projects that provide clients with information on a broad topic or opportunity. Such projects can involve multiple database searches, extensive data processing, and compiling reports. Technical assistance also varies from one-time answers to in-depth analyses of processes or problems by technical experts, student teams, field engineers, and others. MOTAP staff at the three campus locations must coordinate their efforts to provide a complete package of marketing and technical services to their clients.
For example, if the staff in Rolla performed database searches for market and patent information, this could lead to follow-on services provided by the staff in Warrensburg, who provide assistance in developing prototypes, identifying manufacturing facilities, patenting advice, licensing contacts, and other technical services at no cost or on a cost-recovery basis. MOTAP has been a part of TAPP since it began in fiscal year 1992 and has received $700,400 over the life of the program. This includes $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. MOTAP has received matching funds from the state for each of these years, resulting in total state and federal funding of $1,419,130 over the life of the program. MOTAP also has collected a total of $24,242 in client fees. As shown in table V.1, MOTAP served many segments of the small business community during fiscal years 1993 and 1994. The 230 clients served represent a decrease of 9 percent from fiscal year 1993. The greatest area of concentration was in the manufacturing segment, which accounted for 64 percent of the clients served in fiscal year 1994. As shown in table V.2, MOTAP processed a total of 283 information requests during fiscal year 1994, a decrease of 34 percent from fiscal year 1993. Fifty-five percent of these requests were of a technical nature. Although unsure why the number of clients served and requests answered declined in 1994 from the previous year, the state marketing specialist speculated that the floods Missouri experienced during July of 1993 reduced requests. Following the floods, many small businesses in Missouri may have been more concerned with repairing flood damage and related business slowdowns than with identifying new business opportunities.
MOTAP uses several methods to measure the effectiveness of its services, including client surveys, seminar evaluations, and comments received from clients following visits to their business sites. MOTAP applies information received from these efforts to adapt its services, communications, and management practices. MOTAP sends each client a satisfaction survey the quarter following the client’s MOTAP project. The survey asks questions concerning the quality of MOTAP services, the perceived value of its information, and the likelihood of obtaining similar information outside of MOTAP. The response rate for fiscal year 1994 was 29 percent. Client responses were generally positive. In summary, users found the information MOTAP provided to be helpful, current, concise, relevant, and of overall good quality. More than half of the respondents rated the financial value of the information higher than $150. Forty-three percent of the respondents, however, felt their chances were at least “somewhat likely” that they could have obtained the information outside of MOTAP. MOTAP experienced difficulties in evaluating the impact of its services because many respondents answered survey questions in a form that could not be tabulated. One reason is that respondents often provided descriptions of the ways they used the TAPP information but could not express its impact on their businesses in percentage or monetary terms. Another reason is that the typical response rate on MOTAP questionnaires was approximately 25 percent. According to MOTAP officials, a rate this low does not allow a projection of the total program impact with any statistical confidence. A third reason is that respondents often confused information obtained through the MOTAP program with information obtained through other SBDC services—which is understandable because MOTAP services are primarily delivered through SBDC counselors. The Missouri SBDC is updating its survey techniques to minimize the problems with evaluating its services.
For example, the Missouri SBDC is developing an exit interview for clients so that the interviewer may ask follow-up questions that will help interpret the responses. Although planning to offer its clients MOTAP services after federal funding ends in 1995, the Missouri SBDC is not sure how the services will be funded or provided. According to SBDC officials, on-line database searching and expert services have been an integral part of the package of services offered by the SBDC. The SBDC will most likely downsize the center and retain only its most critical functions. The Pennsylvania Business Intelligence Access System (BIAS) is a part of the Pennsylvania SBDC network and is affiliated with the University of Pennsylvania in Philadelphia. BIAS offers small business clients both on-line and off-line services in the form of literature searches, patent searches, expert consultations, and market analyses. These services are used to complement other services the SBDC offers these same clients. According to the Pennsylvania SBDC State Director, the primary emphasis of the BIAS program is education, also one of the main goals of TAPP. He said many of the BIAS presentations to clients are not sales presentations, but workshops with clear educational goals. In addition to providing on-line services, SBDC consultants explain and often demonstrate technology to clients. BIAS is implemented by the Ben Franklin Technology Center (BFTC), a small business incubator facility. The Pennsylvania SBDC contracted with the Business Information Center (BIC) of the BFTC to manage the BIAS program. The Pennsylvania SBDC State Director provides management oversight for BIAS. BIC is responsible for managing the research process and training both the SBDC consultants and the public. BIC also administers the contract with the database vendors—Telebase and Knowledge Express. Other vendors BIC can access include Batorlink, Internet, and Community of Science.
These vendors provide access to more than 3,000 databases of business and technical information, including resumes of university experts from major research universities. BIAS is the only TAPP center that did not contract with Teltech for the first year of the program. Because BIAS has access to the Pennsylvania Technical Assistance Program (PENNTAP), a network of experts, it elected not to contract with Teltech. For the second year of the program, BIAS decided to experiment with Teltech to attract more of its clients to request expert searches. However, because demand for expert searches remained low, BIAS did not renew the Teltech contract for the third year. BIAS focuses on the manufacturing and technology sectors—particularly the advanced materials, biotechnology, and computer hardware and software development industries. BIAS also targets firms adversely affected by reductions in defense procurements, 70 percent of which are in manufacturing and technology-based industries. BIAS informs potential clients of its services through (1) personal contact with SBDC clients; (2) mailings and briefings to various trade organizations; (3) mailings to potential clients; (4) news media and on-line networks; and (5) seminars, workshops, and conferences attended by SBDC clients. Six months prior to federal TAPP funding, the BIC began providing on-line database searches to BFTC clients at a rate of $75 an hour plus expenses. TAPP funding enabled the SBDC to subscribe to services provided by the BIC and offer them to SBDC clients at a subsidized rate. BIAS charges its clients 70 percent of on-line expenses exceeding $75. Under the management of the SBDC assistant state director, two professional information specialists at BIC devote 50 percent of their time to the center. Other SBDC staff also provide assistance by informing other clients of BIAS services through their own consulting activities.
BIAS can be accessed through any one of the 16 university-affiliated SBDCs or 70 community outreach offices. In contrast to other TAPP centers, Pennsylvania SBDC consultants are the main providers of BIAS services. After receiving training from the BIC’s senior information specialist, these consultants perform most of the database searches for SBDC clients. BIC information specialists support the SBDC consultants and provide assistance for particularly difficult search projects. According to SBDC officials, this arrangement makes the service more accessible to clients, expands the SBDC’s searching capacity, and strengthens the consultants’ database searching skills. Clients needing expert consultations are referred to PENNTAP, an in-state network of technical consultants. When using PENNTAP, clients are referred to technical experts by the PENNTAP regional staff, who identify the appropriate network expert and facilitate the consultation. Other experts can be identified using electronic databases. BIAS has been in TAPP from the beginning and has received $700,400 in federal funding. This included $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. BIAS has received matching funds from the state for each of these years. As shown in table VI.1, BIAS served many segments of the small business community during fiscal years 1993 and 1994. The 427 clients served in fiscal year 1994 represent an increase of 45 percent over fiscal year 1993. The greatest areas of concentration were in the manufacturing and service segments, which accounted for 70 percent of the clients served. As shown in table VI.2, BIAS responded to a total of 847 information requests during fiscal year 1994, an increase of 112 percent over fiscal year 1993. Only 18 percent of these information requests were of a technical nature. BIAS uses a brief mail survey to measure client satisfaction. 
The survey asks how BIAS information was used in the business, the financial value of the information, the likelihood of obtaining similar information outside of BIAS, and which type of information was most useful. Although the response rate for the fiscal year 1994 evaluations was only 9 percent, the clients’ responses were generally positive. In summary, clients found the information from BIAS to be concise and current and would recommend that other businesses contact BIAS. Forty-five percent valued the information at more than $100. However, 49 percent indicated that their chances of obtaining similar information elsewhere were at least “somewhat likely.” Focus groups were also used to obtain input from clients and consultants concerning needs for on-line information. The information gained during the focus group sessions helps BIAS staff tailor the program to meet the needs of both clients and consultants. The SBDC plans to offer its clients BIAS services after federal TAPP funding ends in 1995. According to SBDC officials, BIAS services will be further incorporated into the SBDC’s basic operations while continuing to use BIC for many BIAS functions. SBDC officials believe that their arrangement with the BIC has been effective and will need only minor modifications in the future. Sources of funding being investigated include the state, other federal sources, and the private sector. The Texas Technology Access Program (TAP/Texas) is a part of the Texas Product Development Center (TPDC), a specialty center of the University of Houston SBDC. TAP/Texas offers small business clients both on-line and off-line services in the form of literature searches, patent searches, expert searches, and document delivery. TAP/Texas is managed by the Director of the TPDC with general oversight from the SBDC Director of the Houston Region. The TPDC and the SBDC are two of five functional areas under the University of Houston Institute for Enterprise Excellence. 
The other three functional areas are the Texas Manufacturing Assistance Center Gulf Coast, the Texas Information Procurement Service, and the International Trade Center. These five functions coordinate efforts to provide a full range of consulting services to small business clients. Clients of any of the five functional areas have access through TAP/Texas to more than 1,000 databases through vendors like Knowledge Express, Dialog, Teltech, and Lexis/Nexis. Special in-state database resources are also available. These include the Mid-Continent Technology Transfer Center at Texas A&M University, TEXAS-ONE/Texas Marketplace, and the Texas Innovation Network System. These sources offer access to databases of the National Aeronautics and Space Administration (NASA) and federal laboratories, electronic bulletin boards containing directories of Texas companies, and access to technical experts and research facilities in Texas. TAP/Texas targets small manufacturers and technology-oriented service companies throughout Texas. TAP/Texas informs potential clients of its services through (1) personal contact with clients; (2) direct mail to targeted industries and trade associations; (3) participation in trade shows and conferences, including demonstrations of on-line capabilities; and (4) classroom workshops. The TPDC Director and one consultant at the TPDC work full time in the program while four additional staff provide support on a part-time basis. SBDC staff also provide assistance by informing clients of TAP/Texas services through counseling. TAP/Texas can be accessed through any one of 56 SBDC locations across the state. The methods by which TAP/Texas services are provided may vary depending on the situation. For example, the information specialist may conduct database searches independently after receiving a search request or interactively with the client guiding the search. 
Depending on the information requirements and time frames, the SBDC consultant and client may access databases directly from a remote location without the assistance of the information specialist. TAP/Texas has been a part of TAPP since it began in fiscal year 1992 and has received federal funds totaling $720,500 over the life of the program. This includes TAPP funding of $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. TAP/Texas also has received additional funds from the state, resulting in a total state and federal funding of $1,618,813 over the life of the program. To supplement funds available for on-line searches, TAP/Texas implemented a client fee structure in fiscal year 1994. Initial searches are free, but additional searches require a client co-payment. Fees collected for 114 co-payment searches total $2,744. As shown in table VII.1, TAP/Texas served many segments of the small business community during fiscal years 1993 and 1994. The 402 clients served in fiscal year 1994 represent an increase of 76 percent over the previous fiscal year. The greatest areas of concentration were in the manufacturing and service segments, which accounted for 63 percent of the clients served in fiscal year 1994. As shown in table VII.2, TAP/Texas responded to a total of 445 information requests during fiscal year 1994, an increase of 83 percent over the previous fiscal year. Thirty-three percent of these information requests were of a technical nature. To measure client satisfaction, TAP/Texas uses a brief mail survey, which is distributed to clients immediately after the first data search is provided. The survey asks clients to evaluate the quality of customer service, the quality of data received from the searches, the accessibility of data outside of TAP/Texas, the value of the data received, and the type of data most critical for their needs. 
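The co-payment figures above imply an average fee per subsidized search. The per-search average below is our derivation, not a number stated in the report:

```python
# TAP/Texas co-payment figures, as reported for fiscal year 1994
copay_total = 2_744     # dollars collected in co-payment fees
copay_searches = 114    # number of co-payment searches

# Derived: average client co-payment per search, roughly $24
avg_copay = copay_total / copay_searches
```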
A follow-up letter is sent to nonrespondents after 30 days to increase the response rate. The response rate for the fiscal year 1994 client surveys was 32 percent. Client responses were generally positive. In summary, clients have found the information provided by TAP/Texas to be helpful, relevant, and of overall good quality. Fifty percent of the clients valued the information provided at more than $100. Forty-three percent of the respondents, however, felt it was at least “somewhat likely” that they could have obtained the information elsewhere. Focus groups are also used to obtain input from clients concerning on-line information needs. The information gained during the focus group sessions helps TAP/Texas staff tailor the services to meet the needs of both clients and consultants. The TPDC plans to offer its clients TAP/Texas services after federal funding ends in 1995, although officials are not sure how the program will be funded or what level of services will be available. On-line database searching is, and has been, an integral part of the package of services offered by the Institute for Enterprise Excellence. Depending on the future level of funding, however, the TPDC may have to reduce or even discontinue technology access services. The Wisconsin Technology Access Program (WisTAP) is a part of the Wisconsin SBDC and is affiliated with the University of Wisconsin. WisTAP helps small manufacturers and technology companies solve both technical and business management problems through technical counseling, on-line literature searches, and patent searches. These services are used to complement business management services offered these same clients by the SBDC. WisTAP is a decentralized program implemented through ten SBDCs located across the state. 
The central office in Whitewater coordinates the efforts of the other SBDCs while also providing counseling, assisting with the development of marketing plans, coordinating all remote literature searches, monitoring the activity level for each center, and offering support or shifting resources as needed. The WisTAP central office is staffed by a half-time Director and a half-time research specialist. The Wisconsin SBDC State Director provides management oversight for WisTAP. WisTAP targets small manufacturers and technology-based businesses. WisTAP has developed “marketing partners,” including various trade associations, state agencies, and regional and national technology transfer organizations, to leverage the marketing dollars available. Marketing partners provide mailing lists, underwrite mailings and promotional events, and assist with publications. WisTAP uses information provided by the marketing partners to assist the SBDC offices in targeted marketing efforts. For example, the Wisconsin Manufacturers and Commerce Association provided each SBDC with a database of its members. This database of over 8,500 manufacturers can be sorted by geographic area, type of company, and number of employees. The SBDC offices are able to use this information to reach small manufacturers in their area. The Wisconsin SBDC did not offer its clients technical counseling and on-line database searches prior to federal TAPP funding. WisTAP has added a new dimension to the SBDC by allowing it to broaden its focus to include technology access issues. Counselors at ten SBDC offices across the state and the Wisconsin Innovation Service Center are the primary deliverers of WisTAP services. Rather than locate database experts in a central location, WisTAP attempts to train all SBDC counselors at the various sites on database access. This organizational structure was developed in late 1993 to encourage “one stop” service delivery for WisTAP clients. 
Because WisTAP services are delivered through an SBDC counselor, clients may obtain the more traditional SBDC services (e.g., market analysis and management planning) in conjunction with technology access services. Teltech was the primary vendor for on-line services and access to technical experts during the first year of the program. Although WisTAP has been generally satisfied with the services offered by Teltech, the relative cost of its services has prompted WisTAP to identify alternative sources of information. Teltech is now a complement to WisTAP services rather than its primary provider. WisTAP has collaborative arrangements with a variety of sources of technical assistance and vendors. Examples include University-Industry Relations and Wisconsin Techsearch at the University of Wisconsin-Madison and the Office of Industrial Research and Technology Transfer at the University of Wisconsin-Milwaukee. These sources, among others, provide access to technical counseling by university faculty, database search and document delivery services, and other consulting services. Like the Wisconsin SBDC, WisTAP does not charge fees for its services. The Wisconsin SBDC does charge fees for training; however, none of these fees are credited to the WisTAP account. WisTAP has been a part of TAPP since it began in fiscal year 1992 and has received $700,400 in federal funds over the life of the program. This includes $200,000 in fiscal year 1992, $190,400 in fiscal year 1993, $170,000 in fiscal year 1994, and $140,000 in fiscal year 1995. WisTAP has received matching funds from the University of Wisconsin-Extension for each of these years, resulting in a total state and federal funding of $1,411,100 over the life of the program. As shown in table VIII.1, WisTAP served many segments of the small business community during fiscal years 1993 and 1994. The 445 clients served in fiscal year 1994 represent an increase of 16 percent from fiscal year 1993. 
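The WisTAP funding figures can be cross-checked with simple arithmetic. The implied matching contribution (total funding minus federal funding) is our derivation rather than a figure stated in the report:

```python
# WisTAP federal TAPP funding by fiscal year, as reported
federal_by_year = {1992: 200_000, 1993: 190_400, 1994: 170_000, 1995: 140_000}

total_federal = sum(federal_by_year.values())   # $700,400, matching the report
total_state_and_federal = 1_411_100             # as reported

# Derived: the matching funds implied by the two reported totals
implied_match = total_state_and_federal - total_federal
```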
The greatest area of concentration was in the manufacturing segment, which accounted for 71 percent of the clients served in fiscal year 1994. As shown in table VIII.2, WisTAP processed a total of 641 information requests during fiscal year 1994, a decrease of 39 percent from fiscal year 1993. Seventy-three percent of these requests were of a technical nature. WisTAP attributes the decline in information requests to two factors. First, WisTAP changed its reporting practices from 1993 to 1994. The 1993 figures represent projects, and a single project may require several database interactions, inflating the 1993 figures. Second, a database vendor offered unlimited and free usage for the first quarter of fiscal year 1993. According to WisTAP officials, WisTAP increased its use of the service for its clients during this period. WisTAP uses a client satisfaction survey to measure the effectiveness of its services. Each quarter, WisTAP mails the survey to clients that had received services during the previous quarter. The survey asks questions concerning the quality of the services, the perceived value of the information, and the likelihood of obtaining similar information elsewhere. The response rate for the fiscal year 1994 client satisfaction survey was 46 percent. Eighty-eight percent of the respondents rated the overall quality of the information provided as good to excellent. Sixty-four percent rated their ability to access the information without WisTAP from somewhat unlikely to extremely unlikely. Sixty-two percent rated the financial value of the information received at more than $100. The Wisconsin SBDC plans to offer its clients WisTAP services after federal funding ends in 1995; however, the level of service will probably be cut in half. 
To prepare for the end of federal funding for TAPP, the Wisconsin SBDC has been focusing on developing relationships with new and existing network partners. For example, WisTAP has developed relationships with the staff of several University of Wisconsin technical and engineering departments. SBDC officials hope that, as more network partners gain experience working with small businesses, technical information will be accessible independent of WisTAP. The Oregon SBDC participated in TAPP during fiscal years 1992 and 1993. Through a contract agreement with the Oregon Innovation Center (OIC), the SBDC offered small business clients both on-line and off-line services in the form of literature searches, patent searches, expert consultations, and document location. Because of the loss of matching state funds for fiscal year 1994, the Oregon SBDC dropped out of TAPP. The OIC, however, has continued to provide TAPP-like services in the absence of state and federal financial support. The current program is managed and operated by the OIC, which assists businesses in accessing technical information. The OIC continues to offer TAPP-like services to its own clients and clients referred to it by the SBDCs, government agencies, and industry associations. The OIC serves primarily small manufacturers and technology-oriented service companies. OIC services are not limited to Oregon businesses; however, the majority of OIC clients are located in Oregon. When part of TAPP, the OIC informed potential clients of its services through SBDC marketing efforts, including seminars, pamphlets, and media publications. Now that the OIC is no longer directly affiliated with the SBDC, all marketing efforts have been eliminated because of funding constraints. The OIC relies entirely on word-of-mouth to attract new clients. One information specialist at the OIC devotes three-fourths of their time to the program. 
Staff of the SBDCs, state economic development agencies, and industry associations also assist by informing clients of OIC services through their own counseling activities. Because the OIC no longer participates in TAPP, it receives fewer referrals from the SBDCs. However, the clients that contact the OIC are more likely to represent technology-oriented industries, according to OIC officials. The OIC provides a range of business services including the development of marketing plans and information research. OIC clients have access to hundreds of on-line and off-line databases, including Dialog, Data-Star, CompuServe, Orbit, NASA, and the Federal Register. At the beginning of the program, OIC also provided access to Teltech. However, because of high costs and low demand to access Teltech experts, the OIC did not renew Teltech’s contract in July 1993. The OIC serves its clients primarily through remote database searches. Upon receipt of a request, the information specialist conducts the search and sends the results to the client. OIC staff rarely meet face-to-face with the client. Nearly all services are provided via telephone, facsimile machine, or computer. According to an OIC information specialist, the OIC has also developed the ability to conduct real-time, screen-to-screen searching. Also, client access is offered through a menu-driven bulletin board system. The OIC received $325,000 in federal funding during the 2 years it was in the program. This included $200,000 in fiscal year 1992 and $125,000 in fiscal year 1993. The OIC also received state matching funds for each of these years, resulting in a total state and federal funding of $650,000 over the life of the program. The OIC has not received any state or federal funding since the end of fiscal year 1993. In the spring of 1996, the OIC will occupy a new facility to be constructed as a joint project with the Central Oregon Community College. 
This project will be funded by the OIC’s state economic development appropriation that was committed in 1992. The OIC currently relies on donations and client fees to operate. According to OIC officials, client fees averaged $114 per search during 1994. During the TAPP years, clients were charged only about $10 per search although the total cost of the searches averaged $161. As shown in table IX.1, the OIC’s client base was dominated by manufacturing and service concerns in fiscal years 1993 and 1994. In 1994, service and manufacturing businesses accounted for 73 percent of the clients served overall. Because of increased client fees and the elimination of marketing outreach efforts, the number of clients served declined sharply—from 191 to 33—between 1993 and 1994. As shown in table IX.2, the OIC responded to a total of 99 information requests during fiscal year 1994—the first year in which the OIC did not participate in TAPP. This figure represents a decrease of 79 percent from fiscal year 1993. Twenty-three percent of these projects were of a technical nature. During fiscal year 1993, the OIC conducted three focus group sessions in various locations to determine the informational needs of small businesses. Questions were asked to determine what types of information were the most difficult for small businesses to obtain, what sources small businesses typically use to obtain information, and what improvements they would suggest to provide them with business information. A recurring response from the participants was that marketing information was a primary concern and difficult to obtain. The OIC used the focus group results to gain a better understanding of the information needs of businesses. The OIC plans to continue providing TAPP-like services on a cost-recovery basis as it has been since the end of fiscal year 1993. The OIC hopes to supplement its budget through corporate donations. 
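The per-search figures above imply how heavily OIC searches were subsidized during the TAPP years. The subsidy share and fee multiple below are our back-of-the-envelope derivations, not figures stated in the report:

```python
# OIC per-search economics, as reported
tapp_fee = 10.0        # approximate client fee per search during the TAPP years
tapp_cost = 161.0      # average total cost per search during the TAPP years
post_tapp_fee = 114.0  # average client fee per search in 1994, after TAPP

# Derived: roughly 94 percent of each search's cost was subsidized
# during the TAPP years, and fees rose more than elevenfold afterward.
subsidy_share = (tapp_cost - tapp_fee) / tapp_cost
fee_multiple = post_tapp_fee / tapp_fee
```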
MCI Telecommunications Corporation, for example, recently donated $10,000 to the OIC. OIC officials said that a self-sufficient program has some advantages. One of these is that because a client makes a larger investment, it is more serious about its request for assistance. Also, the OIC has been able to provide services beyond the small business community, which has both expanded services and generated more funds. John P. Hunt, Jr., Assistant Director
Robin M. Nazzaro, Assistant Director
Frankie L. Fulton, Evaluator-in-Charge
Paul W. Rhodes, Senior Evaluator
Kenneth A. Davis, Evaluator
Richard P. Cheston, Adviser
Pursuant to a legislative requirement, GAO reviewed the Pilot Technology Access Program (TAPP), focusing on: (1) the program's effectiveness and impact on improving small business productivity and innovation; and (2) the experiences and lessons learned by the TAPP centers during the pilot program. GAO found that: (1) Congress has decided not to fund TAPP beyond fiscal year (FY) 1995; (2) one TAPP center has operated independently on a reduced scale since FY 1993 and the remaining five centers plan to continue operations beyond FY 1995, but they are not sure of their organization, services, and funding; (3) the five centers served 1,840 businesses in FY 1994, of which 59 percent were manufacturers and 66 percent were businesses just getting started; (4) TAPP services included technical and nontechnical information, and technical, patent, and marketing assistance; (5) although the program's impact could not be determined, TAPP clients were generally satisfied with the centers' operations and services; (6) center officials were generally pleased with their programs' development and believed that certain individual projects produced favorable results; and (7) lessons learned from TAPP that should be considered in designing future programs include adding more specificity to program goals and objectives, determining whether a separate and distinct federal program is necessary, determining the organizational type best suited to manage such a program, and deciding program funding options.
The Financial Report of the United States Government provides useful information on the government’s financial position at the end of the fiscal year and changes that have occurred over the course of the year. However, in evaluating the nation’s fiscal condition, it is critical to look beyond the short-term results and consider the overall long-term financial condition and long-term fiscal imbalance of the government—that is, the sustainability of the federal government’s programs, commitments, and responsibilities in relation to the resources expected to be available. More important than the large increase in the government’s net operating cost in fiscal year 2005 and persistent short-term budget deficits is the long-term outlook: fiscal simulations by GAO and others show that, over the long term, we face large and growing structural deficits due primarily to known demographic trends, rising health care costs, and lower federal revenues relative to the economy. As I have testified before, the current financial reporting model does not clearly, comprehensively, and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending. Thus, it provides a potentially unrealistic and misleading picture of the federal government’s overall performance, financial condition, and future fiscal outlook. The federal government’s gross debt in the U.S. government’s consolidated financial statements was about $8 trillion as of September 30, 2005. This number excludes such items as the current gap between the present value of future promised and funded Social Security and Medicare benefits, veterans’ health care, and a range of other liabilities (e.g., federal employee and veteran benefits payable), commitments, and contingencies that the federal government has pledged to support. 
Including these items, the federal government’s fiscal exposures now total more than $46 trillion, representing close to four times gross domestic product (GDP) in fiscal year 2005 and up from about $20 trillion or two times GDP in 2000. About one third of the approximately $26 trillion increase resulted from enactment of the Medicare prescription drug benefit in fiscal year 2004. (See table 1.) The federal government’s current fiscal exposures translate into a burden of about $156,000 per American or approximately $375,000 per full-time worker, up from $72,000 and $165,000 respectively, in 2000. Furthermore, these amounts do not include future costs resulting from Hurricane Katrina or the conflicts in Iraq and Afghanistan. In addition to the approximately $46 trillion of estimated fiscal exposures discussed above, there are exposures that are not included in those figures because the amounts of the exposures are not currently estimable. For example, the Department of Energy, in the footnotes to its fiscal year 2005 financial statements, disclosed that its environmental liability estimates do not include cleanup costs at sites for which there is no current feasible remediation approach, such as the nuclear explosion test area at the Nevada Test Site. It is important to understand the nature and extent of these types of additional exposures in the long-term fiscal planning for the federal government. Additionally, tax expenditure amounts are not required to be disclosed, nor are they disclosed, in agency or the U.S. government’s consolidated financial statements. Tax expenditures are reductions in tax revenues that result from preferential provisions, such as tax exclusions, credits, and deductions. These revenue losses reduce the resources available to fund other programs or they require higher tax rates to raise a given amount of revenue. 
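The per-capita burdens cited above follow from dividing total exposures by population. The implied population and worker counts below are our back-of-the-envelope check, not figures stated in the testimony:

```python
# Fiscal exposures and per-capita burdens, as reported for fiscal year 2005
total_exposures = 46e12        # more than $46 trillion in fiscal exposures
burden_per_person = 156_000    # approximate burden per American
burden_per_worker = 375_000    # approximate burden per full-time worker

# Derived: the population counts these burdens imply
implied_population = total_exposures / burden_per_person  # roughly 295 million
implied_workers = total_exposures / burden_per_worker     # roughly 123 million
```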
As we reported in September 2005, the number of tax expenditures more than doubled since 1974, and the sum of tax expenditure revenue loss estimates tripled in real terms to nearly $730 billion in 2004. Under the most recent estimates, this has risen to more than $775 billion in 2005. Enhanced reporting on tax expenditures would ensure greater transparency and accountability for revenue forgone by the federal government and provide a more comprehensive picture of the federal government’s policies and fiscal position. Further, additional changes are needed to communicate important information to users about current operating results and the long-term financial condition of the U.S. government and annual changes therein. In particular, the government’s financial statements should clearly communicate to the user (1) the on-budget or operating results versus unified budget results for the year; (2) the long-term sustainability of federal government programs—areas to consider include the relationship of the federal government’s existing commitments/responsibilities, including social insurance, to appropriate measures, such as GDP and per capita amounts, the government's long-term fiscal imbalance in relation to appropriate measures, such as GDP, and the magnitude of the potential alternatives for resolving the long-term deficits, such as the rate of tax increases or spending reductions necessary to balance the government's long-term finances; (3) inter-generational equity issues, e.g., assessing the extent to which different age groups may be required to assume financial burdens for commitments already made; and (4) a liability at the governmentwide level for funds held by Social Insurance trust funds. Another tool that would serve to more effectively communicate the federal government’s finances to the public would be a Summary Annual Report. 
Such a report would summarize, in a clear, concise, and transparent manner, key financial and performance information included in the Financial Report of the United States Government. The federal government’s financial condition and long-term fiscal imbalance present enormous challenges to the nation’s ability to respond to emerging forces reshaping American society, the United States’ place in the world, and the future role of the federal government. GAO’s long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society and the significance of the related challenges the government will be called upon to address. Figures 1 and 2 present these simulations under two different sets of assumptions. In figure 1, we start with the Congressional Budget Office’s (CBO) 10-year baseline—constructed according to the statutory requirements for that baseline. Consistent with these requirements, discretionary spending is assumed to grow with inflation for the first 10 years and all tax cuts currently scheduled to expire are assumed to expire. After 2016, discretionary spending is assumed to grow at the same rate as the economy, and revenue is held constant as a share of GDP at the 2016 level. In figure 2, two assumptions are changed: (1) discretionary spending is assumed to grow at the same rate as the economy after 2006 rather than merely with inflation, and (2) all expiring tax provisions are extended. For both simulations, Social Security and Medicare spending is based on the 2005 Trustees’ intermediate cost projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. Medicaid spending is based on CBO’s December 2005 long-term projections under midrange assumptions. 
As these simulations illustrate, absent policy changes on the spending or revenue side of the budget, the growth in spending on federal retirement and health entitlements will encumber an escalating share of the government’s resources. Indeed, when we assume that all the temporary tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues would be adequate to pay only some Social Security benefits and interest on the federal debt. Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. Although revenues will be part of the debate about our fiscal future, closing the gap without changes to Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require at least a doubling of taxes—and that seems highly implausible. Accordingly, substantive reform of Social Security and our major health programs is critical to recapturing our future fiscal flexibility. Ultimately, the nation will have to decide what level of federal benefits and spending it wants and how it will pay for these benefits. Our current path also will increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by future generations. Continuing on this fiscal path will mean escalating and ultimately unsustainable federal deficits and debt that will serve to threaten the standard of living for the American people and ultimately our national security. As these simulations illustrate, regardless of the assumptions used, the problem is too big to be solved by economic growth alone or by making modest changes to existing spending and tax policies. 
Rather, a fundamental reexamination, reprioritization, and reengineering of major spending programs, tax policies, and government priorities will be important to recapture our fiscal flexibility and update our programs and priorities to respond to emerging social, economic, and security changes. Ultimately, this will likely require a national discussion about what Americans want from their government and how much they are willing to pay for those things. According to Statement of Federal Financial Accounting Standards (SFFAS) No. 21, Reporting Corrections of Errors and Changes in Accounting Principles, prior period financial statements presented should be restated only for corrections of errors that caused the financial statements to be materially misstated. Errors in financial statements can result from mathematical mistakes, mistakes in the application of accounting principles, or oversight or misuse of facts that existed at the time the financial statements were prepared. We continue to have concerns about the identification of misstatements in federal agencies’ prior year financial statements. At least 7 of the 24 Chief Financial Officers (CFO) Act agencies restated certain of their fiscal year 2004 financial statements to correct errors. During fiscal year 2005, we reviewed the causes and nature of the restatements that several CFO Act agencies made in fiscal year 2004 to their fiscal year 2003 financial statements and recommended improvements in internal controls and audit procedures to prevent or detect similar errors in the future. Generally, the reasons for the restatements we reviewed were agencies’ lack of effective internal controls over the processing and reporting of certain transactions and the failure of the auditors to design and/or perform adequate audit procedures to detect such errors.
During our review, we noted that the extent of the restatements to the agencies’ fiscal year 2003 financial statements varied from agency to agency, ranging from correcting two line items on an agency’s balance sheet to correcting numerous line items on several of another agency’s financial statements. In some cases, the net operating results of the agency were affected by the restatement. The amounts of the agencies’ restatements ranged from several million dollars to more than $91 billion. Frequent restatements to correct errors can undermine public trust and confidence in both the entity and all responsible parties. Material internal control weaknesses discussed in our fiscal year 2005 audit report serve to increase the risk that additional errors may occur and not be identified on a timely basis by agency management or their auditors, resulting in further restatements. As has been the case for the previous eight fiscal years, the federal government did not maintain adequate systems or have sufficient reliable evidence to support certain material information reported in the U.S. government’s consolidated financial statements. These material deficiencies, which generally have existed for years, contributed to our disclaimer of opinion on the U.S. government’s consolidated financial statements for the fiscal years ended September 30, 2005, and 2004 and also constitute material weaknesses in internal control. Appendix I describes the material deficiencies in more detail and highlights the primary effects of these material weaknesses on the consolidated financial statements and on the management of federal government operations. 
These material deficiencies were the federal government’s inability to satisfactorily determine that property, plant, and equipment and inventories and related property, primarily held by the Department of Defense (DOD), were properly reported in the consolidated financial statements; reasonably estimate or adequately support amounts reported for certain liabilities, such as environmental and disposal liabilities, or determine whether commitments and contingencies were complete and properly reported; support significant portions of the total net cost of operations, most notably related to DOD, and adequately reconcile disbursement activity at certain federal agencies; adequately account for and reconcile intragovernmental activity and balances between federal agencies; ensure that the federal government’s consolidated financial statements were consistent with the underlying audited agency financial statements, balanced, and in conformity with GAAP; and resolve material differences that exist between the total net outlays reported in federal agencies’ Statements of Budgetary Resources and the records used by Treasury to prepare the Statements of Changes in Cash Balance from Unified Budget and Other Activities. Due to the material deficiencies and additional limitations on the scope of our work, as discussed in our audit report, there may also be additional issues that could affect the consolidated financial statements that have not been identified. In addition to the material weaknesses that represented material deficiencies, which were discussed above, we found the following four other material weaknesses in internal control as of September 30, 2005. These weaknesses are discussed in more detail in appendix II, including the primary effects of the material weaknesses on the consolidated financial statements and on the management of federal government operations. 
These material weaknesses were the federal government’s inability to implement effective processes and procedures for properly estimating the cost of certain lending programs, related loan guarantee liabilities, and value of direct loans; determine the extent to which improper payments exist; identify and resolve information security control weaknesses and manage information security risks on an ongoing basis; and effectively manage its tax collection activities. For fiscal year 2005, 18 of 24 CFO Act agencies were able to attain unqualified opinions on their financial statements by the November 15, 2005, reporting deadline established by the Office of Management and Budget (OMB) (see app. III). The independent auditor of the Department of State subsequently withdrew its qualified opinion on the department’s fiscal year 2005 financial statements and reissued an unqualified opinion on such financial statements dated December 14, 2005. As a result, 19 CFO Act agencies received unqualified opinions on their fiscal year 2005 financial statements. However, irrespective of these unqualified opinions, many agencies do not have timely, reliable, and useful financial information and effective controls with which to make informed decisions and ensure accountability on an ongoing basis. The ability to produce the data needed for efficient and effective management of day-to-day operations in the federal government and provide the necessary accountability to taxpayers and the Congress has been a long-standing challenge at most federal agencies. The results of the fiscal year 2005 Federal Financial Management Improvement Act of 1996 (FFMIA) assessments performed by agency inspectors general or their contract auditors show that certain problems continue to affect financial management systems at most CFO Act agencies.
These problems include nonintegrated financial systems, lack of accurate and timely recording of data, inadequate reconciliation procedures, and noncompliance with accounting standards and the U.S. Government Standard General Ledger (SGL). While the problems are much more severe at some agencies than at others, their nature and severity indicate that, overall, management at most CFO Act agencies lacks the complete range of information needed for accountability, performance reporting, and decision making. FFMIA requires auditors, as part of the CFO Act agencies’ financial statement audits, to report whether agencies’ financial management systems substantially comply with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the SGL at the transaction level. The major barrier to achieving compliance with FFMIA continues to be the inability of agencies to meet federal financial management systems requirements, which involve not only core financial systems, but also administrative and programmatic systems. For fiscal year 2005, auditors for 18 of the 24 CFO Act agencies reported that the agencies’ financial management systems did not substantially comply with one or more of the FFMIA requirements noted above. For 5 of the remaining 6 CFO Act agencies, auditors provided negative assurance, meaning that nothing came to their attention indicating that the agencies’ financial management systems did not substantially meet FFMIA requirements. The auditors for these 5 agencies did not definitively state whether the agencies’ systems substantially complied with FFMIA requirements, as the statute requires. In contrast, auditors for the Department of Labor provided positive assurance by stating that, in their opinion, the department’s financial management systems substantially complied with the requirements of FFMIA.
Further, auditors for the Department of Energy and the General Services Administration reported that those agencies’ financial management systems did not substantially comply with FFMIA requirements in fiscal year 2005 due to recently identified internal control weaknesses over financial reporting. The auditors had not reported any FFMIA compliance issues at those 2 federal agencies in fiscal year 2004. As individual agencies move forward with various initiatives to address FFMIA-related problems, it is important that consideration be given to the numerous governmentwide initiatives under way to address long-standing financial management weaknesses. OMB continues to move forward on new initiatives to enhance financial management and provide results-oriented information in the federal government. Two ongoing developments in this area in fiscal year 2005 were the realignment of responsibilities formerly performed by the Joint Financial Management Improvement Program and its Program Management Office and the development of financial management lines of business. The overall vision of these initiatives is to eliminate duplicative roles, streamline financial management improvement efforts, and improve the cost, quality, and performance of financial management systems by leveraging shared services solutions. Three major impediments to our ability to render an opinion on the U.S. government’s consolidated financial statements continued to be: (1) serious financial management problems at DOD, (2) the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal agencies, and (3) the federal government’s ineffective process for preparing the consolidated financial statements. Extensive cooperative efforts between agency chief financial officers, inspectors general, Treasury officials, and OMB officials will be needed to resolve these serious obstacles to achieving an opinion on the U.S.
government’s consolidated financial statements. Essential to improving financial management governmentwide and ultimately to achieving an opinion on the U.S. government’s consolidated financial statements is the resolution of serious weaknesses in DOD’s business operations. DOD’s financial management weaknesses are pervasive, complex, long standing, and deeply rooted in virtually all business operations throughout the department. To date, none of the military services or major DOD components has passed the test of an independent financial audit because of pervasive weaknesses in business management systems, processes, and internal control. Of the 25 areas on GAO’s governmentwide high-risk list, 8 are DOD programs or operations, and the department shares responsibility for 6 other high-risk areas that are governmentwide in scope. These weaknesses adversely affect the department’s (and the federal government’s) ability to control costs; ensure basic accountability; anticipate future costs and claims on the budget; measure performance; maintain funds control; prevent fraud, waste, and abuse; and address pressing management issues. Effective management, reporting, and decision making depend upon information that is timely, reliable, and useful. Recent actions taken by the department to develop an integrated strategy to better understand and initiate efforts to systematically transform and address weaknesses in its business operations are encouraging. On September 28, 2005, DOD approved two key components of its transformation strategy: the Business Enterprise Architecture and the Business Transition Plan. An enterprise architecture should provide a clear and comprehensive picture of an entity, whether it is an organization (e.g., a federal department) or a functional or mission area that cuts across more than one organization (e.g., financial management). 
This picture consists of snapshots of both the enterprise’s current “As Is” operational and technological environment and its target or “To Be” environment. A transition plan should provide the capital investment roadmap for transitioning from the current to the target environment by describing how and when new business systems will be developed and implemented. In November 2005, we reported that while DOD had made important progress toward building a foundation upon which to improve its business operations, it did not fully satisfy the requirements of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005. For example, we reported that the architecture did not address how DOD would comply with federal accounting, financial, and reporting requirements, such as the U.S. Government Standard General Ledger. In late December 2005, DOD issued its Financial Improvement and Audit Readiness (FIAR) Plan, a third major component of its business transformation strategy. According to DOD briefings, the “purpose of the FIAR Plan is to provide a roadmap to guide the department in improving financial management and achieving a clean audit opinion.” Similar to an earlier DOD improvement effort, the Financial Improvement Initiative, the FIAR Plan utilizes an incremental approach to structure its process for examining its operations, diagnosing problems, planning corrective actions, and preparing for audit. However, unlike the previous plan, the FIAR Plan does not establish an overall goal of achieving a clean audit opinion on its departmentwide financial statements by a specific date. Rather, the FIAR Plan appears to recognize that it will take several years before DOD is able to implement the systems, processes, and other changes necessary to fully address its financial management weaknesses.
In the interim, DOD plans to focus its initial efforts on four areas: (1) military equipment, (2) real property, (3) Medicare-eligible retiree health care fund liabilities, and (4) environmental liabilities. The FIAR Plan also focuses on the U.S. Marine Corps and the U.S. Army Corps of Engineers, Civil Works because these organizations intend to be ready for audit in fiscal years 2007 and 2008, respectively. As the FIAR Plan evolves, DOD intends to refine or include additional goals to improve processes and systems related to other balance sheet line items and financial statements. There will need to be ongoing and sustained top management attention to business transformation at DOD to address what are some of the most difficult financial management challenges in the federal government. As we noted in our November 2005 testimony, we continue to believe that the implementation of a new Chief Management Officer position at DOD will be needed in order for the department to succeed in its overall business transformation strategy. We will continue to monitor DOD’s efforts to transform its business operations and address its financial management deficiencies as part of our continuing DOD business enterprise architecture work and our oversight of DOD’s financial statement audit. Federal agencies are unable to adequately account for and reconcile intragovernmental activity and balances. OMB and Treasury require the CFOs of 35 executive departments and agencies to reconcile, on a quarterly basis, selected intragovernmental activity and balances with their trading partners. In addition, these agencies are required to report to Treasury, the agency’s inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts as of the end of the fiscal year. A substantial number of the agencies did not fully perform the required reconciliations for fiscal years 2005 and 2004.
For fiscal year 2005, based on trading partner information provided in the Governmentwide Financial Reporting System discussed below, Treasury produced a “Material Difference Report” for each agency showing amounts for certain intragovernmental activity and balances that significantly differed from those of its corresponding trading partners. After analyzing the fiscal year 2005 “Material Difference Reports,” we noted that a significant number of CFOs were still unable to explain their material differences with their trading partners. For both fiscal years 2005 and 2004, amounts reported by federal agency trading partners for certain intragovernmental accounts were significantly out of balance. As a result, the federal government’s ability to determine the impact of these differences on the amounts reported in the consolidated financial statements is impaired. Resolving the intragovernmental transactions problem remains a difficult challenge and will require a commitment by federal agencies and strong leadership and oversight by OMB. The federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with GAAP. During fiscal year 2005, Treasury continued the ongoing development of a new system, the Governmentwide Financial Reporting System (GFRS), to collect agency financial statement information directly from federal agencies’ audited financial statements. The goal of GFRS is to be able to directly link information from federal agencies’ audited financial statements to amounts reported in the consolidated financial statements, a concept that we strongly support, and to resolve many of the weaknesses we have identified in the process for preparing the consolidated financial statements.
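The check behind a “Material Difference Report” can be understood as a mirror-image comparison: each agency’s reported activity with a trading partner is set against the amount that partner reports for the same relationship, and gaps above a threshold are flagged. The sketch below illustrates that comparison with entirely hypothetical agency names, amounts, and threshold; it is not Treasury’s actual process or system.

```python
# Hedged sketch of a trading-partner difference check.
# Agency names, amounts, and the threshold are hypothetical.

# Each record: (reporting agency, trading partner) -> amount the
# reporting agency reports for its activity with that partner.
reported = {
    ("Agency A", "Agency B"): 500.0,  # A reports 500 receivable from B
    ("Agency B", "Agency A"): 470.0,  # B reports only 470 payable to A
    ("Agency A", "Agency C"): 120.0,
    ("Agency C", "Agency A"): 120.0,  # A and C agree; no difference
}

THRESHOLD = 10.0  # hypothetical materiality threshold

def material_differences(records, threshold):
    """Return trading-partner pairs whose mirror-image amounts differ by more than threshold."""
    flagged = {}
    for (agency, partner), amount in records.items():
        mirror = records.get((partner, agency))
        if mirror is None:
            continue  # partner reported nothing for this relationship
        diff = abs(amount - mirror)
        pair = tuple(sorted((agency, partner)))
        if diff > threshold:
            flagged[pair] = diff
    return flagged

print(material_differences(reported, THRESHOLD))
# {('Agency A', 'Agency B'): 30.0}
```

When both sides of every relationship tie out, the dictionary is empty; out-of-balance pairs like the hypothetical A–B relationship above are what a CFO would then have to research and explain.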
For the fiscal year 2005 reporting process, Treasury’s GFRS was able to capture certain agency financial information from agencies’ audited financial statements, but GFRS was still not at the stage that it could be used to fully compile the consolidated financial statements from the information captured. Treasury did, however, make progress in demonstrating that amounts in the consolidated Balance Sheet and Statement of Net Cost were consistent with federal agencies’ audited financial statements prior to eliminating intragovernmental activity and balances. In closing, given the federal government’s overall financial condition and long-term fiscal imbalance, the need for the Congress and the President to have timely, reliable, and useful financial and performance information is greater than ever. Sound decisions on the current results and future direction of vital federal government programs and policies are made more difficult without such information. Until the problems discussed in our audit report are adequately addressed, they will continue to have adverse implications for the federal government and the taxpayers. It will also be key that the appropriations, budget, authorizing, and oversight committees hold agency top leadership accountable for resolving these problems and that they support improvement efforts. Addressing the nation’s long-term fiscal imbalance constitutes a major transformational challenge that may take a generation or more to resolve. Given the size of the projected deficit, the U.S. government will not be able to grow its way out of this problem—tough choices will be required. Traditional incremental approaches to budgeting will need to give way to more fundamental and periodic reexaminations of the base of government. Our report, 21st Century Challenges: Reexamining the Base of the Federal Government, is intended to support the Congress in identifying issues and options that could help address these fiscal pressures. 
Further, the Congress needs to have access to the long-term cost of selected spending and tax proposals before it enacts related laws. The fiscal risks previously mentioned can be managed only if they are properly accounted for and publicly disclosed, including the many existing commitments facing the federal government. New reporting approaches, as well as enhanced budget processes and control mechanisms, are needed to better understand, monitor, and manage the impact of spending and tax policies over the long term. In addition, a set of key national, outcome-based performance metrics would inform strategic planning, enhance performance and accountability reporting, and help to assess the impact of various spending programs and tax policies. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For further information regarding this testimony, please contact Jeffrey C. Steinhoff, Managing Director, and Gary T. Engel, Director, Financial Management and Assurance, at (202) 512-2600. The continuing material deficiencies discussed below contributed to our disclaimer of opinion on the federal government’s consolidated financial statements for fiscal years 2005 and 2004. The federal government did not maintain adequate systems or have sufficient, reliable evidence to support information reported in the consolidated financial statements, as described below. The federal government could not satisfactorily determine that property, plant, and equipment (PP&E) and inventories and related property were properly reported in the consolidated financial statements. Most of the PP&E and inventories and related property are the responsibility of the Department of Defense (DOD). As in past years, DOD did not maintain adequate systems or have sufficient records to provide reliable information on these assets.
Other agencies, most notably the National Aeronautics and Space Administration, reported continued weaknesses in internal control procedures and processes related to PP&E. Without reliable asset information, the federal government does not fully know the assets it owns and their location and condition and cannot effectively (1) safeguard assets from physical deterioration, theft, or loss; (2) account for acquisitions and disposals of such assets; (3) ensure that the assets are available for use when needed; (4) prevent unnecessary storage and maintenance costs or purchase of assets already on hand; and (5) determine the full costs of programs that use these assets. The federal government could not reasonably estimate or adequately support amounts reported for certain liabilities. For example, DOD was not able to estimate with assurance key components of its environmental and disposal liabilities. In addition, DOD could not support a significant amount of its estimated military postretirement health benefits liabilities included in federal employee and veteran benefits payable. These unsupported amounts related to the cost of direct health care provided by DOD-managed military treatment facilities. Further, the federal government could not determine whether commitments and contingencies, including those related to treaties and other international agreements entered into to further the U.S. government’s interests, were complete and properly reported. Problems in accounting for liabilities affect the determination of the full cost of the federal government’s current operations and the extent of its liabilities. Also, improperly stated environmental and disposal liabilities and weak internal control supporting the process for their estimation affect the federal government’s ability to determine priorities for cleanup and disposal activities and to appropriately consider future budgetary resources needed to carry out these activities. 
In addition, when disclosures of commitments and contingencies are incomplete or incorrect, reliable information is not available about the extent of the federal government’s obligations. The previously discussed material deficiencies in reporting assets and liabilities, material deficiencies in financial statement preparation, as discussed below, and the lack of adequate disbursement reconciliations at certain federal agencies affect reported net costs. As a result, the federal government was unable to support significant portions of the total net cost of operations, most notably related to DOD. With respect to disbursements, DOD and certain other federal agencies reported continued weaknesses in reconciling disbursement activity. For fiscal years 2005 and 2004, there was unreconciled disbursement activity, including unreconciled differences between federal agencies’ and the Department of the Treasury’s records of disbursements and unsupported federal agency adjustments, totaling billions of dollars, which could also affect the balance sheet. Unreliable cost information affects the federal government’s ability to control and reduce costs, assess performance, evaluate programs, and set fees to recover costs where required. Improperly recorded disbursements could result in misstatements in the financial statements and in certain data provided by federal agencies for inclusion in the President’s budget concerning obligations and outlays. Federal agencies are unable to adequately account for and reconcile intragovernmental activity and balances. The Office of Management and Budget (OMB) and Treasury require the Chief Financial Officers (CFO) of 35 executive departments and agencies to reconcile, on a quarterly basis, selected intragovernmental activity and balances with their trading partners. 
In addition, these agencies are required to report to Treasury, the agency’s inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts as of the end of the fiscal year. A substantial number of the agencies did not fully perform the required reconciliations for fiscal years 2005 and 2004. For these fiscal years, based on trading partner information provided in the Governmentwide Financial Reporting System (GFRS), Treasury produced a “Material Difference Report” for each agency showing amounts for certain intragovernmental activity and balances that significantly differed from those of its corresponding trading partners. After analysis of the “Material Difference Reports” for fiscal year 2005, we noted that a significant number of CFOs were still unable to explain the differences with their trading partners. For both fiscal years 2005 and 2004, amounts reported by federal agency trading partners for certain intragovernmental accounts were significantly out of balance. In addition, about 25 percent of the significant federal agencies reported internal control weaknesses regarding reconciliations of intragovernmental activity and balances. As a result, the federal government’s ability to determine the impact of these differences on the amounts reported in the consolidated financial statements is impaired. Fiscal year 2005 was the second year that Treasury used its GFRS to collect agency financial statement information taken directly from federal agencies’ audited financial statements. The goal of GFRS is to be able to directly link information from federal agencies’ audited financial statements to amounts reported in the U.S. government’s consolidated financial statements and resolve many of the weaknesses we previously identified in the process for preparing the consolidated financial statements.
For both the fiscal year 2005 and 2004 reporting processes, GFRS was able to capture agency financial information, but GFRS was still not at the stage that it could be used to fully compile the consolidated financial statements from the information captured. Therefore, for fiscal year 2005 Treasury continued to primarily use manual procedures to prepare the consolidated financial statements. As discussed in the scope limitations section of our audit report, Treasury could not produce the fiscal year 2005 consolidated financial statements and supporting documentation in time for us to complete all of our planned auditing procedures. In addition, the federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, balanced, and in conformity with U.S. generally accepted accounting principles (GAAP). Specifically, during our fiscal year 2005 audit, we found the following: Treasury’s process for compiling the consolidated financial statements did not ensure that the information in all of the five principal financial statements and notes was fully consistent with the underlying information in federal agencies’ audited financial statements and other financial data. Treasury made progress in demonstrating that amounts in the Balance Sheet and the Statement of Net Cost were consistent with federal agencies’ audited financial statements prior to eliminating intragovernmental activity and balances. However, about 25 percent of the significant federal agencies’ auditors reported internal control weaknesses related to the processes the agencies perform to provide financial statement information to Treasury for preparing the consolidated financial statements.
To make the fiscal years 2005 and 2004 consolidated financial statements balance, Treasury recorded a net $4.3 billion decrease and a net $3.4 billion increase, respectively, to net operating cost on the Statements of Operations and Changes in Net Position, which it labeled “Unreconciled Transactions Affecting the Change in Net Position.” An additional net $3.2 billion and $1.2 billion of unreconciled transactions were recorded in the Statement of Net Cost for fiscal years 2005 and 2004, respectively. Treasury is unable to fully identify and quantify all components of these unreconciled activities. The federal government did not have an adequate process to identify and report items needed to reconcile the operating results, which for fiscal year 2005 showed a net operating cost of $760 billion, to the budget results, which for the same period showed a unified budget deficit of $318.5 billion. In addition, a net $13.2 billion “net amount of all other differences” was needed to force this statement into balance. Treasury’s ability to eliminate certain intragovernmental activity and balances continues to be impaired by the federal agencies’ problems in handling their intragovernmental transactions. As discussed above, amounts reported for federal agency trading partners for certain intragovernmental accounts were significantly out of balance, resulting in the need for unsupported intragovernmental elimination entries in order to force the Statement of Operations and Changes in Net Position into balance. In addition, significant differences in other intragovernmental accounts, primarily related to transactions with the General Fund, have not been reconciled and still remain unresolved. Therefore, the federal government continues to be unable to determine the impact of unreconciled intragovernmental activity and balances on the consolidated financial statements. 
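An “unreconciled transactions” line is, in effect, a residual: whatever amount remains after all identified reconciling items between the accrual-based operating result and the cash-based budget result are applied must be recorded as a plug to force the statement into balance. The arithmetic can be sketched as below; the individual reconciling items are hypothetical values chosen only so the residual mirrors the $13.2 billion figure mentioned in the text, and they do not represent the actual line items in the Reconciliations of Net Operating Cost and Unified Budget Deficit.

```python
# Hedged sketch: computing the residual ("plug") needed to force a
# reconciliation of accrual and budget results into balance.
# The identified reconciling items below are hypothetical.

net_operating_cost = 760.0      # accrual-based result, $ billions (fiscal year 2005)
unified_budget_deficit = 318.5  # cash-based result, $ billions (fiscal year 2005)

# Identified reconciling items (hypothetical values), e.g., accrued
# costs not yet paid in cash, depreciation in lieu of capital outlays.
identified_items = {
    "change in benefit liabilities": 290.0,
    "depreciation less capital outlays": 95.0,
    "other identified items": 43.3,
}

explained = sum(identified_items.values())

# Whatever the identified items do not explain becomes the
# unreconciled residual that forces the statement to balance.
residual = net_operating_cost - unified_budget_deficit - explained
print(round(residual, 1))  # 13.2
```

The concern the audit raises is precisely that such residuals cannot be fully identified and quantified, so the plug masks unknown errors rather than representing a known, supportable reconciling item.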
Treasury lacked a process to ensure that the fiscal years 2005 and 2004 consolidated financial statements and notes were comparable. Certain information reported for fiscal year 2004 may require reclassification to be comparable to the fiscal year 2005 amounts. However, Treasury did not analyze this information or reclassify amounts within various financial statement line items and notes to enhance comparability. For example, the Reconciliations of Net Operating Cost and Unified Budget Deficit showed $47.8 billion and $0.2 billion for property, plant, and equipment disposals and revaluations for fiscal years 2005 and 2004, respectively. However, based on the financial information provided by agencies to Treasury in GFRS, the fiscal year 2004 amount would be $25.4 billion. The difference would be reclassified from the net amount of all other differences line item on the Reconciliations of Net Operating Cost and Unified Budget Deficit. Treasury did not have an adequate process to ensure that the financial statements, related notes, Stewardship Information, and Supplemental Information are presented in conformity with GAAP. For example, we found that certain financial information required by GAAP was not disclosed in the consolidated financial statements. Treasury submitted a proposal to the Federal Accounting Standards Advisory Board (FASAB) seeking to amend previously issued standards and eliminate or lessen the disclosure requirements for the consolidated financial statements so that GAAP would no longer require certain of the information that Treasury has not been reporting. Comments on an exposure draft of a proposed FASAB standard, which is based on the Treasury proposal, are due to FASAB today. Treasury stated that it is waiting for FASAB approval and issuance of this proposed standard to determine the disclosures that will be required in future consolidated financial statements.
Because Treasury did not provide us with adequate documentation of its rationale for excluding the currently required information, and because of certain of the material deficiencies noted above, we were again unable to determine whether the missing information was material to the consolidated financial statements. Information system weaknesses existed within the segments of GFRS that were used during the fiscal years 2005 and 2004 reporting processes. We found that the GFRS database (1) was not configured to prevent the alteration of data submitted by federal agencies and (2) was used for both production and testing during the reporting processes. Therefore, information submitted by federal agencies within GFRS is not adequately protected against unauthorized modification or loss. In addition, Treasury was unable to explain why numerous GFRS users appeared to have inappropriate access to GFRS agency information or to demonstrate that appropriate segregation of duties existed. Although Treasury made progress in addressing them, certain other internal control weaknesses in its process for preparing the consolidated financial statements continued to exist and involved a lack of (1) appropriate documentation of certain policies and procedures for preparing the consolidated financial statements, (2) adequate supporting documentation for certain adjustments made to the consolidated financial statements, and (3) necessary management reviews. The consolidated financial statements include financial information for the executive, legislative, and judicial branches, to the extent that federal agencies within those branches have provided Treasury such information. However, there are undetermined amounts of assets, liabilities, costs, and revenues that are not included, and the federal government did not provide evidence or disclose in the consolidated financial statements that the excluded financial information was immaterial. 
Treasury did not have the infrastructure to address the magnitude of the fiscal year 2005 financial reporting challenges it faced, such as an incomplete financial reporting system, compressed time frames for compiling the financial information, and a lack of adequate internal control over the financial statement preparation process. We found that personnel at Treasury’s Financial Management Service had excessive workloads that required an extraordinary amount of effort and dedication to compile the consolidated financial statements; however, there were not enough personnel with specialized financial reporting experience to ensure reliable financial reporting by the reporting date. Treasury, in coordination with OMB, had not provided us with adequate documentation evidencing an executable plan of action and milestones for short-term and long-range solutions for certain internal control weaknesses we have previously reported regarding the process for preparing the consolidated financial statements. OMB Circular A-136, Financial Reporting Requirements, which incorporated and updated OMB Bulletin No. 01-09, Form and Content of Agency Financial Statements, states that outlays in federal agencies’ Statement of Budgetary Resources (SBR) should agree with the net outlays reported in the Budget of the United States Government. In addition, Statement of Federal Financial Accounting Standards No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, requires explanation of any material differences between the information required to be disclosed (including net outlays) in the financial statements and the amounts described as “actual” in the Budget of the United States Government. 
The federal government reported in the Statement of Changes in Cash Balance from Unified Budget and Other Activities (Statement of Changes in Cash Balance) and the Reconciliations of Net Operating Cost and Unified Budget Deficit (Reconciliation Statement) budget deficits for fiscal years 2005 and 2004 of $318.5 billion and $412.3 billion, respectively. The budget deficit is calculated by subtracting actual budget receipts from actual budget outlays. As we have reported since fiscal year 2003, we found material unreconciled differences between the total net outlays reported in selected federal agencies’ SBRs and Treasury’s central accounting records, which it uses to prepare the Statement of Changes in Cash Balance. Treasury’s processes for preparing the Statement of Changes in Cash Balance do not include procedures for identifying and resolving differences between its central accounting records and net outlay amounts reported in agencies’ SBRs. In fiscal year 2004, we noted reported internal control weaknesses regarding certain agencies’ SBRs. In fiscal year 2005, several agencies’ auditors reported internal control weaknesses (1) affecting the agencies’ SBRs and (2) relating to monitoring, accounting, and reporting of budgetary transactions. These weaknesses could affect the reporting and calculation of the net outlay amounts in the agencies’ SBRs. Such weaknesses also impair agencies’ ability to report reliable budgetary information to Treasury and OMB and may affect the unified budget outlays reported by Treasury in its Combined Statement of Receipts, Outlays, and Balances and certain amounts reported in the Budget of the United States Government. OMB has been working with agencies to reduce the differences between the total net outlays reported in the federal agencies’ SBRs and the Statement of Changes in Cash Balance. 
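The deficit computation described above is a simple subtraction of receipts from outlays. The sketch below illustrates it; the receipts and outlays figures are hypothetical round numbers chosen so that their difference matches the reported fiscal year 2005 deficit of $318.5 billion, not the official totals:

```python
def unified_budget_deficit(outlays, receipts):
    """Deficit (a surplus if negative) = actual budget outlays minus actual budget receipts."""
    return outlays - receipts

# Hypothetical figures in billions of dollars, chosen only so the
# difference matches the reported FY 2005 deficit of $318.5 billion.
outlays = 2472.0   # illustrative, not the official total
receipts = 2153.5  # illustrative, not the official total
print(unified_budget_deficit(outlays, receipts))  # 318.5
```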
In June 2005, OMB issued its report Differences Between FY 2004 Budget Execution Reports and Financial Statements for CFO Act Agencies, which discusses various types of differences in federal agency financial statements and budget execution reports, including net outlays, and makes recommendations for OMB and federal agencies to consider in improving both sets of reports in the future. Until the material differences between the total net outlays reported in the federal agencies’ SBRs and the records used to prepare the Statement of Changes in Cash Balance are timely reconciled, the effect of these differences on the U.S. government’s consolidated financial statements will be unknown. The federal government did not maintain effective internal control over financial reporting (including safeguarding assets) and compliance with significant laws and regulations as of September 30, 2005. In addition to the material deficiencies discussed in appendix I, we found the following four other material weaknesses in internal control. Federal agencies continue to have material weaknesses and reportable conditions related to their lending activities. The Department of Housing and Urban Development lacked adequate management reviews of underlying data and cost estimation methodologies, which resulted in material errors going undetected, and significant adjustments were needed. In addition, the Department of Education’s processes do not provide for a robust budget-to-actual cost comparison or facilitate assessments of the validity of its lending program cost estimates. While the Small Business Administration made substantial progress to improve its cost-estimation processes, additional improvements are still needed to ensure that year-end reporting is accurate. 
These deficiencies plus others at the Department of Agriculture relating to the processes and procedures for estimating program costs continue to adversely affect the federal government’s ability to support annual budget requests for these programs, make future budgetary decisions, manage program costs, and measure the performance of lending activities. Further, these weaknesses and the complexities associated with estimating the costs of lending activities greatly increase the risk that significant errors in agency and governmentwide financial statements could occur and go undetected. While agencies have made progress in implementing processes and controls to identify, estimate, and reduce improper payments, such improper payments are a long-standing, widespread, and significant problem in the federal government. The Congress acknowledged this problem by passing the Improper Payments Information Act of 2002 (IPIA). The IPIA requires agencies to review all programs and activities, identify those that may be susceptible to significant improper payments, estimate and report the annual amount of improper payments for those programs, and implement actions to cost-effectively reduce improper payments. Further, in fiscal year 2005, the Office of Management and Budget (OMB) began to separately track the elimination of improper payments under the President’s Management Agenda. Significant challenges remain to effectively achieve the goals of the IPIA. From our review of agencies’ fiscal year 2005 Performance and Accountability Reports (PARs), we noted that some agencies still have not instituted a systematic method of reviewing all programs and activities, have not identified all programs susceptible to significant improper payments, and/or have not annually estimated improper payments for their high-risk programs. 
For example, seven major agency programs with outlays totaling about $280 billion, including Medicaid and the Temporary Assistance for Needy Families programs, still cannot annually estimate improper payments, even though they were required by OMB to report such information beginning with their fiscal year 2003 budget submissions. In addition, two agency auditors that tested compliance with IPIA cited agency noncompliance with the act in their annual audit reports. Federal agencies’ estimates of improper payments, based on available information, for fiscal year 2005 exceeded $38 billion, a net decrease of about $7 billion, or 16 percent, from the prior year improper payment estimate of $45 billion. This decrease was attributable to the following factors. In fiscal year 2005, the Department of Health and Human Services reported a $9.6 billion decrease in its Medicare program improper payment estimate, principally due to improvements in its due diligence with providers to ensure the necessary documentation is in place to support payment claims. However, in fiscal year 2005, this decrease was partially offset as a result of more programs reporting estimates of improper payments. Although progress has been made, serious and widespread information security control weaknesses continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. GAO has reported information security as a high-risk area across government since February 1997. Such information security control weaknesses could result in compromising the reliability and availability of data that are recorded in or transmitted by federal financial management systems. 
A primary reason for these weaknesses is that federal agencies have not yet fully institutionalized comprehensive security management programs, which are critical to identifying information security control weaknesses, resolving information security problems, and managing information security risks on an ongoing basis. The Congress has shown continuing interest in addressing these risks, as evidenced with hearings on Federal Information Security Management Act of 2002 implementation and information security. In addition, the administration has taken important actions to improve information security, such as revising agency internal control requirements in OMB Circular A-123 and issuing extensive guidance on information security. Material internal control weaknesses and systems deficiencies continue to affect the federal government’s ability to effectively manage its tax collection activities, an issue that has been reported in our financial statement audit reports for the past 8 years. Due to errors and delays in recording taxpayer information, payments, and other activities, taxpayers were not always credited for payments made on their taxes owed, which could result in undue taxpayer burden. In addition, the federal government did not always follow up on potential unreported or underreported taxes and did not always pursue collection efforts against taxpayers owing taxes to the federal government. Weaknesses in controls over tax collection activities continue to affect the federal government’s ability to efficiently and effectively account for and collect revenue. Additionally, weaknesses in financial reporting of revenues affect the federal government’s ability to make informed decisions about collection efforts. As a result, the federal government is vulnerable to loss of tax revenue and exposed to potentially billions of dollars in losses due to inappropriate refund disbursements.
GAO is required by law to annually audit the consolidated financial statements of the U.S. government. The Congress and the President need to have timely, reliable, and useful financial and performance information. Sound decisions on the current results and future direction of vital federal government programs and policies are made more difficult without such information. Until the problems discussed in GAO's audit report on the U.S. government's consolidated financial statements are adequately addressed, they will continue to (1) hamper the federal government's ability to reliably report a significant portion of its assets, liabilities, costs, and other information; (2) affect the federal government's ability to reliably measure the full cost as well as the financial and nonfinancial performance of certain programs and activities; (3) impair the federal government's ability to adequately safeguard significant assets and properly record various transactions; and (4) hinder the federal government from having reliable financial information to operate in an economical, efficient, and effective manner. For the ninth consecutive year, certain material weaknesses in internal control and in selected accounting and financial reporting practices resulted in conditions that continued to prevent GAO from being able to provide the Congress and the American people an opinion as to whether the consolidated financial statements of the U.S. government are fairly stated in conformity with U.S. generally accepted accounting principles. Three major impediments to an opinion on the consolidated financial statements continued to be (1) serious financial management problems at the Department of Defense, (2) the federal government's inability to adequately account for and reconcile intragovernmental activity and balances between federal agencies, and (3) the federal government's ineffective process for preparing the consolidated financial statements. 
Further, in our opinion, as of September 30, 2005, the federal government did not maintain effective internal control over financial reporting and compliance with significant laws and regulations due to numerous material weaknesses. More troubling still is the federal government's overall financial condition and long-term fiscal imbalance. While the fiscal year 2005 budget deficit was lower than fiscal year 2004's, it was still very high, especially given the impending retirement of the "baby boom" generation and rising health care costs. Importantly, as reported in the fiscal year 2005 Financial Report of the United States Government, the federal government's accrual-based net operating cost--the cost to operate the federal government--increased to $760 billion in fiscal year 2005 from $616 billion in fiscal year 2004. This represents an increase of about $144 billion, or 23 percent. The federal government's gross debt was about $8 trillion as of September 30, 2005. This number excludes such items as the gap between the present value of future promised and funded Social Security and Medicare benefits, veterans' health care, and a range of other liabilities, commitments, and contingencies that the federal government has pledged to support. Including these items, the federal government's fiscal exposures now total more than $46 trillion, representing close to four times gross domestic product (GDP) in fiscal year 2005 and up from about $20 trillion, or two times GDP, in 2000. Given these and other factors, a fundamental reexamination of major spending programs, tax policies, and government priorities will be important and necessary to put us on a prudent and sustainable fiscal path. This will likely require a national discussion about what Americans want from their government and how much they are willing to pay for those things. We continue to have concerns about the identification of misstatements in federal agencies' prior year financial statements. 
Frequent restatements to correct errors can undermine public trust and confidence in both the entity and all responsible parties. The material internal control weaknesses discussed in this testimony serve to increase the risk that additional errors may occur and not be identified on a timely basis by agency management or their auditors, resulting in further restatements.
Under the Government Performance and Results Act of 1993 (GPRA), federal agencies are expected to focus on achieving results and to demonstrate, in annual performance reports and budget requests, how their activities help achieve agency or governmentwide goals. In 2002, to encourage greater use of program performance information in decision making, the Office of Management and Budget (OMB) created the Program Assessment Rating Tool (PART). PART was intended to provide a consistent approach for evaluating federal programs within the executive budget formulation process. However, because PART conclusions rely on available program performance and evaluation information, many of the initial recommendations focused on improving outcome and efficiency measures. Although GPRA and PART helped improve the availability of better performance measures, we and OMB have noted that this did not result in their greater use by the Congress or agencies. In October 2009, OMB announced a plan to strengthen federal program evaluation, noting that rigorous independent program evaluations can help determine whether government programs are achieving their intended outcomes as well as possible and at the lowest possible cost. Program evaluations are systematic studies that assess how well a program is working, and they are individually tailored to address the client’s research question. Process (or implementation) evaluations assess the extent to which a program is operating as intended. Outcome evaluations assess the extent to which a program is achieving its outcome-oriented objectives but may also examine program processes to understand how outcomes are produced. When external factors such as economic or environmental conditions are known to influence a program’s outcomes, impact evaluations may be used to measure a program’s net effect by comparing outcomes with an estimate of what would have occurred had there been no program intervention. 
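The impact-evaluation logic described above can be sketched as a comparison of mean outcomes between a treated group and a comparison group standing in for the no-program counterfactual. The data below are synthetic, invented purely for illustration; real impact evaluations use far larger samples and stronger designs (such as random assignment) to justify the counterfactual:

```python
# Minimal sketch of an impact-evaluation comparison using synthetic data:
# the program's estimated net effect is the treated group's mean outcome
# minus the comparison group's mean outcome (a stand-in for what would
# have occurred had there been no program intervention).
def mean(xs):
    return sum(xs) / len(xs)

treated_outcomes = [72, 68, 75, 70]     # synthetic outcome scores for participants
comparison_outcomes = [65, 63, 67, 61]  # synthetic counterfactual estimate

net_effect = mean(treated_outcomes) - mean(comparison_outcomes)
print(net_effect)  # 7.25
```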
Thus, program evaluation can provide an important complement to agency performance data that simply track progress toward goals. In announcing the evaluation initiative, the OMB Director expressed concern that many important programs had never been evaluated, evaluations had not sufficiently shaped budget priorities or management practices, and many agencies lacked an evaluation office capable of supporting an ambitious strategic research agenda. The initiative consisted of three efforts: posting information online on all agencies’ planned and ongoing impact evaluations, establishing an interagency group to promote the sharing of evaluation expertise, and funding some new agency rigorous impact evaluations and capacity strengthening efforts. As part of the fiscal year 2011 budget process, OMB allocated approximately $100 million for the evaluation initiative to support 35 rigorous program evaluations and evaluation capacity-building proposals. OMB made a similar evaluation solicitation for the fiscal year 2012 budget, in which nonsecurity agencies were asked to reduce their discretionary budgets by 5 percent. The budget process evaluation initiative is focused on impact evaluations and is not intended to cover the full range of an agency’s evaluation activities. However, to be considered for additional evaluation funding, agencies must demonstrate that they are both using existing evaluation resources effectively and beginning to integrate evaluation into program planning and implementation. With significant efforts under way to increase agencies’ evaluation resources, it is especially timely now to learn how agencies with more evaluation experience prioritize their resources. A recent GAO review identified three elements that leading national research organizations consider essential to a sound federal research and evaluation program: research independence, transparency and accountability, and policy relevance. 
These elements align well with OMB’s new evaluation initiative and expectations for a better integration of evaluation into program design and management. In this report, we do not assess the quality of the agencies’ research agendas or their achievement of these objectives. However, we do describe practices these agencies took that were designed to achieve those elements. The Department of Education establishes policy for, administers, and coordinates most federal assistance to elementary, secondary, and postsecondary education. The department has supported educational research, evaluation, and dissemination not only since the Congress created it in 1979 but also earlier, when it was the Office of Education. For several years, two central offices in the Department of Education have been responsible for program and policy evaluation. The Policy and Program Studies Service (PPSS), in the Office of Planning, Evaluation, and Policy Development (OPEPD), advises the Secretary on policy development and review, strategic planning, performance measurement, and evaluation. The Institute of Education Sciences (IES), established in 2002 (replacing the Office of Educational Research and Improvement), is the research arm of the department. IES is charged with producing rigorous evidence on which to ground education practice and policy, with program evaluation housed primarily in the National Center for Education Evaluation and Regional Assistance (NCEE). In 2009, the Department of Education launched a review of its evaluation activities, comparing them to those of other government agencies, seeking to build analytic capacity, and intending to use available knowledge and evidence more effectively. This review resulted in a comprehensive, departmentwide evaluation planning process and clarified the distinct evaluation responsibilities of the two offices. 
Starting in 2010, OPEPD was to lead the planning process, in partnership with IES, to identify the department’s key priorities for evaluation and related knowledge-building activities. Starting in fiscal year 2011, NCEE in IES will be responsible for longer-term (18 months or longer) program implementation and impact studies, while PPSS in OPEPD will focus on shorter-term evaluation activities (fewer than 18 months), policy analysis, performance measurement, and knowledge management activities. Some program offices also conduct evaluation activities separate from studies conducted by either of the central offices, such as supporting grantee evaluations or analyzing grantee performance data for smaller programs where larger-scale evaluations are not practical. The Department of Housing and Urban Development is the principal federal agency responsible for programs on housing needs, fair housing opportunities, and community improvement and development. It insures home mortgages, subsidizes housing for low- and moderate-income families, promotes and enforces fair housing and equal opportunity housing, and provides grants to states and communities to aid community development. At HUD, program evaluation is primarily centralized in one office—the Office of Policy Development and Research (PD&R)—created in 1973. It conducts a mix of surveys, independent research, demonstrations, policy analyses, and short- and long-term evaluations that inform HUD’s decisions on policies, programs, and budget and legislative proposals. PD&R provides HUD’s program offices with technical support, data, and materials relevant to their programs. Although the primary responsibility for evaluating programs falls to PD&R, some evaluation is found in program offices, such as the Office of Housing, which routinely conducts analyses to update its loan performance models for assessing credit risk and the value of its loan portfolio. 
In 2006, the Congress, concerned about the quality of HUD research, commissioned the National Research Council (NRC) to evaluate the PD&R office and provide recommendations regarding the course of future HUD research. A 2008 NRC report noted declining resources for data collection and research and insufficient external input to its research agenda. On the heels of the report, the scope of the current economic and housing crisis led the incoming administration to acknowledge a need both to reform and transform HUD and to sustain a commitment of flexible budget resources for these efforts. In 2009, HUD proposed a departmentwide Transformation Initiative of organizational and program management improvements to position HUD as a high-performing organization. In fiscal year 2010, much of PD&R’s research and evaluation activity is funded through a set-aside created for the initiative, which also supports program measures, demonstrations, technical assistance, and information technology projects. Evaluation planning is decentralized at the Department of Health and Human Services. We reviewed ACF and CDC because they have significant evaluation experience. HHS’s centrally located Office of the Assistant Secretary for Planning and Evaluation (ASPE) coordinates agency evaluation activities, reports to the Congress on the department’s evaluations, and conducts studies on broad, cross-cutting issues while relying on agencies to evaluate their own programs. In some cases, ASPE conducts independent evaluations of programs housed within other HHS operating and staff divisions (for example, ACF and CDC). ACF oversees and helps finance programs to improve the economic and social well-being of families, individuals, and communities—the Head Start program is an example. It also assists state programs for child support enforcement as well as Temporary Assistance for Needy Families (TANF). 
The Office of Planning, Research, and Evaluation (OPRE) is the principal office for managing evaluation at ACF. It also provides guidance, analysis, technical assistance, and oversight related to strategic planning, performance measurement, research, and evaluation methods. It conducts statistical, policy, and program analyses and synthesizes and disseminates research and demonstration findings. OPRE consults with outside groups on ideas that feed into program and evaluation planning. In each policy area with substantial evaluation resources, OPRE consults with a group of researchers, program partners, and other content area experts who share their knowledge and ideas for research and evaluation. CDC, as part of the Public Health Service, is charged with protecting the public health by developing and providing to persons and communities information and tools for preventing and controlling disease, promoting health, and preparing for new health threats. It supports some evaluation activities through the Public Health Service (PHS) evaluation set-aside; in 2010 the Secretary was authorized to use up to 2.5 percent of appropriations for evaluating programs funded under the PHS Act. The set-aside is also used to fund databases of the National Center for Health Statistics and programs that cut across CDC’s divisions. Presently, the divisions within CDC control most evaluation funding focused on their respective programs, but evaluation planning across CDC is currently under review. CDC recently created an Office of the Associate Director for Program which will have responsibility for supporting performance measurement and evaluation across CDC, among other duties. We interviewed staff from evaluation offices in three CDC divisions: Nutrition, Physical Activity, and Obesity (DNPAO); HIV/AIDS Prevention (DHAP); and Adolescent and School Health (DASH). These three divisions oversee cooperative agreements with state and local agencies and plan a portfolio of evaluations. 
CDC officials suggested that variation in evaluation planning in these three offices could provide insight into how CDC’s centers generally prioritize evaluations to conduct. DNPAO is charged with leading strategic public health efforts to prevent and control obesity, chronic disease, and other health conditions through physical activity and healthy eating. DNPAO supports the First Lady’s Let’s Move! campaign to curb childhood obesity, which is considered an important public health issue but has a limited body of research on effective practices. The Nutrition, Physical Activity, and Obesity Program is a cooperative agreement between CDC and 25 state health departments to support a range of activities, including process and outcome evaluations. A consulting group of state evaluators, outside experts, and divisional representation advises DNPAO on proposing evaluation projects that would be useful to grantees and the divisions. DHAP, charged with leadership in helping control the HIV/AIDS epidemic, has a fairly large program evaluation branch that supports national performance monitoring and evaluation planning. The evaluation branch is responsible for monitoring CDC-funded HIV prevention programs, including 65 health units and 150 community organizations. Within the branch, the Evaluation Studies Team conducts specific evaluations of interest and in-depth process evaluations and outcome monitoring studies of selected HIV prevention interventions delivered by community-based organizations, state and local health departments, and health-care providers. In addition to the Division’s strategic plan, the governmentwide National HIV/AIDS Strategy for the United States, released in July 2010, informs evaluation planning. DHAP’s work is also shaped by an advisory committee and findings from an external peer review that provided input into programs and evaluations through the strategic plan. 
DASH is considered somewhat unusual among CDC’s divisions because it is not focused on a disease or exposure but has a mission to promote the health and well-being of a particular population—children and adolescents. DASH funds tribal governments and state, territorial, and local educational agencies to address child and adolescent health issues, including nutrition, risky sexual behavior, tobacco prevention, school infrastructure, and asthma management. DASH typically funds evaluations in one health risk area each year. Its framework, Coordinated School Health, involves community, family, teachers, and schools in addressing a diverse set of health issues. It also partners with nongovernmental and community-based organizations to reach children who are not in school. DASH supports rapid evaluations to identify innovative programs and practices. These evaluations typically last 12 to 24 months, and data are collected within a school calendar year. The evaluation team also has a small portfolio of evaluation research that includes large longitudinal randomized controlled trials that assess effectiveness over a 5-to-6-year period. The agencies we reviewed use a similar but informal evaluation planning process that involves collaboration between each agency’s evaluation office and program offices, external groups, and senior officials. Typically, the evaluation office leads an iterative two-step process to develop ideas into full-scale proposals by obtaining feedback from senior officials and considering available resources. The process varies across agencies in the breadth of the studies and programs considered, the use of ranked competitions, and the amount of oversight by senior officials. Figure 1 depicts the general process and the agencies’ significant differences. In most of the agencies we reviewed, evaluation planning generally starts and ends in the same fiscal year. 
General procedures for submitting and clearing annual spending plans structure the evaluation planning process at several of these agencies because the approved evaluations may involve external expenditures. The agencies must approve their evaluation plans by the start of the next fiscal year, or when appropriated funds become available, so that they can issue requests for proposals from the contractors that conduct the evaluations. Planning evaluations can include reviews by policy officials, such as deputy and assistant secretaries, and budget officials, such as an agency’s chief financial officer. For example, ACF’s evaluation staff develop evaluation proposals in the fall and early winter and send them to the agency’s assistant secretary in the late winter, so that approval decisions can be made in time to meet the deadlines for awarding contracts in that fiscal year. Although most of the agencies finish their planning by the start of the next fiscal year, the process can start as late as July at CDC’s DNPAO or as early as the fall of the current fiscal year at ACF. Planning begins at each agency with internal coordination to define the goals and procedures for developing evaluation proposals. At ACF, this process begins informally, with evaluation and program staff meeting to discuss their priorities for the coming year. The other agencies we reviewed (including Education beginning in 2010) issue memorandums describing the planning process to the staff members involved. They may describe the staff members who will lead proposal-development teams, the substantive focus of the year’s process, the evaluation plan’s connection to spending plans, and the role of senior officials. They may also give a schedule of key deadlines. CDC’s DASH distributes a broader call for project nominations to agency staff members and researchers, state and local education agencies, and other program partners.
In recent years, the call has specified the type of interventions the division seeks to evaluate, stated deadlines for submitting nominations, and solicited information from nominators about particular interventions. CDC’s DNPAO issues a call for proposals that addresses the process and broad criteria for project selections that can involve many people and proposals. The calls at each agency are informal planning documents, however, as no agency we reviewed has an official policy that specifies the process for developing and selecting evaluations. Having developed informal processes over time, senior officials and evaluation and program office staff have a common understanding of how they will develop, review, and select evaluations. After the agencies identify their planning goals and steps, the evaluation and program staff begin to develop evaluation proposals. At some agencies, the program staff may develop the initial proposals independently of the evaluation staff, in response to the same call for proposals. The program staff may later consult with the evaluation staff to improve the proposals before they are reviewed further. This process is common in the CDC divisions we reviewed, where the evaluation staff are located inside program offices dedicated to particular health issues, so both program and evaluation staff may individually or jointly submit proposals for consideration. At other agencies, the evaluation staff meet with the program staff specifically to discuss ideas for evaluation and then develop initial proposals from the input they receive. The evaluation staff at one of these agencies said they incorporate the priorities, questions, and concerns the program staff conveyed from their day-to-day experience into evaluation planning and that collaboration helps ensure later support for the approved evaluations.
Alternatively, HUD’s evaluation unit includes program staff on the teams that develop proposals in specific policy areas, such as fair housing and homelessness. The program offices also contribute to the initial proposals by providing comments to senior officials. At all the agencies, the evaluation staff use their expertise in designing social research and assessing the reliability of data, among other skills, to ensure the quality and feasibility of proposals. In addition to consulting internal program staff, most of the agencies we examined consult external groups to obtain ideas for evaluation proposals. Evaluation staff members cited a number of reasons for consulting external groups in developing proposals: the ability to identify unanswered research questions, coordinate evaluations across federal agencies, uncover promising programs or practices to evaluate, and inform strategic goals and priorities. Some evaluation staff reported consulting external groups as they develop program priorities and strategic plans, which they cited as criteria for planning evaluations. Over the past 2 years, HUD’s Office of Policy Development and Research (PD&R) has participated in a philanthropic foundation-funded partnership with research organizations that conducted several research projects to help inform the Department’s development of an evidence-based housing and urban policy agenda. Other staff said that they consult with state and local program partners, such as state welfare offices, to identify potentially useful projects. External groups have formal roles in developing proposals at two agencies. CDC’s DASH directly consults with external researchers, state and local education officials, and school health professionals for nominations of promising interventions to evaluate. In planning for fiscal year 2011, HUD asked the public to submit ideas for evaluation on its “HUD User” Web site. At most of the agencies, however, external groups do not explicitly develop evaluation proposals.
For example, ACF staff said they informally consult with researchers about possible evaluation topics, partly in regular research conferences, but they do not ask their advisory panels or individual researchers to review specific evaluation proposals. In recent years, PD&R also contacted the office of HUD’s Inspector General for evaluation ideas that build on that office’s work. Generally, the agencies review and approve evaluation proposals in two steps. First, evaluation or program staff members develop ideas or brief concept papers for initial feedback from senior officials. The feedback can involve a series of proposal development meetings, as at Education and ACF, where senior officials give staff members strategic direction to develop and refine their proposals. Alternatively, senior officials may review all draft proposals that the evaluation and program staff have developed independently, as at HUD and CDC’s DNPAO and DHAP. Initial feedback helps prevent staff from investing large amounts of time in proposals that would have a small chance of being approved. The feedback expands proposal development beyond the evaluation and program offices and helps ensure that the proposals support the agency’s broader strategic goals. Second, once the initial proposals are sufficiently well developed, senior officials review and select from a group of revised, full-scale proposals. These may contain detailed information about cost and design for later use in the contracting process. Evaluation officials at ACF and HUD select from the pool of revised proposals those they wish to present to agency leaders, such as the secretary or assistant secretary, for final approval. Branch leaders at CDC’s DNPAO and DHAP choose a group of proposals to compete for division resources against proposals from other branches within their divisions. 
Review panels rank-order all proposals (discussed below), and then division leaders decide, based on the rankings and available resources, which proposals the division will fund. In fiscal year 2011, Education staff plan to present to senior officials the entire proposed evaluation portfolio, identifying how evaluation studies address key questions and agency priorities. Some of the agencies we reviewed focus specifically on planning program evaluations, while others use the same annual process to plan a variety of analytic studies. The central evaluation offices at ACF, Education, and HUD perform a continuum of research and evaluation studies that may include collecting national survey data, conducting policy analyses, and describing local program activities, among other activities. These agencies use the same process to make funding decisions across these various analysis proposals, which allows them to weigh the pros and cons of evaluation against other information needs when sufficient funds are available. Consequently, program evaluations may compete with other types of studies that require specific funding each year. Although the evaluation branch of CDC’s DNPAO provides a narrower range of services, the division uses a similar, unified process to decide how to develop proposals for all evaluations and research activities. In contrast, DASH plans its different types of studies separately. It uses one annual process to develop evaluation proposals for promising practices and interventions often implemented by grantees. It uses a different process to develop “evaluation research” proposals, which evaluation staff defined as national-level evaluations or long-term studies of program impact, often involving randomized controlled trials. By considering these types of studies separately, DASH does not require longer-term evaluations to compete with shorter-term studies for the same funds.
Programs compete against one another for evaluation resources at some but not most of the agencies we reviewed. The scope of evaluation planning at one group of agencies is limited to the same programs or policy areas each year. These agencies have designed their planning processes to select not which programs to evaluate but which evaluation questions to answer in a program area. For example, the ACF evaluation staff indicated that they identify important questions for each program with evaluation funding and then allocate funds to the most important questions for each program. Consequently, the agency typically conducts evaluations in programs with evaluation funds (such as TANF) every year but has not evaluated programs that do not have evaluation funding (such as the Community Services and Social Services Block Grants). HUD and, to a certain extent, two CDC divisions seek to identify which programs are important to evaluate as well as what questions are important to answer about those programs. Agency staff have the flexibility to direct resources to the programs that they believe most need evaluation. HUD evaluation staff said that this broad scope allows them to build a portfolio of evaluations across policy areas and serve the agency’s most pressing evaluation needs. Senior officials consider the value of proposals from all policy areas combined but make some effort to achieve balance across policy areas. Only CDC’s divisions hold formal, ranked competitions to review and select proposals. In each division, staff members or external panels rate and rank all evaluations the evaluation and program offices propose, once they have been fully developed. Senior leaders at CDC’s DHAP and DNPAO select proposals by rank and available funds. In addition, senior leaders at DNPAO rank and select proposals within each of its three policy areas: nutrition, physical activity, and obesity.
Senior leaders at DASH consider information collected from site visits and interviews for a small group of semi-finalists that were selected based on the input of the external panel. CDC staff reported that CDC often uses ranked competitions to award grants and contracts across the agency. At the other agencies we reviewed, evaluation staff said that proposals are reviewed and selected in a series of discussions between the agency’s policy officials, such as assistant or deputy secretaries, and the senior leaders of its evaluation and program offices. None of these agencies reported formally ranking their proposed evaluations; instead, they qualitatively consider the relative strengths and weaknesses of each evaluation. Proposal review and selection in the CDC divisions involves less department-level input than at ACF, Education, and HUD. CDC’s evaluation staff reported that division leadership makes the final decision on evaluation projects and does not need the approval of the Office of the CDC Director or HHS officials, although a key criterion in project ranking and selection is often alignment with larger CDC, HHS, and national priorities. CDC is studying evaluation planning across the agency, however, and may increase central oversight in the future. The assistant secretary at ACF, not departmental officials, makes final approval decisions, but the evaluation staff reported consulting informally with staff of the Office of the Assistant Secretary for Planning and Evaluation (ASPE) when proposals are developed. The processes at Education and HUD are more centralized than at CDC or ACF. At these agencies, senior department officials—such as the secretary, deputy secretary, or assistant secretary—make the final selection decisions, after the evaluation and program staff have developed and reviewed the initial proposals.
HUD staff indicated that, beginning in fiscal year 2010, the agency funded many of its evaluations from a departmental Transformation Initiative fund, whose board must approve proposed evaluations and other projects. Board members include the assistant secretaries of PD&R and Community Planning and Development, the Chief Information Officer, and the Director of Strategic Planning and Management. One agency does not strictly plan evaluations for the next fiscal year. CDC’s DASH staff plan evaluations that are funded during the current fiscal year rather than evaluations that will be funded in the next fiscal year. Local education agencies typically partner with the agency to conduct evaluations during the school year, when parents, students, and teachers are available to researchers. As a result, the agency cannot wait until funds are scheduled for appropriation in October or later, because its data collection plans and site selections must be final before the school year begins, typically in late August or early September. Education adjusted its evaluation planning guidance in 2010 to explicitly plan evaluations to be conducted in fiscal year 2011 as well as to inform its budget request for fiscal year 2012. The agency links its evaluation planning to the budget, partly to ensure that funding or authority will be available and that evaluations are aligned with program goals and objectives, congressional mandates, and the agency’s strategic priorities. In addition, Education has proposed, for reauthorization of the Elementary and Secondary Education Act, to submit a biennial evaluation plan to the Congress and establish an independent advisory panel to advise the department on these evaluations.
These plans align well with the American Evaluation Association’s (AEA) recommendation, made in a recent policy paper on federal government evaluation, that federal agencies prepare annual and multiyear evaluation plans to guide program decision-making and consult with the Congress and nonfederal stakeholders in defining program and policy objectives, critical operations, and definitions of success. We found these mature agencies remarkably similar in the four general criteria they used for selecting evaluations to conduct during the next fiscal year: strategic priorities, program concerns, critical unanswered questions, and the feasibility of conducting a valid evaluation study. Another important consideration, in situations in which several program offices draw on the same funding source, was establishing balance across program areas. Most agencies indicated no hierarchy among these criteria. Rather, they considered them simultaneously to create the best possible portfolio of studies. The first criterion, strategic priorities, represents major program or policy areas identified as a focus of concern and reflected in a new initiative or increased level of effort. Strategic priorities might be expressed by a department or the White House as strategic goals or secretarial initiatives or by the Congress in program authorizations or study mandates. Under the Government Performance and Results Act (GPRA), agencies are expected to revise their strategic plans at least every 3 years, providing an opportunity to align their plans with current conditions, goals, and concerns. The plans can chart expectations for both program and evaluation planning. CDC’s DHAP officials described waiting for the White House release of the National HIV/AIDS Strategy in July to finalize their strategic plan and objectives and to prioritize evaluation activities that would address them.
In addition to national priorities, division priorities are informed by their own research, surveillance, and program evaluation, identifying the subpopulations and geographic areas most affected by the disease. HUD’s PD&R conducts the national Housing Discrimination Study every 10 years, which provides a unique benchmark and input to the department’s long-term planning. Strategic priorities may also arise from congressional mandates. Education officials noted that the Congress generally mandates evaluations when it reauthorizes large formula grant programs, such as the national assessments of title I of the Elementary and Secondary Education Act, and that it has also mandated the evaluation of major new programs that might have great public interest or promise. They said that they schedule evaluations so that they will produce useful information for reauthorization, usually every 6 to 8 years. The second criterion, program-level concerns, represents more narrowly focused issues concerning an identified problem or opportunity. Evaluation staff reported that valuable ideas for evaluations often reflected the questions and concerns that arise in daily program operations. ACF noted that Head Start teachers’ reports of disruptive children who prevented other children from learning led to a large-scale evaluation of several potentially effective practices to enhance children’s socio-emotional development and teachers’ classroom management practices. Accountability concerns that OMB, GAO, and Inspector General reports raise may lead to follow-up studies to assess the effectiveness of corrective actions. For example, PD&R staff stated that after a GAO report criticized the Section 202 demonstration grant program for not building housing projects in a timely fashion, the Congress introduced a competition for grants to speed up development. A follow-up evaluation will assess whether timeliness has improved. 
Other evaluation questions may address crosscutting issues that influence program success, such as a provider’s ability to leverage resources or promote partnerships with other stakeholders. CDC’s DNPAO places a priority on proposals that develop collaborations with external partners and among operational units within the division. The third criterion, critical unanswered questions, reflects the state of knowledge and evidence supporting government policies and programs. For example, agency staff talk with advisory groups, academics, and other researchers in their field to identify useful research and evaluation questions that could advance understanding of the field and improve practice. CDC staff indicated that filling knowledge gaps is a particularly important criterion for project selection, because some public health areas lack an extensive research base or evidence on effective practices. A senior official of ACF’s Office of Planning, Research and Evaluation (OPRE) described OPRE staff as looking for compelling, essential questions of enduring interest. ACF programs attempt to solve persistent social problems, for example, by testing diverse strategies to promote employment retention and advancement for low-wage workers and current or former TANF recipients. Because formal impact evaluations of these efforts may take 5 or 6 years to complete, OPRE staff look for questions that are persistent and studies that are likely to advance knowledge. Gathering information on emerging, promising practices was a consideration, particularly where evidence of effective practice has not yet been demonstrated. This was particularly important to the CDC divisions, DNPAO and DASH, where the public health research base was limited and effectiveness evaluations of promising practices were needed to expand the pool of evidence-based interventions to offer grantees.
The fourth criterion, evaluation feasibility, encompasses a range of pragmatic issues, such as whether data are available and at what cost, whether the proposed evaluation can answer questions persuasively, and whether grantees have the interest and capacity to participate in evaluation. Naturally, agencies weigh their evaluation priorities in the context of their fiscal and budget constraints. Evaluators described determining whether the most important questions could be answered and the resources that would be needed to answer them. When “hard” data are lacking, some evaluators find that in-house exploratory work and investment in data gathering may be needed before scaling up to a contracted evaluation. Like the other evaluation units, PD&R compares the feasibility and cost of a study to alternative proposals. The evaluation staff noted that cost cannot be the sole criterion, however, because studies of some programs, such as the large block grants, are more resource intensive than the approaches available for studying other programs, such as housing voucher programs. When working with community-based organizations, agencies find that grantee evaluation capacity can be very important. To ensure that the selected grantee is implementing the program faithfully, is ready for evaluation, and can collect valid and reliable data, CDC’s DASH staff conduct site visits to assess candidate projects on such issues as appropriate logical links between program components and expected outcomes, staff turnover, political conflicts, fiscal sustainability, and staff interest in and capacity to conduct the evaluation. ACF evaluators were pleased to note that many state and local TANF officials participate in OPRE’s annual welfare research conference, show interest in conducting evaluations, and have been willing to randomly assign recipients to new programs for research purposes.
Although the agencies generally followed a similar process in developing their evaluation agendas, some agency characteristics or conditions appeared to influence their choices and may be important for other agencies to consider as they develop their own evaluation agendas. The four conditions we identified as strongly influencing the evaluation planning process were (1) the location of evaluation funding and authority, whether with the program or the evaluation office; (2) the scope of the evaluation unit’s responsibility within the agency; (3) how much the evaluators rely on program partners; and (4) the extent and form of congressional oversight of agency program evaluations. Where evaluation funds come from largely controls the selection of programs to evaluate. In ACF, CDC, and Education, authority and funds for evaluation are primarily attached to the programs, not to the evaluation office. This has implications both for how evaluation offices conduct their planning and for whether a program is likely to be evaluated. Where evaluation funds and authority are tied to the program, and funds are available, evaluation staff generally choose not which programs to evaluate but which research questions to answer. Thus, evaluators in ACF and Education work separately with each program office that has evaluation funds to develop proposals. In contrast, at HUD, when the evaluation office has uncommitted evaluation funds, selecting proposals can involve deciding between programs. Therefore, besides considering policy priorities and feasibility issues, HUD senior managers try to balance available evaluation funding across programs or policy areas after proposals are developed within program areas. This involves soliciting input from program office leaders on the preliminary agenda and discussing competing needs in the final selection process.
CDC’s DNPAO, with its three distinct program areas—nutrition, physical activity, and obesity—made similar efforts to obtain a balanced portfolio by forming teams to rank-order proposals separately and having senior division leaders consider program balance in selecting proposals. One consequence of tying evaluation funds and authority to programs is that programs that do not have their own evaluation authority may not get evaluated at all. Staff at ACF and Education told us that because their evaluation offices did not have significant discretionary funds for external contracts, they had not conducted any evaluations of several programs, even though they believed that some of those programs should be evaluated. Not discussing the pros and cons of evaluating a particular program can lead to some programs being inappropriately excluded from evaluation. HUD officials noted that it was important to attempt to balance evaluation spending across program areas because, otherwise, some programs might be avoided as too difficult or expensive to evaluate. Education officials said they plan to address this issue by developing a departmental portfolio of strong evaluation proposals based on policy and management needs, without regard to existing evaluation authority, and then requesting funds for them. In future legislative proposals, they plan to ask the Congress for more flexibility in evaluation funds to better meet the field’s needs. The agency evaluation offices we examined were located at different organizational levels, affecting the scope of their program and analytic responsibilities as well as the range of issues they considered. At CDC, the evaluation offices are generally within program offices, so they do not need a separate step for consulting with program staff to identify their priorities. Instead, the divisions solicit evaluation proposals from staff throughout the division.
In the other agencies we examined, evaluation offices are either parallel to program offices (ACF) or at the departmental level (Education and HUD), which leads them to consult more formally with the program offices during both development and selection. Location and scope of responsibilities also influenced evaluation approval. CDC’s divisions, with the narrowest scope among the units we examined, exerted considerable control over their evaluation funds and did not require approval of their evaluation agendas by either the director or the department. DASH did, however, report coordinating evaluation planning with other agencies and HHS offices on specific cross-cutting programs, and DHAP reported delaying its selection of evaluation proposals this past spring to coordinate with the new National HIV/AIDS Strategy. In contrast, at Education and HUD, where evaluation offices have departmental scope, final approval decisions are made at the department level. Between these extremes, OPRE’s selections are approved by the ACF assistant secretary and do not require departmental approval. Being responsible for a wide range of analytic activities also influenced an evaluation office’s choice of evaluations. Evaluators in the more centralized offices in ACF and HUD described having the flexibility to address the most interesting questions feasible. For example, if it is too early to obtain hard data on an issue, PD&R staff said that they might turn to in-house exploratory research on that issue. ACF staff noted that they often conducted small descriptive studies of the operations of state TANF programs because of the decentralized nature of that program. This flexibility can mean, however, that they must also consider the range of the program office’s information needs when developing their portfolio of studies. PD&R staff noted that they try to ensure that some studies are conducted in-house to meet program staff interest in obtaining quick turnaround on results.
DNPAO aims to achieve a balanced portfolio of studies by ranking cross-cutting proposals within categories of purpose, such as monitoring or program evaluation. Education officials propose to create a comprehensive departmental evaluation plan that identifies the department’s priorities for evaluation and other knowledge-building activities, is aligned with its strategic plan, and will support resource allocation. Several of the evaluation offices we examined also provide technical assistance in performance monitoring and evaluation. While this may help strengthen relationships with program staff and understanding of program issues, the responsibility can also reduce the resources available for evaluation studies. All three CDC divisions require evaluations or performance monitoring from their grantees; therefore, providing grantees with technical assistance is a major activity for these evaluation offices. In DHAP and DNPAO, staff workload, including providing technical assistance, was cited among the resource constraints in developing evaluation proposals. ACF staff noted that if program offices prioritize their available funds on technical assistance and monitoring, there may not be enough to conduct an evaluation. In the cases we examined, placing the evaluation office inside the program office (as in the CDC divisions) was associated with more formal proposal ranking. We considered several possible explanations for this: (1) staff adopted the competitive approach they generally take in assessing proposals for project and research funds, (2) a large volume of proposals required a more systematic process for making comparisons, or (3) the visibility of the selections created pressure for a transparent rationale. The first point may be true but does not explain why the other agencies are also deliberative in assessing and comparing evaluation proposals but do not rate them numerically.
The two other explanations appear to be more relevant and may be related to the fact that evaluations are being selected within the program office and thus cover a relatively narrow range of options. CDC staff said that they did not need to formally rate and rank the three or four proposals they submitted for OMB’s Evaluation Initiative but might have done so had the number of proposals to consider been greater. DASH and DHAP issue broad calls each year for nominations of promising practices to evaluate and, thus, gather a large number of proposals to assess. Staff in DASH, which also solicits project nominations from the public, indicated that over time their process has become more formal, accountable, and transparent so that selections appear to the public to be more systematic and less idiosyncratic. Although information is limited, we believe that systematically rating and ranking proposals may be a useful procedure to consider case by case. The influence of nonfederal program partners on developing and selecting evaluation proposals was observed in most of the agencies we examined, although it did not vary much among them. The importance of program stakeholders to planning should be expected because these particular agencies generally rely on external partners—state and local agencies and community-based organizations—to implement their programs. However, the extent of coordination with external parties on evaluation planning seen here may not be necessary in agencies that are not so reliant on third parties. ACF’s evaluation staff pointed out that they cannot evaluate practices that a state or local agency is not willing to use. Efforts to engage the academic and policy communities in discussing ideas for future research at ACF and Education also reflect these agencies’ decades-long history of sponsoring research and evaluation.
CDC’s DHAP and DNPAO also employ advisory groups, including CDC staff and external experts, to advise them on strategic planning and topics that will help meet the needs of their grantees, but only DASH involved external experts directly in assessing evaluation proposals. DASH evaluators assemble panels to assess nominations of sites implementing a promising practice; depending on the topic and stage of the process, these panels might include external experts and experts from across CDC or other agencies serving children and families. Program partners’ evaluation capacity is especially important to evaluation planning in the CDC divisions we examined because their evaluations tend to focus on the effectiveness of innovative programs or practices. Each year, DASH publicly solicits nominations of promising projects of a designated type of intervention and uses review panels and site visits to rank dozens of sites on an intervention’s strength and promise, as well as the feasibility of conducting an evaluation. Staff said that it was important to ensure that the grantee organization was stable and able to cooperate fully with an evaluation and noted that evaluation is sometimes difficult for grantees. Congress influences agencies’ evaluation choices in a variety of ways. Congress provides agencies with the authority and the funds with which to conduct evaluations and may mandate specific studies. The evaluation offices in ACF, Education, and HUD all noted their responsibility to conduct congressionally mandated evaluation studies in describing the criteria they used in evaluation planning. The CDC offices indicated that they did not have specific study mandates but, rather, authority to conduct studies with wide discretion over the particular evaluation topics or questions. Of course, in addition to legislatively mandating studies, the Congress expresses interest in evaluation topics through other avenues, such as oversight and appropriations hearings. 
DHAP evaluators noted that they receive a lot of public scrutiny and input from the Congress and the public health community that works its way into project selection through the division’s setting of priorities. Agency evaluators described a continuum of evaluation mandates, from a general request for a study or report to a list of specific questions to address. Education officials noted that the Congress generally mandates evaluations of the largest programs when they are reauthorized or of new programs or initiatives for which public interest or promise might be great. Some evaluators noted that sometimes the Congress and agency leaders want answers to policy questions that research cannot provide. They indicated that, where legislative language was vague or confusing, they did their best to interpret it and create a feasible evaluation. In a previous study of why agency studies did not meet congressional information needs, we suggested that expanding communication between congressional staff and agency program and evaluation staff would help ensure that information needs are understood and that requests and reports are suitably framed and adapted as needs evolve. Evaluators told us that whether and how much funding was attached to an evaluation mandate also influenced how the mandate was implemented. They said that when appropriate funding was available, they always conducted congressionally mandated evaluations. However, sometimes the amounts available do not reflect the size of the evaluation needs in a program. This was particularly a problem for small programs, where a fixed set-aside of program funds for evaluation might yield funds inadequate for rigorous evaluation. Evaluators described a related challenge when evaluation authorities are attached to single programs, which precludes pooling funds across programs. Such limitations on using evaluation funds could lead to missed opportunities to address cross-cutting issues. 
In cases where no additional funding was provided for legislatively mandated studies, agencies had to decide how and whether to fund them. Some agency evaluators told us that they generally conducted what they saw as “unfunded mandates” but would interpret the question and select an approach to match the funds they had available. This might mean that without funds to collect new data, a required report might be limited to simply analyzing or reporting existing data. HUD receives considerable congressional oversight of its research and evaluation agenda, reflecting congressional concern about its past research priorities and greater decision-making flexibility under the new Transformation Initiative. In 2008, a congressionally requested National Research Council review of HUD’s research and evaluation lauded most of PD&R’s work as “high quality, relevant, timely, and useful” but noted that its resources had declined over the previous decade, its capacity to perform effectively was deteriorating, and its research agenda was developed with limited input from outside the department. NRC recommended that, among other things, HUD actively engage external stakeholders in framing its research agenda. In response, PD&R solicited public suggestions online for research topics for fiscal year 2011 and beyond. In addition, HUD proposed a Transformation Initiative of organizational and program management improvement projects in 2009 and asked that up to 1 percent of its program budget be set aside in a proposed Transformation Initiative fund to support research and evaluation, information technology, and other projects. The House and Senate Appropriations Committees approved the fund (at somewhat less than the requested amount) with a proviso that HUD submit a plan for appropriations committee approval, detailing the projects and activities the funds would be used for. 
An effective evaluation agenda aims to provide credible, timely answers to important policy and program management questions. In setting such agendas, agencies may want to simultaneously consider the four general criteria we identified: strategic priorities, program concerns, critical unanswered questions, and the feasibility of conducting a valid study. In the short run, because agency evaluation resources are limited, ensuring balance in evaluations across programs may not be as important as addressing strategic priorities. However, developing a multiyear evaluation plan could help ensure that all an agency’s programs are examined over time. To produce an effective evaluation agenda, agencies may want to follow the general model we identified at the agencies we reviewed: professional evaluators lead an iterative process of identifying important policy and program management questions, vetting initial ideas with the evaluations’ intended users, and scrutinizing the proposed portfolio of studies for relevance and feasibility within available resources. Since professional evaluators have the knowledge and experience to identify researchable questions and the strengths and limitations of available data sources, they are well suited to leading a consultative process to ensure that decision makers’ information needs can be met. However, agencies may need to adapt the general model’s steps to match their own organizational and financial circumstances. For example, they may not need to formally rank proposals unless they have many more high-quality proposals than they can fund. They may find advantages to placing evaluation offices within program offices (for focusing on program needs, for example) and at higher levels (for addressing broader policy questions). Where analytic demands are significant and resources permit, they may find a combined approach best-suited to their needs. 
To ensure that their evaluations provide the information necessary for effective management and legislative oversight, evaluation offices are likely to need to seek out in advance the interests and concerns of key program and congressional stakeholders, especially program partners, and discuss preliminary proposals with the intended users. The Departments of Health and Human Services and Housing and Urban Development provided comments on a draft of this report, which are reprinted in appendixes I and II. HHS appreciated the attention that this report gives to the importance of strong prioritization processes for selecting evaluation studies and allocating resources to complete them, and was pleased that the practices of ACF and CDC in this area are models for emulation by others. It also noted that, given the diversity of purposes for evaluations, the optimal location and organization of evaluation activities will vary with the circumstances. This is consistent with our concluding observation that agencies may need to adapt the general model—including where to locate evaluation offices—to match their own organizational and financial circumstances. HUD agreed with our description of how it plans evaluations but was concerned that the report did not place enough emphasis on the appropriations process as a major influence on what projects it funds and when it can begin the contracting process. We have added text to note that the Congress influences the agencies’ evaluation processes through providing them with both the authority and funds with which to conduct evaluations, as well as mandating specific studies. Education, HHS, and HUD also provided technical comments that were incorporated where appropriate throughout the text. We are sending copies of this report to the Secretaries of Education, Health and Human Services, and Housing and Urban Development; the Director of the Office of Management and Budget; and appropriate congressional committees. 
The report is also available at no charge on GAO’s Web site at www.gao.gov. If you have questions about this report, please contact me at (202) 512-2700 or kingsburyn@gao.gov. Contacts for our Office of Congressional Relations and Office of Public Affairs are on the last page. Key contributors are listed in appendix III. Nancy Kingsbury, Ph.D., Managing Director, Applied Research and Methods. In addition to the person named above, Stephanie Shipman, Assistant Director; Valerie Caracelli; and Jeff Tessin made significant contributions to this report.

American Evaluation Association. An Evaluation Roadmap for a More Effective Government. September 2010. www.eval.org/EPTF.asp

Leviton, Laura C., Laura Kettel Khan, and Nicola Dawkins, eds. “The Systematic Screening and Assessment Method: Finding Innovations Worth Evaluating.” New Directions for Evaluation no. 125, 2010.

National Research Council, Committee to Evaluate the Research Plan of the Department of Housing and Urban Development, Center for Economic, Governance, and International Studies, Division of Behavioral and Social Sciences and Education. Rebuilding the Research Capacity at HUD. Washington, D.C.: National Academies Press, 2008.

Office of Management and Budget. Analytical Perspectives—Budget of the United States Government, Fiscal Year 2011. Washington, D.C.: Executive Office of the President, Feb. 1, 2010.

Office of Management and Budget. Evaluating Programs for Efficacy and Cost-Efficiency. M-10-32, Memorandum for the Heads of Executive Departments and Agencies. Washington, D.C.: Executive Office of the President, July 29, 2010. www.whitehouse.gov/sites/default/files/omb/memoranda/2010/m10-32.pdf

Office of Management and Budget. Increased Emphasis on Program Evaluations. M-10-01, Memorandum for the Heads of Executive Departments and Agencies. Washington, D.C.: Executive Office of the President, Oct. 7, 2009. www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m10-01.pdf

U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. A Blueprint for Reform: The Reauthorization of the Elementary and Secondary Education Act. Washington, D.C.: March 2010.

U.S. Department of Health and Human Services. Evaluation: Performance Improvement 2009. Washington, D.C.: 2010.

Employment and Training Administration: Increased Authority and Accountability Could Improve Research Program. GAO-10-243. Washington, D.C.: January 29, 2010.

Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions. GAO-10-30. Washington, D.C.: November 23, 2009.

Continuing Resolutions: Uncertainty Limited Management Options and Increased Workload in Selected Agencies. GAO-09-879. Washington, D.C.: September 24, 2009.

Results-Oriented Management: Strengthening Key Practices at FEMA and Interior Could Promote Greater Use of Performance Information. GAO-09-676. Washington, D.C.: August 17, 2009.

Performance Budgeting: PART Focuses Attention on Program Performance, but More Can Be Done to Engage Congress. GAO-06-28. Washington, D.C.: October 28, 2005.

Program Evaluation: OMB’s PART Reviews Increased Agencies’ Attention to Improving Evidence of Program Results. GAO-06-67. Washington, D.C.: October 28, 2005.

Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005.

Performance Measurement and Evaluation: Definitions and Relationships. GAO-05-739SP. Washington, D.C.: May 2005.

Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: May 2, 2003.

Program Evaluation: Improving the Flow of Information to the Congress. GAO/PEMD-95-1. Washington, D.C.: January 30, 1995.
Amid efforts to improve performance and constrain spending, federal agencies are being asked to expand the use of rigorous program evaluation in decision making. In addition to performance data, in-depth program evaluation studies are often needed for assessing program impact or designing improvements. Agencies can also use their evaluation resources to provide information needed for effective management and legislative oversight. GAO was asked to study federal agencies with mature evaluation capacity to examine (1) the criteria, policies, and procedures they use to determine which programs to review, and (2) the influences on their choices. GAO reviewed agency materials and interviewed officials on evaluation planning in four agencies in three departments with extensive evaluation experience: Education, Health and Human Services (HHS), and Housing and Urban Development (HUD). HHS and HUD agreed with the description of how they plan evaluations. HHS noted that the optimal location of evaluation units will vary with the circumstances and purpose of evaluations. HUD felt the draft report did not emphasize enough the influence of the appropriations process. GAO has added text to note its influence on evaluation planning. Education provided technical comments. Although no agency GAO reviewed had a formal policy describing evaluation planning, all followed a generally similar model for developing and selecting evaluation proposals. Agencies usually planned an evaluation agenda over several months in the context of preparing spending plans for the coming fiscal year. Evaluation staff typically began by consulting with a variety of stakeholders to identify policy priorities and program concerns. Then, with program office staff, they identified the key questions and concerns and developed initial proposals. 
Generally, the agencies reviewed and selected proposals in two steps: first developing ideas to obtain initial feedback from senior officials, then developing full-scale evaluation proposals for review and approval. The four general criteria these mature agencies use to plan evaluations were remarkably similar: (1) strategic priorities representing major program or policy area concerns or new initiatives, (2) program-level problems or opportunities, (3) critical unanswered questions or evidence gaps, and (4) the feasibility of conducting a valid study. The agencies' procedures differed on some points. External parties' participation in evaluation planning may reflect these agencies' common reliance on nonfederal program partners. Only the offices GAO reviewed in HHS' Centers for Disease Control and Prevention held formal competitions to rank-order proposals before submitting them for approval; in the other agencies, senior officials assessed proposals in a series of discussions. When evaluation authority and funds are tied to a program, evaluators generally choose not which programs to evaluate but which research questions to answer; sometimes this results in a program's never being evaluated. Evaluation units at higher organizational levels conducted a wider range of analytic activities, consulted more formally with program offices, and had less control over approvals. The Congress influences an agency's program evaluation choices through legislating evaluation authority, mandating studies, making appropriations, and conducting oversight. 
GAO concludes that (1) all four criteria appear key to setting an effective evaluation agenda that provides credible, timely answers to important questions; (2) most agencies could probably apply the general model in which professional evaluators iteratively identify key questions in consultation with stakeholders and then scrutinize and vet research proposals; (3) agencies could adapt the model and decide where to locate evaluation units to meet their own organizational and financial circumstances and authorities; and (4) agencies' reaching out to key program and congressional stakeholders in advance of developing proposals could help ensure that their evaluations will be used effectively in management and legislative oversight. GAO makes no recommendations.
Commerce has responsibilities in the areas of trade, economic development, technology, entrepreneurship and business development, environmental stewardship, and statistical research and analysis. In addition, Commerce provides management and monitoring of the nation’s resources and assets to support both environmental and economic health. Other essential operations conducted by Commerce include the constitutionally mandated decennial census, economic research leading to calculation of the gross domestic product and trade balances, stimulation of small businesses, and promotion of international trade. The Secretary of Commerce leads the department’s efforts, with fiscal year 2013 total budgetary resources of approximately $22.7 billion and over 40,000 employees worldwide. The Commerce OIG was established by the IG Act with its IG appointed by the President and confirmed by the Senate. The IG is under the general supervision of, and reports to, the Secretary of Commerce. The current IG was sworn into office on December 26, 2007, and leads a team of auditors, evaluators, investigators, attorneys, and support staff responsible for providing oversight of the department’s array of business, scientific, economic, and environmental programs and operations. The Commerce OIG helps to ensure that the department’s employees and others managing federal resources comply with applicable laws and regulations, and works to prevent fraud, waste, and abuse in program operations. The OIG monitors and tracks the use of taxpayer dollars in federally funded programs to keep Commerce officials and the Congress fully and currently informed about issues, problems, and deficiencies related to the administration of programs and operations and the need for corrective actions. Figure 1 illustrates the primary offices that make up the Commerce OIG. 
The Commerce OIG is primarily governed by the IG and the Office of Counsel, with the IG providing overall leadership and policy direction and the Office of Counsel providing legal guidance in support of the OIG’s mission. The Office of Audit and Evaluation (OAE) conducts audits and evaluations of Commerce programs and operations to help determine whether they are cost-efficient and cost-effective. The Office of Investigations (OI) helps to prevent and detect fraud, waste, and abuse by contractors and grantees, and addresses reported improprieties involving department employees. OI maintains a hotline to collect reports of allegations related to fraud, waste, and abuse in departmental programs and operations. From fiscal years 2011 through 2013, OAE issued 90 reports and OI closed 258 investigations. In addition, the OIG testified 13 times before congressional committees. The Commerce OIG had total budgetary resources of approximately $41 million in fiscal year 2013. These resources represent a significant decline when compared to fiscal year 2011 total budgetary resources of approximately $47 million, a decrease of about 13 percent. When compared to all other cabinet-level OIGs, the Commerce OIG had the lowest level of total budgetary resources for each of the 3 fiscal years. Also, while five of these other OIGs had a decline in total budgetary resources equal to or greater than that of the Commerce OIG, the Commerce OIG’s decline was greater than the 6 percent average decline for all other cabinet-level OIGs during the 3-year period. (See table 1.) The Commerce OIG had 137 authorized full-time equivalent staff (FTE) in fiscal year 2013. When compared with other cabinet-level OIGs, the Commerce OIG had the fewest authorized FTEs. In addition, the Commerce OIG had the largest decrease of authorized FTEs when compared to the other cabinet-level OIGs. 
Specifically, from fiscal years 2011 through 2013, the Commerce OIG’s authorized FTEs decreased from 171 to 137, or approximately 20 percent, while the other OIGs’ decreases ranged from no decrease to an approximate 17 percent decrease, for an average decrease of approximately 5 percent, as shown in table 2. The Commerce OIG reported approximately $543 million in monetary accomplishments from audits, evaluations, and investigations during fiscal years 2011 through 2013. As shown in table 3, when the Commerce OIG’s reported monetary accomplishments over the 3-year period are compared to its budgetary resources, the resulting average return on investment for each budget dollar was approximately $4.18 over the 3- year period. The results of the Commerce OIG’s audits and evaluations contributed approximately $401.8 million of the total monetary accomplishments reported by the OIG through potential savings as defined by the IG Act. Approximately $392.5 million, or about 98 percent, of this amount was attributable to four audit reports issued during the 3-year period. The OIG’s investigations provided about $141.5 million of reported total monetary accomplishments during the 3-year period as a result of fines and restitutions related to successful prosecutions and other court proceedings. The Commerce OIG and all other cabinet-level OIGs showed increases in their average return on each budget dollar. While the Commerce OIG’s return was within the range of the lowest and highest returns for all other OIGs for each fiscal year, its average return on each budget dollar of $4.18 over the 3-year period was less than the average of $22.64 for the other OIGs. In addition, the Commerce OIG’s return for each fiscal year was less than the average each year for the other cabinet-level OIGs, as shown in table 4. 
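The return-on-investment figure above is simple arithmetic: total reported monetary accomplishments divided by total budgetary resources over the 3-year period. The sketch below illustrates the calculation; note that the report does not state the Commerce OIG's fiscal year 2012 budget, so the $42 million used here is an assumption chosen only to be consistent with the reported $4.18 average.

```python
# Sketch of the return-on-investment arithmetic described above.
# Components of the $543 million in reported monetary accomplishments:
audit_savings = 401.8             # $ millions, potential savings from audits/evaluations
investigation_recoveries = 141.5  # $ millions, fines and restitutions from investigations
accomplishments = audit_savings + investigation_recoveries  # about 543.3

# Total budgetary resources, FY2011-FY2013; the FY2012 figure is an
# assumption (the report gives only FY2011 and FY2013).
budget_by_year = {"FY2011": 47.0, "FY2012": 42.0, "FY2013": 41.0}
total_budget = sum(budget_by_year.values())  # 130.0

roi_per_dollar = accomplishments / total_budget
print(f"${roi_per_dollar:.2f} returned per budget dollar")  # $4.18
```

The same division explains why the OIG's return trails the $22.64 average for other cabinet-level OIGs despite similar accomplishments growth: the denominator (budget) fell, but the numerator fell relative to peers by far more.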
IGs have a unique role within their agencies to identify areas for improved economy, efficiency, and effectiveness through various oversight activities, including independent audits. During fiscal years 2011 through 2013, the Commerce OIG’s OAE provided audit coverage of Commerce’s largest bureaus and offices and completed mandated audits of the department’s (1) financial statements, as required by the Chief Financial Officers Act of 1990 (CFO Act), and (2) information security, as required by the Federal Information Security Management Act of 2002 (FISMA). In addition, the OIG provided audits of funds associated with the American Recovery and Reinvestment Act of 2009 (Recovery Act). The OIG also provided audits of the department’s management challenges that are defined by the OIG and reported in Commerce’s annual performance and accountability reports. However, the OIG’s oversight lacked audit coverage of the economy, efficiency, and effectiveness of programs specific to the department’s bureaus and offices with relatively small budgets, and not all applicable high-risk areas identified by GAO’s high-risk reports were subject to audit. During the 3-year period we reviewed, OAE issued 90 reports, including mandatory audits, performance audits, evaluations, and memorandums, intended to provide oversight of Commerce’s 13 major bureaus and offices identified in the OIG’s semiannual reports. Eighty-four of the 90 reports issued, or 93 percent, were directed to four bureaus and offices, and to department-wide issues managed by the Office of the Secretary. The four bureaus and offices had total budgetary resources in fiscal year 2013 of approximately $19.9 billion, or almost 88 percent of the department’s total budgetary resources of approximately $22.7 billion. (See app. II.) 
For the remaining eight bureaus and offices, OAE provided mandatory FISMA audits, Recovery Act audits, and one evaluation, but no performance audit coverage of the economy, efficiency, and effectiveness of their specific programs during the 3-year period we reviewed and for extended periods prior to our review. (See table 5.) OAE provided audits of department-wide activities mandated by specific statutes. Specifically, the CFO Act requires entities such as Commerce to have annual financial statements that are audited. These audits provide (1) an opinion on whether the financial information is fairly presented and in accordance with generally accepted accounting principles, (2) a report on internal control over financial reporting, and (3) a report on compliance with provisions of applicable laws and regulations, contracts, and grant agreements. FISMA audits report on the controls over information security throughout the department. Also, OAE audits funds the department receives through the Recovery Act because of the higher risk for waste, fraud, and abuse related to these funds. Although these audits address certain internal controls and broad department-wide operations, without audits of the programs specific to each bureau and office, the OIG does not fully address their economy, efficiency, and effectiveness. The Commerce OIG has addressed the effectiveness of relatively large programs through performance audits and evaluations. For the 3-year period we reviewed, the Commerce OIG completed a total of 18 performance audits and 17 evaluations that addressed the programs specific to the department’s four bureaus and offices that have relatively large budgets, and department-wide activities managed by the Office of the Secretary. However, the eight bureaus and offices with relatively smaller budgets received no performance audits to address the economy, efficiency, and effectiveness of their specific programs during the 3-year period. 
While these eight bureaus and offices are small relative to the department’s largest bureaus and offices, they represented approximately $2.4 billion, or about 11 percent, of the department’s total budgetary resources for fiscal year 2013 and are included in the OIG’s listing of the major programs that are to receive oversight. In addition, they make important contributions to maintaining a strong national economy by providing businesses and other organizations with reliable information, helping the United States compete in international trade, and assisting U.S. businesses. For example, the National Technical Information Service, with fiscal year 2013 total budgetary resources of about $85 million, has program responsibilities for collecting and preserving scientific, technical, engineering, and other business-related information from federal and international sources and for disseminating this information to the U.S. business and industrial research communities. Also, the Minority Business Development Agency, with total budgetary resources in fiscal year 2013 of about $28 million, is responsible for promoting the growth of minority business enterprises and their participation in the global economy through a range of activities. These activities include funding a network of centers that provides a variety of business assistance services. The eight bureaus and offices for which the Commerce OIG did not provide performance audit coverage of their specific programs’ economy, efficiency, and effectiveness also did not receive these audits during the years prior to the 3-year period we reviewed, as shown in table 6. To illustrate, the programs specific to the Bureau of Economic Analysis had received no performance audit coverage over an 8-year period from fiscal years 2005 through 2013. The remaining seven small bureaus and offices had gaps ranging from 3 to 13 years in the performance audit coverage of their specific programs. 
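Multiyear gaps like these suggest one way a risk-based planning process could guard against leaving small programs perpetually unexamined: fold the time since a bureau's last performance audit into the risk ranking, so long-unaudited programs rotate back into coverage. The sketch below is purely illustrative, not the OIG's actual method; the tier scheme mirrors the OIG's high/medium/low categories, but the 5-year threshold and the bureau data are assumptions.

```python
# Hypothetical sketch: a risk ranking that forces rotation of
# long-unaudited bureaus. Tiers follow an assumed high/medium/low
# scheme (3/2/1); the threshold and bureau entries are illustrative.
def planning_score(base_risk, years_since_audit, max_gap=5):
    """Return the planning tier, elevating any bureau whose
    performance-audit gap meets or exceeds max_gap to high (3)."""
    return 3 if years_since_audit >= max_gap else base_risk

bureaus = [  # (name, base risk tier, years since last performance audit)
    ("Large bureau A", 3, 1),
    ("Bureau of Economic Analysis", 1, 8),  # 8-year gap noted in the report
    ("Small bureau B", 2, 3),
]
# Rank bureaus for the coming cycle, highest tier first.
ranked = sorted(bureaus, key=lambda b: planning_score(b[1], b[2]), reverse=True)
for name, base, gap in ranked:
    print(f"{name}: tier {planning_score(base, gap)}")
```

Under these assumptions, the Bureau of Economic Analysis rises to the top tier despite its low base risk, because its 8-year gap exceeds the rotation threshold; a low-risk bureau audited recently stays in its base tier.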
The Commerce OIG’s plans for providing audit coverage of the department’s bureaus and offices are based on an assessment of risk. The OIG develops risk ratings for each bureau or office based on an assessment of a series of questions that determine the presence of risk. These questions address budget size, types of programs and operations, compliance with laws and regulations, fraud risks, and management challenges. For purposes of the risk assessment, the OIG ranks the bureaus as (1) high, or those with higher relative risk characteristics; (2) medium, or those with midrange relative risk characteristics; and (3) low, or those with lower relative risk characteristics. Using this categorization, the OIG then develops audits and evaluations for oversight. This approach has provided audits and evaluations of programs in the four largest bureaus and offices and audits required by specific mandates. However, the length of time between performance audit coverage of the department’s programs is not part of the OIG’s risk assessment when considering oversight of the economy, efficiency, and effectiveness of the smaller bureaus and offices. Also, smaller programs may not rank high when other risk factors are considered, and there is no rotation policy to ensure that these programs are reviewed periodically. In the years prior to the 3-year period we reviewed, the Commerce OIG provided periodic evaluations of the department’s bureaus and offices with relatively small total budgetary resources to help assess the performance of their programs. However, while similar in purpose to performance audits, evaluations are not specifically required by the IG Act and are not a substitute for audit coverage. In addition, there are fundamental differences in the standards of each that affect the breadth and depth of the reviews. 
Specifically, the IG Act requires that OIGs, including the Commerce OIG, follow the Comptroller General’s Government Auditing Standards when performing audits, and to the extent permitted by law and not inconsistent with Government Auditing Standards, professional standards developed by CIGIE for evaluations. A fundamental difference between the standards for audits and those for evaluations is the level of detail and requirements for sufficient, appropriate evidence to support findings and conclusions. Performance audits completed under Government Auditing Standards by design require more depth in their levels of evidence and documentation supporting the findings than is required for evaluations performed under CIGIE standards, which can lead to differences in the reliability and accuracy of the results. In addition, auditing standards require external quality reviews of audit practices, or peer reviews, on a 3-year cycle by reviewers independent of the OIG. However, neither the CIGIE standards for evaluations nor the Commerce OIG’s policies and procedures require such external reviews for evaluations. Without the information provided by periodic audits of the eight smaller bureaus and offices’ use of approximately $2.4 billion in fiscal year 2013 total budgetary resources, any weaknesses in their economy, efficiency, and effectiveness may not be fully known, increasing the risk of fraud, waste, abuse, or mismanagement. Since 1990, GAO has reported on government operations designated as high risk because of their greater vulnerabilities to fraud, waste, abuse, and mismanagement. 
Although Commerce was not identified as having a specific high-risk area, GAO’s February 2011 report identified the following five high-risk areas that were applicable government-wide: (1) protecting information systems and cyber critical infrastructures, (2) strategic human capital management, (3) managing federal real property, (4) management of interagency contracting, and (5) ensuring the effective protection of technologies critical to U.S. national security interests. GAO’s February 2013 high-risk update report continued to identify all of these areas as high risk except for management of interagency contracting, which was removed from the list because of significant progress made by the federal government in reducing the interagency contracting risk that led to GAO’s high-risk designation. In addition, GAO added two new high-risk areas that are relevant to Commerce programs: (1) mitigating gaps in weather satellite data and (2) limiting the federal government’s fiscal exposure by better managing climate change. However, not all of the applicable high-risk areas were considered in the Commerce OIG’s risk-based planning process and not all of these areas were included in the audit coverage provided during the 3-year period we reviewed, as shown in table 7. During fiscal years 2011 through 2013, the Commerce OIG conducted mandated FISMA audits that addressed the security of information systems, and a performance audit of Commerce’s human capital management. For the high-risk areas added in 2013, the OIG completed audits that addressed satellite data and started an audit in the area of managing climate change during fiscal year 2014. However, the OIG did not conduct audits in the high-risk areas of managing federal real property and ensuring the effective protection of technologies critical to U.S. national security interests during the 3-year period. After this period, the OIG completed an audit related to protecting technologies critical to U.S. 
national security in fiscal year 2014. Managing federal real property was not included in the Commerce OIG’s audits even though it remains a government-wide high-risk area because of long-standing problems such as overreliance on leasing, excess and underutilized property, and issues in protecting federal facilities. In addition, Commerce’s fiscal year 2013 financial statement audit identified the area as having a significant deficiency in internal control and concluded that while the National Oceanic and Atmospheric Administration has recognized the difficulties in accounting for its property and has implemented corrective actions, more improvements and additional oversight and training are needed to strengthen its controls over its significant property investment. Another high-risk area involves U.S. government programs to identify and protect technologies critical to U.S. interests, including export control systems for defense articles and services and dual-use items, the Foreign Military Sales program, anti-tamper policies, and reviews of transactions that could result in control of a U.S. business by a foreign person. These programs are administered by multiple federal agencies, including Commerce, with various interests. GAO reported that each program has had its own set of challenges that are largely attributed to poor coordination within complex interagency processes, inefficiencies in program operations, and a lack of systemic evaluations of program effectiveness. The Commerce OIG did not provide audit coverage of this high-risk area during the 3-year period we reviewed, but as a result of a congressional request, the OIG completed an audit of the Bureau of Industry and Security’s licensing of exports related to these programs in September 2014. The Commerce OIG’s hotline policies and procedures were generally consistent with recommended hotline practices provided through CIGIE. 
However, our testing of a random sample of OIG hotline cases from fiscal years 2011, 2012, and 2013 identified numerous instances in which OIG staff did not follow the OIG’s formal hotline policies and procedures that we selected for review. Specifically, the OIG did not always follow its own hotline procedures with respect to (1) proper handling of complaints, (2) assignment of disposition codes, and (3) time frames for processing complaints. The OIG’s OI is responsible for all investigations, referrals, and other actions resulting from complaints alleging criminal, civil, or administrative misconduct related to Commerce’s programs, funds, and operations. The OIG’s Office of Compliance and Ethics is part of OI and is headed by a director whose hotline staff is responsible for the intake, processing, review, and preliminary research of all hotline complaints. In addition, the OIG has a complaint disposition board of OIG management officials and key special agents and investigators that is headed by the IG and determines the disposition of complaints. The recommended hotline practices provided through CIGIE contain 10 areas that cover a broad scope of activities. Specifically, the guidance recommends (1) adequate hotline training for staff, (2) consistent handling of complaints, (3) use of technology for processing complaints, (4) analysis of complaints to identify any trends or systemic weaknesses, (5) regular meetings of hotline staff with OIG management, (6) use of a website to receive complaints, (7) educational information about whistle-blower protections on the OIG’s website, (8) outreach efforts to raise the profile of the hotline with federal employees, (9) participation in forums of hotline operators with other OIGs, and (10) management of the expectations of complainants. 
Consistent with the recommended hotline practices, the Commerce OIG developed formal hotline policies and procedures that address the intake, processing, and disposition of hotline complaints; has taken actions to provide investigative training for hotline staff; and makes use of technology to both process complaints and analyze them for emerging trends. The hotline staff shares this information during weekly meetings with OIG senior management to determine the disposition of pending complaints. The OIG has also established a website to receive complaints and provide educational information regarding the hotline, and it conducts outreach efforts throughout Commerce on the use of the hotline and how to report complaints. In addition, the OIG participates in several working groups and activities sponsored by CIGIE. Regarding management of the expectations of complainants, during our review the OIG developed a revised hotline policy in August 2013 that included a requirement to notify the complainant of the disposition of the complaint. However, the policy did not specifically require the OIG to inform the complainant whether status updates could be expected. Upon our further discussion with hotline staff, the OIG drafted a set of letters to communicate to complainants whether status updates should be expected, in order to better manage complainants’ expectations. We reviewed a random sample of 58 hotline cases drawn from each of 3 fiscal years and found that hotline staff often did not adhere to the OIG’s formal hotline policies and procedures that we selected for review. Specifically, we estimate that (1) about 76 percent of hotline complaints in fiscal year 2011 had at least one exception to the OIG’s hotline policies and procedures, (2) about 84 percent of hotline complaints in fiscal year 2012 had at least one exception, and (3) about 62 percent of hotline complaints in fiscal year 2013 had at least one exception. 
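The arithmetic behind projecting sample exception rates of this kind to a full population can be sketched as follows. The report gives the fiscal year 2013 rate (about 62 percent) and the population of 1,294 complaints but not the underlying sample counts, so the 36-of-58 figure below is an assumed illustration consistent with those numbers, not GAO's workpapers:

```python
import math

# Assumed illustration: 36 of the 58 sampled FY2013 complaints had at
# least one exception (~62 percent); 1,294 complaints were received.
n, x, N = 58, 36, 1294

p_hat = x / n              # sample exception rate
projected = p_hat * N      # point estimate projected to the population

# Simple normal-approximation 95% confidence interval for the proportion.
# GAO's actual estimates would reflect its sample design, e.g. a
# finite-population correction, which this sketch omits.
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"sample rate {p_hat:.0%}; about {projected:.0f} of {N} complaints "
      f"(95% CI {low:.0%} to {high:.0%})")
```

With these assumed counts the point estimate lands near the report's figure of about 800 complaints with at least one exception.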
When projected over all 1,294 complaints received by the OIG in fiscal year 2013, we estimate that about 802 of those hotline complaint cases had one or more inconsistencies with the OIG’s hotline policies and procedures that we tested. The exceptions we identified included the following: (1) unique, sequential case numbers were not assigned to each complaint; (2) disposition codes were not assigned in accordance with written policies; and (3) time frames for processing complaints were not followed by staff receiving complaints. Table 9 shows the estimated percentage of exceptions we found in each of these areas for fiscal years 2011 through 2013. In testing our 2011 sample, we found exceptions to the Commerce OIG’s policy to assign unique, sequential case numbers to hotline complaints. This number is the official means of identifying a case and its corresponding complaint documentation. During 2011, the OIG received a large number of complaints related to sweepstakes scams. Rather than log and dispose of each complaint individually, as called for in the policy, management instructed staff to compile complaints and enter them into the electronic case management system under one overall case number. Based upon our testing, we estimate that 9 percent of the cases from fiscal years 2011 through 2013 were not assigned a unique, sequential case number, contrary to policy. While the nature of the allegations may have been the same, they were from different complainants with allegations directed at different persons and required unique case numbers. Compiling similar but unique complaints into a single case number inhibits the OIG from efficiently and effectively tracking complaints received by the office and increases the risk that complaints will be handled inconsistently. 
Based on our sample selected from the 3 years of hotline complaints, we estimate that about 37 percent of disposition codes were not assigned to complaints in accordance with the definitions in the OIG’s hotline policies and procedures in effect at the time. Most, but not all, of the complaints that we identified as having the wrong code assigned were ultimately treated in accordance with the disposition procedures called for in the hotline policies. However, when complaints are not assigned the correct code, the risk that a hotline complaint may not be appropriately referred increases, and the potential exists that wrongdoing associated with complaints may not be fully addressed. To illustrate, one complaint was received from a private citizen who reported the suspected falsification of federal student loan documents. The OIG’s hotline review resulted in a decision to assign a disposition code of “U” because the matter was unrelated to Commerce and thus no further action was taken. However, even though the complaint was unrelated to Commerce, enough information was provided for the complaint to have been assigned a code of “O” and forwarded to an external entity or federal agency that manages student loans for possible action, consistent with the OIG’s hotline policies. OIG staff stated that during fiscal year 2012 the Assistant IG for Investigations changed the disposition codes and consolidated several codes. This change was abandoned in fiscal year 2013 and was never reflected in written policies because it was determined to have negatively affected the OIG’s ability to report hotline complaint information. This change of practice caused many of the exceptions we found related to improper disposition codes in our testing of the 2012 sample. The hotline staff agreed that the disposition codes we identified as exceptions were not assigned in accordance with written policy. 
The OIG’s hotline policies and procedures in effect at the time of our testing required hotline staff to process complaints within specified time frames to ensure that the complaints were handled timely. After receiving the complaints, hotline staff were to enter them into the electronic case management system within 24 hours and to assign a disposition code to each case within 5 business days of receipt. Cases to be disposed of as a referral required staff to refer each case within 5 business days after determination of its disposition. Over the 3-year period, (1) an estimated 7 percent of cases were not entered within 24 hours, (2) an estimated 42 percent of the complaints were not assigned a disposition code within the required time frames, and (3) an estimated 4 percent of the cases assigned for referral to another Commerce component were not sent within 5 days. The longest period for assigning a disposition code in our sample was 23 days. If complaints are not handled in a timely manner, the hotline operation’s effectiveness and efficiency may be diminished. CIGIE’s Quality Standards for Federal Offices of Inspector General requires OIGs to establish and implement internal control activities to ensure that their directives are carried out. However, while the Commerce OIG has hotline policies and procedures, it has not developed certain internal control activities to help provide reasonable assurance that its own policies and procedures are consistently implemented. Specifically, the OIG does not provide ongoing monitoring of its hotline policies and procedures in the course of normal operations, which is to be performed continually and ingrained in the OIG’s operations, to help reasonably assure that those policies and procedures are being followed. The CIGIE standards for monitoring internal controls include self-assessment evaluations, periodic reviews of control design, and direct testing of internal controls. 
Because OIGs evaluate how well agency programs and operations are functioning, they have a special responsibility to reasonably assure that their own operations are as effective as possible. Hotline complainants, including whistle-blowers, play an important role in safeguarding the federal government against fraud, waste, and abuse, and their willingness to come forward can contribute to improvements in government operations. Without effective internal control activities for its hotline operations, the Commerce OIG is vulnerable to increased risk that complaints of fraud, waste, abuse, or mismanagement received through its hotline may not be handled effectively. OPM’s FEVS asks questions of federal employees related to specific topic areas, the responses to which indicate how well the federal government is managing its human resources and give senior managers employee perspectives on agency management. The surveys for fiscal years 2012 through 2014 included the same 71 questions each year that provide information on employees’ views regarding (1) their work experience, (2) their work unit, (3) their agency, (4) their supervisor, (5) leadership, and (6) overall satisfaction. In the 2012 FEVS results, the Commerce OIG employee responses to 43 of 71 survey questions had a higher percentage of negative responses than the government-wide average. (See app. III.) The responses for the remaining 28 questions were either at the government-wide average or had lower percentages of negative responses compared to the government-wide average. The Commerce OIG’s 2012 FEVS results were used by the Partnership for Public Service to rank the Commerce OIG at 291 out of 292 subcomponent offices in the federal government. Through the National Defense Authorization Act for Fiscal Year 2004, the Congress established a requirement for agencies to annually survey their employees to assess employee satisfaction and employees’ views of leadership and management practices. 
The FEVS measures employees’ perceptions of whether, and to what extent, conditions that characterize successful organizations are present in their agencies. The survey (1) provides general indicators of how well the federal government is running its human resources management systems, (2) serves as a tool for OPM to assess individual agencies and their progress on strategic management of human capital, and (3) gives senior managers critical information to assist them in determining what they can do to improve their agency’s effectiveness. The FEVS is administered to full-time and part-time, permanent, non-seasonal employees of departments and large agencies, small/independent agencies, and subcomponents of larger agencies, such as the Commerce OIG, that choose to participate in the survey. As part of the annual requirement to survey their employees, federal agencies are to assess (1) leadership and management practices that contribute to agency performance and (2) employee satisfaction with leadership policies and practices, work environment, rewards and recognition for professional accomplishment and personal contributions to achieving organizational mission, opportunity for professional development and growth, and opportunity to contribute to achieving organizational mission. OPM regulations for implementing the mandatory employee survey prescribe survey questions that must appear on each agency’s employee survey. The FEVS is completed annually and includes the same questions each year, but OPM may include agency-specific questions at the request of the agency. The Commerce OIG made efforts to address the issues identified in the 2012 FEVS results, and its FEVS results in subsequent years improved, as did its ranking by the Partnership for Public Service for fiscal years 2013 and 2014. 
The OIG’s 2013 FEVS results had 9 out of 71 survey questions with a higher percentage of negative responses than the government-wide average, and the 2014 results had 11 such questions. However, even with these improved FEVS responses, the Partnership for Public Service ranked the Commerce OIG at 281 out of 300 subcomponent agencies in fiscal year 2013 and at 262 out of 315 subcomponent agencies in fiscal year 2014, indicating that OIG employee responses remain significantly more negative than the government-wide average for specific questions. OPM provides guidance to agencies on how to address the FEVS responses that indicate the need for corrective actions. The guidance includes steps to address areas where weaknesses are indicated and develop an action plan with measures of success. These steps direct the agency to (1) identify the issues, (2) set goals, (3) identify staff resources to assist, (4) develop the action plan, (5) implement the action plan, and (6) monitor and evaluate the implementation. The Commerce OIG followed much of the OPM guidance to address the 2012 FEVS results but lacked an action plan with measures of success. Specifically, the OIG evaluated the survey questions to identify issues; set goals that addressed leadership, morale, training, and communication; identified staff to participate in the efforts to address the FEVS responses; and developed 86 recommendations to meet the identified goals. In addition, the OIG took actions to address and monitor the completion of most of the recommendations developed by its staff. However, the OIG did not develop an action plan with measurements to determine whether the employees’ concerns were successfully addressed. GAO’s prior work has identified attributes relevant to these action plans, including a measurable target, which allows determination of whether performance measures have quantifiable, numerical targets or other measurable values, where appropriate. 
Action plans with attributes of successful metrics could allow the OIG to better determine whether its goals were successfully met. Some of the FEVS questions with a higher percentage of negative responses than the government-wide average have particular significance when they occur in an OIG. OIGs have a responsibility to report on current performance and accountability and to foster sound program management to help ensure effective government operations. When an OIG’s own operations are not as effective as possible, its recommendations to other offices may not be viewed as credible. To illustrate, the responses to the following FEVS questions had a higher percentage of negative responses than the government-wide average for all 3 years we reviewed:
“My talents are used well in the workplace.”
“I know what to do to be rated higher.”
“I recommend my organization as a good place to work.”
“Considering everything, how satisfied are you with your organization?”
Also, in the 2014 FEVS results the OIG employees had a higher percentage of negative responses to the following question: “Prohibited personnel practices are not tolerated.” Allegations of prohibited personnel practices by the OIG’s senior leadership during fiscal year 2011 were investigated by the Office of Special Counsel (OSC). In its September 2013 report, the OSC concluded there was strong evidence of retaliation against OIG employee whistle-blowers by two senior OIG leaders. These investigative results are even more significant given the Commerce OIG’s role in receiving and processing allegations of prohibited personnel practices and other whistle-blower complaints for the department. The Commerce IG stated that in his opinion, the timing of the investigation and related issues had a significant effect on the FEVS results. Although there has been improvement, a significant percentage of negative responses to FEVS questions addressing important aspects of good management remains. 
Without an action plan with measures of success, as recommended by OPM, the OIG may continue to have responses to FEVS questions that are significantly more negative than the government-wide average. These results could indicate lost productivity and a diminished ability of the office to effectively carry out its statutory duties. The OIG’s audit oversight resulted in an emphasis on the four Commerce bureaus and programs that have the largest budgets and the Office of the Secretary for department-wide audits, with no performance audit coverage of the economy, efficiency, and effectiveness of the programs specific to the eight smaller bureaus and offices. Because of the significance of the programs in the smaller bureaus and offices, the lack of performance audit coverage specific to these programs places them at an increased risk of not addressing potential issues of economy, efficiency, and effectiveness in handling the taxpayer dollars they receive. The OIG’s planning for audit coverage is based on risk and considers a number of factors to determine the extent of its audits and evaluations. However, the OIG does not consider the length of time that has passed since the programs specific to the bureaus and offices last received performance audits addressing their economy, efficiency, and effectiveness. In addition, the OIG’s audit plans did not fully consider all GAO high-risk areas applicable to Commerce, which resulted in areas that were not subject to audit. The OIG lacks certain effective internal control activities for its hotline operation, which made the hotline vulnerable to inconsistent and ineffective management. Because OIGs evaluate how well agency programs and operations are functioning, they have a special responsibility to provide reasonable assurance that their own operations are as effective as possible. 
However, without effective monitoring of internal controls for its hotline operations, the OIG has limited assurance that its hotline policies and procedures will be consistently followed and that the complaints received by the hotline will be handled effectively. The OIG’s FEVS results improved during fiscal years 2013 and 2014, but the remaining FEVS questions with a higher percentage of negative responses than the government-wide average indicate that employee concerns continue and indicate the potential for limited OIG effectiveness. The OIG has taken steps, mostly consistent with OPM guidance, to address the FEVS responses of its employees in past years. However, the OIG did not develop an action plan with measures of success to reasonably assure that employee concerns are effectively addressed. To provide increased performance audit coverage of Commerce’s bureaus and offices, the Commerce IG should augment the OIG’s risk-based audit planning process to consider (1) a rotation of performance audit coverage among the smaller bureaus and offices to help ensure that the economy, efficiency, and effectiveness of their programs are periodically reviewed and (2) all applicable high-risk areas identified by GAO. To provide reasonable assurance that written hotline policies and procedures are consistently followed and complaints are handled effectively, the Commerce IG should enhance the existing internal control activities for the OIG’s hotline operations through monitoring, including self-assessment evaluations conducted by the hotline unit, periodic reviews of control design, and direct testing of internal controls. To reasonably assure that the concerns of OIG employees expressed in their FEVS responses are effectively addressed, the Commerce IG should develop an action plan that includes measures of success. We provided a draft of this report to the Commerce IG for comment. 
In his written comments, reproduced in appendix IV, the Commerce IG concurred with our recommendations and discussed actions the OIG has planned or started to address them. In addition, the OIG provided technical comments, which we incorporated as appropriate. The IG stated that the OIG’s fiscal year 2016 risk assessment is under way and the OIG is considering performance audit coverage issues with respect to Commerce’s smaller bureaus and operating units. The IG also stated that the OIG is developing a quality assurance program that will address self-assessments, reviews of the controls of the hotline operations, and periodic tests of those controls. In addition, the IG stated that after the results of the next FEVS are released, the OIG will develop an updated action plan that includes metrics designed to measure success. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; Inspector General of the Department of Commerce; Secretary of Commerce; Deputy Director for Management, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. 
To provide information on the budget, staff resources, and accomplishments of the Department of Commerce (Commerce) Office of Inspector General (OIG) and the other cabinet-level OIGs, we obtained access to the Office of Management and Budget database of budget information for fiscal years 2011 through 2013 for comparison among these OIGs. We also obtained information on the monetary results from audits, evaluations, and investigations as reported by the Commerce OIG and the other cabinet-level OIGs in semiannual reports to the Congress for the 3-year period. We calculated a rate of return on the Commerce OIG’s budgets by dividing the reported monetary accomplishments by the OIG’s total budgetary resources for each of the 3 years. We also calculated the average overall rate of return of the other cabinet-level OIGs and compared their average results to those of the Commerce OIG for the same 3-year period. We reviewed the OIG’s effectiveness in providing audit oversight coverage of Commerce from fiscal years 2011 through 2013 by identifying the audits, evaluations, and other activities reported by the OIG in its semiannual reports for the 3-year period. We compared the OIG’s audit reports to the (1) management challenges at Commerce identified by the Commerce OIG and reported in the department’s annual performance and accountability reports, (2) bureaus and offices identified by the OIG’s semiannual reports that administer Commerce’s major programs and activities, and (3) Commerce-related high-risk areas identified by GAO and reported in updates to our high-risk series. We also reviewed the Commerce OIG’s planning documents to determine how the OIG selects the areas to review in providing OIG oversight of Commerce’s programs and activities. Specifically, we reviewed audit and evaluation plans for fiscal years 2011 through 2013, OIG risk assessments of Commerce’s programs and offices, and the OIG’s strategic (action) plans. 
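The rate-of-return computation described above is a simple ratio of reported monetary results to budgetary resources. A minimal sketch with hypothetical dollar amounts (the actual figures come from OMB budget data and the OIGs' semiannual reports and are not reproduced here):

```python
# Hypothetical inputs for illustration only -- not the report's actual amounts.
monetary_accomplishments = 90_000_000   # assumed: dollars reported from audits,
                                        # evaluations, and investigations
total_budgetary_resources = 30_000_000  # assumed: OIG budget for the same year

# Rate of return: reported monetary results per dollar of OIG budget.
rate_of_return = monetary_accomplishments / total_budgetary_resources
print(f"${rate_of_return:.2f} in reported monetary results per budget dollar")
```

The same ratio, averaged over the other cabinet-level OIGs, supports the cross-OIG comparison the report describes.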
We relied on information reported in the OIG’s semiannual reports to determine the oversight coverage and accomplishments. To verify the reliability of these data, we obtained all 90 of the Office of Audit and Evaluation’s reports for the 3-year period and compared the reported results with the information in the semiannual reports. We also identified the statement of quality used by the OIG in each report, which attests to the accuracy of the information. In addition, we obtained an understanding of the OIG’s internal process to help ensure that the semiannual report information is accurate. Based on our procedures, we concluded that these data were reliable enough for the purposes of this report. To review the effectiveness of the Commerce OIG in addressing complaints and allegations of wrongdoing received by its hotline during fiscal years 2011 through 2013, we compared the OIG’s policies and procedures with recommended hotline practices provided through the Council of Inspectors General on Integrity and Efficiency. We also interviewed hotline staff to determine the OIG’s internal control process for ensuring that its hotline policies and procedures were followed. In addition, we selected a random sample of closed hotline complaints received by the OIG for each fiscal year of the 3-year period and determined the extent to which the OIG followed its own policies and procedures. We selected for review relevant OIG hotline policies and procedures that we determined could have a bearing on the effectiveness of the hotline operations and could be verified. For example, we tested controls involved in the receipt, disposition, and processing of complaints, such as assigning unique numeric identifiers, sufficiency of supporting case file documentation, and timeliness of the disposition of complaints. To test the implementation of procedures that differed in each year, we selected samples from fiscal years 2011, 2012, and 2013. 
Specifically, we selected a simple random sample of 58 individual hotline complaints in each of the 3 fiscal years. Each sample was designed so that if we found 0 exceptions in our review, the estimated exception rate for that fiscal year would be below a tolerable error of 5 percent at the 95 percent level of confidence. For example, if we did not identify any exceptions in our testing for a particular procedure, in a given year, then our conclusion would be that the OIG’s implementation was effective for that procedure. Because we followed a probability procedure based on random selections, each sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. To review the Commerce OIG’s effectiveness in addressing issues identified by the OIG and based on its employees’ responses to the Office of Personnel Management’s (OPM) annual Federal Employee Viewpoint Survey (FEVS), we obtained the Commerce OIG survey responses to the 71 questions from OPM for fiscal years 2012, 2013, and 2014 that addressed the employees’ work experience, work unit, agency, supervisors, leadership, and satisfaction. Additional survey questions that addressed work/life programs, including telework, and demographics were not included in our review because we focused on the effectiveness of the OIG’s operations related to audits, evaluations, and other oversight efforts. The FEVS results include response percentages for each survey question. 
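The sample-size logic can be sketched with the exact binomial (Clopper-Pearson) upper confidence bound for zero observed exceptions. This is a sketch of the general technique, not GAO's workpapers: the plain binomial bound lands at roughly 5.03 percent for n = 58, and the report's design presumably also reflects the finite population of complaints, which tightens the bound slightly below 5 percent.

```python
def upper_bound_zero(n, alpha=0.05):
    """One-sided exact 95% upper confidence bound on the exception rate
    when a simple random sample of n items shows zero exceptions: the
    largest p satisfying (1 - p)**n >= alpha, i.e. p = 1 - alpha**(1/n)."""
    return 1 - alpha ** (1 / n)

# How the bound shrinks as the sample grows past the mid-50s.
for n in (50, 57, 58, 59, 60):
    print(f"n={n}: 95% upper bound on exception rate = {upper_bound_zero(n):.4f}")
```

In other words, with a sample in the high 50s, finding zero exceptions lets the auditor conclude the population exception rate is below about 5 percent with 95 percent confidence.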
The positive, neutral, and negative response percentages are defined as follows across the three primary response scales used in the survey:

- Positive: Strongly Agree and Agree / Very Satisfied and Satisfied / Very Good and Good
- Neutral: Neither Agree nor Disagree / Neither Satisfied nor Dissatisfied / Fair
- Negative: Disagree and Strongly Disagree / Dissatisfied and Very Dissatisfied / Poor and Very Poor

We estimated the 95 percent confidence intervals around the percentage of negative responses to each question in the Commerce OIG FEVS data. We then analyzed the data by comparing the Commerce OIG survey response estimates with the government-wide average percentage of negative responses for all 71 survey questions in each year. If the government-wide average percentage of negative responses fell outside the confidence interval around the Commerce OIG estimate, we counted the difference as statistically significant. We summarized the areas of reported weakness as indicated by OIG employees' estimated percentages of negative responses that were statistically higher than the government-wide averages. To review the actions taken by the OIG to address the fiscal year 2012 FEVS results, we obtained internal communications and interviewed OIG management officials. In addition, we obtained and analyzed the products of OIG working groups established to identify and correct the weaknesses indicated by the fiscal year 2012 survey and compared the OIG's actions with OPM guidance.

Total budgetary resources (dollars in millions) and missions of the Department of Commerce's bureaus and offices:

- $9,157 — National Telecommunications and Information Administration. Develops domestic and international telecommunications and information policy for the executive branch; ensures the efficient and effective management and use of the federal radio spectrum; and performs telecommunications research, engineering, and planning.
- $5,926 — National Oceanic and Atmospheric Administration. Promotes environmental stewardship through (1) the National Ocean Service; (2) the National Marine Fisheries Service; (3) the Office of Oceanic and Atmospheric Research; (4) the National Weather Service; (5) the National Environmental Satellite, Data and Information Service; and (6) Program Support.

- $2,931 — United States Patent and Trademark Office. Examines patent and trademark applications, guides domestic and international intellectual property policy, and encourages innovation and the scientific and technical advancement of U.S. industry through the preservation, classification, and dissemination of patent and trademark information.

- $1,905 — Census Bureau. Provides benchmark and current measures of the U.S. population, economy, and governments. The bureau's cyclical programs include the Economic Census and the Census of Governments, conducted every 5 years, and the Decennial Census program, conducted every 10 years.

- $1,112 — National Institute of Standards and Technology. Promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that improve economic security and quality of life. Develops and disseminates measurement techniques, reference data, test methods, standards, and other technologies and services needed by U.S. industry to compete in the 21st century.

- $495 — International Trade Administration. Creates prosperity by promoting trade and investment, ensuring fair trade and compliance with trade laws and agreements, and strengthening the competitiveness of U.S. industry.

- $456 — Economic Development Administration. Supports the department's goal to maximize U.S. competitiveness and enable economic growth for U.S. industries, workers, and consumers, with the objective of fostering domestic economic development as well as export opportunities.

- $107 — Bureau of Industry and Security. Advances U.S. national security, foreign policy, and economic objectives by ensuring an effective export control and treaty compliance system and by promoting continued U.S. strategic technology leadership.
- $101 — Economics and Statistics Administration. Comprising the Census Bureau and the Bureau of Economic Analysis, provides decision makers with timely, relevant, and accurate economic and statistical information related to the U.S. economy and population.

- $85 — National Technical Information Service. Collects and preserves scientific, technical, engineering, and other business-related information from federal and international sources and disseminates it to the U.S. business and industrial research communities.

- $28 — Minority Business Development Agency. Promotes the ability of minority business enterprises to grow and to participate in the global economy through a range of activities, including funding a network of centers that provide these businesses a variety of business assistance services.

- $320 — Departmental Management. Develops and implements policy affecting U.S. and international activities as well as internal goals and operations of the department.

- $41 — Office of Inspector General. Provides independent oversight of the department's bureaus, offices, and programs.

[Appendix table omitted: FEVS questions with significantly negative OIG employee responses, indicated by "X," including "Satisfied with your organization," with totals. Legend: FEVS = Federal Employee Viewpoint Survey; OIG = Office of Inspector General.]

In addition to the contact named above, Jackson Hufnagle (Assistant Director), Carl Barden, Lisa Boren, Nadine Ferreira, Jacquelyn Hamilton, Althea Sprosta, Taya Tasse, and Clarence Whitt made key contributions to this report.
Congressional committees and Commerce leaders rely on the OIG to provide oversight of the agency's wide range of responsibilities. GAO was asked to review the effectiveness of the Commerce OIG's oversight. GAO's objectives were to provide information on the Commerce OIG's budgets, staffing, and accomplishments, and to review the OIG's effectiveness in providing audit coverage, addressing hotline complaints, and addressing employee concerns identified in OPM's annual FEVS. For fiscal years 2011 through 2013, GAO identified the budget and staff resources of the Commerce OIG and other cabinet-level OIGs and their reported accomplishments for comparison; reviewed the Commerce OIG's audit coverage of bureaus and offices, management challenges, and high-risk areas; compared the OIG's hotline policies with hotline guidance provided through CIGIE; and tested a random sample of hotline complaints. GAO also reviewed the OIG's efforts to address employee concerns from the 2012 FEVS results. During fiscal years 2011 through 2013, the Department of Commerce (Commerce) Office of Inspector General (OIG) experienced reductions in total budgetary resources from about $47 million to about $41 million, or almost 13 percent, compared with the average reduction of about 6 percent for all other cabinet-level OIGs. The Commerce OIG's full-time equivalent staff declined from 171 to 137, or about 20 percent, a steeper decline than the average of about 5 percent for the other OIGs. The Commerce OIG reported approximately $543 million in monetary accomplishments from audits, evaluations, and investigations for the period. Differences in missions and programs of the cabinet-level departments and agencies result in varied opportunities for OIGs to provide monetary accomplishments.
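The budget and staffing declines above, and the return-per-dollar figure discussed next, reduce to simple arithmetic. The sketch below is illustrative only: the fiscal year 2012 budget figure of $42 million is an assumed interpolation, since the report states only the fiscal year 2011 and 2013 amounts.

```python
def pct_change(start, end):
    """Percentage change from start to end (negative = decline)."""
    return (end - start) / start * 100.0

# Figures from the report (dollars in millions; staff in FTEs).
budget_decline = pct_change(47.0, 41.0)   # almost -13 percent
staff_decline = pct_change(171.0, 137.0)  # about -20 percent

# Return per budget dollar: reported monetary accomplishments divided
# by total budgetary resources over the 3 years. The $42 million
# fiscal year 2012 figure is an assumption, not stated in the report.
accomplishments = 543.0
total_budget = 47.0 + 42.0 + 41.0
return_per_dollar = accomplishments / total_budget  # about $4.18
```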
While the Commerce OIG's return on each budget dollar was within the range of the lowest and highest returns for all other OIGs for fiscal years 2011 through 2013, its average return of $4.18 over the 3-year period was less than the average return of about $22.64 for the other cabinet-level OIGs. During this period of constrained resources, the Commerce OIG conducted mandatory audits that covered all bureaus and offices and provided performance audit coverage of Commerce's largest bureaus and offices. It also audited areas identified by the OIG as management challenges. However, during the 3-year period, the OIG did not conduct performance audits of the economy, efficiency, and effectiveness of programs specific to Commerce's smaller bureaus and offices, which had combined fiscal year 2013 total budgetary resources of approximately $2.4 billion. In addition, the OIG did not conduct audits over the 3-year period of two areas on GAO's high-risk list relevant to Commerce: (1) managing federal real property and (2) ensuring the effective protection of technologies critical to U.S. national security interests. The OIG's risk-based audit planning contributed to gaps in audit coverage because the office did not provide periodic performance audit coverage of Commerce's smaller programs on a rotational basis and did not fully consider all GAO high-risk areas. The Commerce OIG's hotline policies and procedures were generally consistent with recommended hotline practices of other OIGs provided through the Council of the Inspectors General on Integrity and Efficiency (CIGIE). However, through a review of a random sample of OIG hotline cases from fiscal years 2011 through 2013, GAO identified numerous instances where the OIG did not follow one or more of its own hotline policies and procedures regarding the processing, disposition, and timeliness of hotline cases.
The OIG could not reasonably ensure that its hotline policies and procedures were consistently followed because of a lack of ongoing monitoring of its internal control activities. The Commerce OIG's Federal Employee Viewpoint Survey (FEVS) results for 2013 and 2014 improved after OIG efforts to address the poor 2012 FEVS results, but responses to specific survey questions remained lower than the government-wide average. The OIG's efforts followed much of the guidance issued by the Office of Personnel Management (OPM) to address FEVS results, but they did not include an action plan with measures of success. GAO recommends that the IG (1) augment the OIG's audit planning to consider a rotation of performance audit coverage among smaller Commerce programs and applicable GAO high-risk areas; (2) include monitoring of internal controls for the OIG's hotline operations; and (3) develop an action plan with measures of success to address FEVS results. In commenting on a draft of the report, the Commerce IG concurred with GAO's recommendations.
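The FEVS comparison described in this report — flagging a survey question as a statistically significant weakness when the confidence interval around the OIG's negative-response estimate sits wholly above the government-wide average — can be sketched as follows. The function, sample figures, and threshold logic are illustrative assumptions, not GAO's actual method or data.

```python
import math

def classify_question(p_hat, n, benchmark, z=1.96):
    """Compare an OIG negative-response estimate (p_hat, from n responses)
    with a government-wide benchmark using a normal-approximation 95
    percent confidence interval. Returns 'worse' when the whole interval
    lies above the benchmark, 'better' when it lies below, and
    'indistinguishable' when the benchmark falls inside the interval."""
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    lower, upper = p_hat - half_width, p_hat + half_width
    if lower > benchmark:
        return "worse"        # significantly more negative than average
    if upper < benchmark:
        return "better"       # significantly less negative than average
    return "indistinguishable"

# Hypothetical example: 40 percent negative responses from 120 OIG
# employees, against a 20 percent government-wide average.
result = classify_question(0.40, 120, 0.20)  # "worse"
```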
Between July 1985 and June 1999, we reviewed, reported, and testified on the SBIR program many times at the request of the Congress. While our work focused on many different aspects of the program, we generally found that SBIR is achieving its goals to enhance the role of small businesses in federal R&D, stimulate commercialization of research results, and support the participation of small businesses owned by women and/or disadvantaged persons. Participating agencies and companies that we surveyed during the course of our reviews generally rated the program highly. Specific examples of program success that we identified include the following:

High-quality research. Throughout the life of the program, awards have been based on technical merit and are generally of good quality. For example, in 1989 we reported that, according to agency officials, more than three-quarters of the research conducted with SBIR funding was as good as or better than other agency-funded research. Agency officials also rated the research as more likely than other research they oversaw to result in the invention and commercialization of new products. When we again looked at the quality of research proposals in 1995, we found that while it was too early to make a conclusive judgment about the long-term quality of the research, the quality of proposals remained good, according to agency officials.

Widespread competition. The SBIR program successfully attracts many qualified companies, has had a high level of competition, and consistently has had a high number of first-time participants. Specifically, we reported that the number of proposals that agencies received each year had been increasing. In addition, as we reported in 1998, agencies rarely received only a single proposal in response to a solicitation, indicating a sustained level of competition for the awards. We also found that the agencies deemed many more proposals worthy of awards than they were able to fund.
For example, the Air Force deemed 1,174 proposals worthy of awards in fiscal year 1993 but funded only 470. Moreover, from fiscal years 1993 through 1997, one-third of the companies that received awards were first-time participants. This suggests that the program attracts hundreds of new companies annually.

Effective outreach. SBIR agencies consistently reach out to foster participation by women-owned or socially and economically disadvantaged small businesses. For example, we found that DOD's SBIR managers participated in a number of regional small business conferences and workshops that are specifically designed to foster increased participation by women-owned and socially and economically disadvantaged small businesses.

Successful commercialization. SBIR successfully fosters commercialization of research results. At various points in the life of the program, we have reported that SBIR has been successful in increasing private sector commercialization of innovations. For example, past GAO and DOD surveys of companies that received SBIR Phase II funding have determined that approximately 35 percent of the projects resulted in the sales of products or services, and approximately 45 percent of the projects received additional developmental funding. We have also reported that agencies were using various techniques to foster commercialization. For example, in an attempt to get those companies with the greatest potential for commercial success to the marketplace sooner, DOD instituted a Fast Track program, whereby companies that are able to attract outside commitments or capital for their research during Phase I are given higher priority in receiving a Phase II award.

Helping to serve mission needs. SBIR has helped serve agencies' missions and R&D needs. Agencies differ in the emphasis they place on funding research to support their mission and to support more generalized research. Specifically, we found that DOD links its projects more closely to its mission.
In comparison, other agencies emphasize research that will be commercialized by the private sector. Many of the projects DOD funded have specialized military applications, while NIH projects have access to the biomedical market in the private sector. Moreover, we found that SBIR promotes research on the critical technologies identified in lists developed by DOD and/or the National Critical Technologies Panel. Generally, agencies reviewed these listings of critical technologies to develop research topics or conducted research that fell within one of the two lists.

We have also identified areas of weakness and made recommendations that, if addressed, could strengthen the program further. Many of our recommendations for program improvement have been either fully or partially addressed by the Congress in various reauthorizations of the program or by the agencies themselves. For example:

Duplicate funding. In 1995, we identified duplicate funding for similar, or even identical, research projects by more than one agency. A few companies received funding for the same proposals two, three, and even five times before agencies became aware of the duplication. Contributing factors included the fraudulent evasion of disclosure by companies applying for awards, the lack of a consistent definition for key terms such as "similar research," and the lack of interagency sharing of data on awards. In response to our recommendations, SBA strengthened the language agencies use in their application packages to clearly warn applicants about the illegality of entering into multiple agreements for essentially the same effort and developed Internet capabilities to access SBIR data for all of the agencies. In SBA's view, the stronger language regarding the illegality of seeking funding for similar or identical projects addresses the need to develop consistent definitions to help agencies determine when projects are "similar."

Inconsistent interpretations of extramural research budgets.
In 1998, we found that while agency officials adhered to SBIR's program and statutory funding requirements, they used differing interpretations of how to calculate their "extramural research budgets." As a result, some agencies were inappropriately including or excluding some types of expenses. To address our recommendation that SBA provide additional guidance on how participating agencies were to calculate their extramural research budgets, the Congress in 2000 required that the agencies report annually to SBA on the methods used to calculate these budgets.

Geographical concentration of awards. In 1999, in response to congressional concerns about the geographical concentration of SBIR awards, we reported that companies in a small number of states, especially California and Massachusetts, had submitted the most proposals and won the majority of awards. The distribution of awards generally followed the pattern of distribution of non-SBIR expenditures for R&D, venture capital investments, and academic research funds. We reported that some agencies had undertaken efforts to broaden the geographic distribution of awards and that the program implemented by the National Science Foundation had been particularly effective. Although we did not make any recommendations on how to improve the program's outreach to states receiving fewer awards, in the 2000 reauthorization of the program, the Congress established the Federal and State Technology Partnership Program to help strengthen the technological competitiveness of small businesses, especially in those states that receive fewer SBIR grants.

Clarification on commercialization and other SBIR goals.
Finally, in response to our continuing concern that clarification was needed on the relative emphasis that agencies should give to a company's commercialization record and SBIR's other goals when evaluating proposals, in 2000 the Congress required companies applying for a Phase II award to include a commercialization plan with their SBIR proposals. This requirement partially addressed our concern. Moreover, in the spring of 2001, SBA initiated efforts to respond to our recommendation to develop standard criteria for measuring commercial and other outcomes of the SBIR program, such as uniform measures of sales and developmental funding, and incorporate these criteria into its Tech-Net database. Specifically, SBA began implementing a reporting system to measure the program's commercialization success. In fiscal year 2002, SBA further enhanced the reporting system to include commercialization results that would help establish an initial baseline rate of commercialization. In addition, small business firms participating in the SBIR program are required to provide information annually on sales and investments associated with their SBIR projects.

One issue that remains somewhat unresolved after almost two decades of program implementation is how to assess the performance of the SBIR program. As the program has matured, the Congress has emphasized the potential for commercialization as an important criterion in awarding funds and the commercialization of a product as a measure of success for the program. However, in 1999, we reported that the program's other goals also remain important to the agencies. By itself, according to some program managers, limited commercialization may not signal "failure" because a company may have achieved other goals, such as innovation or responsiveness to an agency's research needs. We identified a variety of reasons why assessing the performance of the SBIR program has remained a challenge.
First, because the authorizing legislation and SBA’s policy directives do not define the role of the company’s commercialization record in determining commercial potential and the relative importance of the program’s goals, different approaches have emerged in agencies’ evaluations of proposals. As a result, the relative weight that should be given to the program’s goals when evaluating proposals remains unclear. Innovation and responsiveness to an agency’s needs, for example, may compete with the achievement of commercialization. In the view of many program managers, innovation involves a willingness to undertake R&D with a higher element of risk and a greater chance that it may not lead to a commercial product; responsiveness to an agency’s needs involves R&D that may be aimed at special niches with limited commercial potential. Striking the right balance between achieving commercial sales and encouraging new, unproven technologies is, according to the program managers, one of the key ingredients in the program’s overall success. Second, we found that it has been difficult to find practical ways to define and measure the SBIR program’s goals in order to evaluate proposals. For example, the authorizing legislation lacks a clear definition of “commercialization,” and agencies sometimes differed on its meaning. This absence of a definition makes it more difficult to determine when a frequent winner is “failing” to achieve a sufficient level of commercialization and how to include this information in an agency’s review of the company’s proposal. Similarly, efforts to define and measure technological innovation, which was one of the program’s original goals, have posed a challenge. Although definitions vary, there is widespread agreement that technological innovation is a complex process, particularly in the development of sophisticated modern technologies. 
Finally, we reported that as the emphasis on commercialization had grown, so had concerns that noncommercial successes may not be adequately recognized. For example, program managers identified various projects that met special military or medical equipment needs but that had limited sales potential. These projects would be helpful in reducing the agency's expenditures and meeting the mission of the agency but may not be appropriately captured in typical measurements of commercialization. In general, we found that program managers valued both noncommercial and commercial successes and feared that the former might be ignored in emphasizing the latter. To help evaluate the performance of the program, in the 2000 reauthorization of SBIR, the Congress required SBA to develop a database that would help the agency collect and maintain, in a common format, necessary program output and outcome information. The database is to include the following information on all Phase II awards: (1) revenue from the sale of new products or services resulting from the SBIR-funded research, (2) additional investment from any non-SBIR source for further research and development, and (3) any other description of outputs and outcomes of the awards. In addition, the database is to include general information for all applicants not receiving an award, including an abstract of the project.

In conclusion, Mr. Chairman, our work has shown that, overall, the SBIR program has been successful in meeting its goals and that the Congress and the agencies have implemented actions to strengthen the program over time. However, an assessment of the program's results remains a challenge because of the lack of clarity on how much emphasis the program should place on commercialization versus other goals.

For further information, please contact Anu Mittal at (202) 512-3841 or mittala@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the Small Business Innovation Research (SBIR) program was established in 1982, GAO has consistently reported on its success in benefiting small, innovative companies, strengthening their role in federal research and development (R&D), and helping federal agencies achieve their R&D goals. However, through these reviews GAO has also identified areas where action by participating agencies or the Congress could build on the program's successes and improve its operations. This statement for the record summarizes the program's successes and improvements over time, as well as the continuing challenge of assessing the long-term results of the program. Between July 1985 and June 1999, GAO reviewed, reported, and testified on the SBIR program many times at the request of the Congress. While GAO's work focused on many different aspects of the program, it generally found that SBIR is achieving its goals to enhance the role of small businesses in federal R&D, stimulate commercialization of research results, and support the participation of small businesses owned by women and/or disadvantaged persons. Participating agencies and companies that GAO surveyed during the course of its reviews generally rated the program highly. GAO also identified areas of weakness and made recommendations that, if addressed, could strengthen the program further. Some of these concerns related to (1) duplicate funding for similar, or even identical, research projects by more than one agency, (2) inconsistent interpretations of extramural research budgets by participating agencies, (3) geographical concentration of awards in a small number of states, and (4) lack of clarification on the emphasis that agencies should give to a company's commercialization record when assessing its proposals. Most of GAO's recommendations for program improvement have been either fully or partially addressed by the Congress in various reauthorizations of the program or by the agencies themselves.
One issue that remains somewhat unresolved after almost two decades of program implementation is how to assess the performance of the SBIR program. As the program has matured, the Congress has emphasized the potential for commercialization as an important criterion in awarding funds and the commercialization of a product as a measure of success for the program. However, in 1999, GAO reported that the program's other goals also remain important to the agencies. By itself, according to some program managers, limited commercialization may not signal "failure" because a company may have achieved other goals, such as innovation or responsiveness to an agency's research needs. GAO identified a variety of reasons why assessing the performance of the SBIR program has remained a challenge. First, because the authorizing legislation and the Small Business Administration's (SBA) policy directives do not define the role of the company's commercialization record in determining commercial potential and the relative importance of the program's goals, different approaches have emerged in agencies' evaluations of proposals. Second, GAO found that it has been difficult to find practical ways to define and measure the SBIR program's goals in order to evaluate proposals. For example, the authorizing legislation lacks a clear definition of "commercialization," and agencies sometimes differed on its meaning. Finally, GAO reported that as the emphasis on commercialization had grown, so had concerns that noncommercial successes may not be adequately recognized. For example, program managers identified various projects that met special military or medical equipment needs but that had limited sales potential.
The federal government invests more than $80 billion annually in IT, but many of these investments fail to meet cost and schedule expectations or make significant contributions to mission-related outcomes. We have previously testified that the federal government has spent billions of dollars on failed IT investments, such as the following:

- the Department of Defense's (DOD) Expeditionary Combat Support System, which was canceled in December 2012, after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds;

- the Department of Homeland Security's Secure Border Initiative Network program, which was ended in January 2011, after the department obligated more than $1 billion to the program, because it did not meet cost-effectiveness and viability standards;

- the Department of Veterans Affairs' (VA) Financial and Logistics Integrated Technology Enterprise program, which was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program;

- the Farm Service Agency's Modernize and Innovate the Delivery of Agricultural Systems program, which was to replace aging hardware and software applications that process benefits to farmers, and which was halted after an investment of about 10 years and at least $423 million, while delivering only about 20 percent of the functionality that was originally planned;
- the Office of Personnel Management's Retirement Systems Modernization program, which was canceled in February 2011, after spending approximately $231 million on the agency's third attempt to automate the processing of federal employee retirement claims;

- the National Oceanic and Atmospheric Administration, DOD, and the National Aeronautics and Space Administration's National Polar-orbiting Operational Environmental Satellite System, which was a tri-agency weather satellite program that the White House Office of Science and Technology Policy stopped in February 2010 after the program spent 16 years and almost $5 billion; and

- the VA Scheduling Replacement Project, which was terminated in September 2009 after spending an estimated $127 million over 9 years.

These and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT investments. Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, particularly on the part of chief information officers (CIO). For example, we have reported that not all CIOs had the authority to review and approve the entire agency IT portfolio and that CIOs' authority was limited. Recognizing the severity of issues related to government-wide management of IT, in December 2014 Congress enacted IT reform legislation known as the Federal Information Technology Acquisition Reform Act (FITARA). The law holds promise for improving agencies' acquisition of IT and enabling Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas:

Agency CIO authority enhancements.
Agency CIOs are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing the Office of Management and Budget's (OMB) incremental development guidance, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO.

Enhanced transparency and improved risk management. OMB and agencies are to make publicly available detailed information on federal IT investments, and agency CIOs are to categorize their IT investments by risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment's program manager conduct a review aimed at identifying and addressing the causes of the risk.

Portfolio review. Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. In developing the associated process, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings.

Federal data center consolidation initiative (FDCCI). Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative and to provide annual reports on cost savings achieved.

Expansion of training and use of IT cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres.
Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing initiative. OMB is also required to issue related regulations. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all Executive Branch agencies as a single user. In addition, in June 2015, OMB released guidance describing how agencies are to implement the law. OMB’s guidance states that it is intended to, among other things: assist agencies in aligning their IT resources to statutory requirements; establish government-wide IT management controls that will meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO’s role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT cost, schedule, performance, and security. In this regard, the guidance reiterates OMB’s existing guidance on PortfolioStat, the IT Dashboard, and the federal data center consolidation initiative, and expands its existing guidance on TechStat sessions. The guidance includes several actions agencies are to take to establish a basic set of roles and responsibilities (referred to as the “common baseline”) for CIOs and other senior agency officials that are needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they will make to ensure that common baseline responsibilities are implemented. 
Agencies were to submit their plans to OMB’s Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of October 30, 2015, none of the 24 Chief Financial Officers Act agencies had made their plans publicly available. The guidance also noted that OMB will help support agency implementation of the common baseline by, for example, requiring the Federal CIO Council to, on a quarterly basis, discuss topics related to the implementation of the common baseline and to assist agencies by sharing examples of agency governance processes and IT policies. Further, by June 30, 2015, the President’s Management Council was to select three members from the council to provide an update on government-wide implementation of FITARA on a quarterly basis through September 2016. However, as of October 28, 2015, OMB officials stated that the President’s Management Council had not yet selected members to provide these updates. In addition, OMB recently issued a memorandum regarding commodity IT acquisitions and noted that agencies buy and manage their IT in a fragmented and inefficient manner that conflicts with the goals of FITARA. Among other things, the memorandum directed agencies to standardize laptop and desktop configurations for common requirements and reduce the number of contracts for laptops and desktops by consolidating purchasing. The memorandum notes that OMB intends for agencies to implement standard configurations over time by using approved contracts, with a government-wide goal of 75 percent of agencies using approved contracts by fiscal year 2018. The memorandum requires agencies to develop transition plans to achieve this goal and submit them to OMB by February 28, 2016. 
Our government-wide high-risk area Improving the Management of IT Acquisitions and Operations highlights critical IT initiatives, four of which align with provisions in FITARA: (1) an emphasis on incremental development, (2) a key transparency initiative, (3) efforts to consolidate data centers, and (4) efforts to streamline agencies’ portfolios of IT investments. Our high-risk report notes that implementation of these initiatives had been inconsistent, and more work remained to demonstrate progress in achieving IT acquisition outcomes. Implementing the provisions from the law, along with our outstanding recommendations, will be necessary for agencies to demonstrate progress in addressing this high-risk area. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce investment risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. However, we recently reported that less than half of selected investments at five major agencies planned to deliver capabilities in 12-month cycles. Accordingly, we recommended that OMB develop and issue clearer guidance on incremental development and that selected agencies update and implement their associated policies. Most agencies agreed with our recommendations or had no comment. In January 2010, the Federal CIO began leading TechStat sessions—face-to-face meetings to terminate or turn around IT investments that are failing or are not producing results. These meetings involve OMB and agency leadership and are intended to increase accountability and improve performance. OMB reported that federal agencies achieved over $3 billion in cost savings or avoidances as a result of these sessions in 2010. Subsequently, OMB empowered agency CIOs to hold their own TechStat sessions within their respective agencies. 
We have since reported that OMB and selected agencies held multiple TechStats, but additional OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects and that resulting cost savings were valid. We concluded that until OMB and agencies develop plans to address these investments, the investments would likely remain at risk. Among other things, we recommended that OMB require agencies to address high-risk investments. OMB generally agreed with this recommendation. However, OMB has since held only one TechStat: between March 2013 and October 2015, its sole session, held in July 2015, addressed the Department of State’s legacy consular systems investment. Moreover, OMB has not listed any savings from TechStats in any of its required quarterly reporting to Congress since June 2012. To help the government achieve transparency while managing legacy investments, in June 2009, OMB established a public website (referred to as the IT Dashboard) that provides detailed information on major IT investments at 27 federal agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB’s instructions, should reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. As of August 2015, according to the IT Dashboard, 163 of the federal government’s 738 major IT investments—totaling $9.8 billion—were in need of management attention (rated “yellow” to indicate the need for attention or “red” to indicate significant concerns). (See fig. 1.) Over the past several years, we have made over 20 recommendations to help improve the accuracy and reliability of the information on the IT Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comment. 
In addition to spending money on new IT development, agencies also plan to spend a significant amount of their fiscal year 2016 IT budgets on the operations and maintenance (O&M) of legacy (i.e., steady-state) systems. From fiscal year 2010 to fiscal year 2016, this amount has increased, while the amount invested in developing new systems has decreased by about $7.1 billion. (See figure 2.) This raises concerns about agencies’ ability to replace systems that are no longer cost-effective or that fail to meet user needs. Of the more than $79 billion budgeted for federal IT in fiscal year 2016, 26 federal agencies plan to spend about $60 billion, more than three-quarters of the total budgeted, on the O&M of legacy investments. Figure 3 provides a visual summary of the relative cost of major and nonmajor investments, both in development and O&M. Given the size and magnitude of these investments, it is important that agencies effectively manage the O&M of existing investments to ensure that they (1) continue to meet agency needs, (2) deliver value, and (3) do not unnecessarily duplicate or overlap with other investments. To accomplish this, agencies are required by OMB to perform annual operational analyses of these investments, which are intended to serve as a periodic examination of an investment’s performance against, among other things, established cost, schedule, and performance goals. However, we have reported that agencies were not consistently performing such analyses and that billions of dollars in O&M investments had not undergone needed analyses. Specifically, as detailed in our November 2013 report, only 1 of the government’s 10 largest O&M investments underwent an OMB-required operational analysis. We recommended that operational analyses be completed on the remaining 9 investments. Most agencies generally agreed with our recommendations. 
To improve the efficiency, performance, and environmental footprint of federal data center activities, OMB established the federal data center consolidation initiative in February 2010. In a series of reports, we found that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in the execution and oversight of the initiative. Most recently, we reported that, as of May 2014, agencies collectively reported that they had a total of 9,658 data centers; as of May 2015, they had closed 1,684 data centers and were planning to close an additional 2,431—for a total of 4,115—by the end of September 2015. We also noted that between fiscal years 2011 and 2017, agencies reported planning a total of about $5.3 billion in cost savings and avoidances due to the consolidation of federal data centers. In correspondence subsequent to the publication of our report, DOD’s Office of the CIO identified an additional $2.1 billion in savings to be realized beyond fiscal year 2017, which increased the total savings across the federal government to about $7.4 billion. Further, since our May 2014 report we received additional information from other agencies about their actual 2014 cost savings and revised plans for future savings. This information is shown in table 1, which provides a summary of agencies’ total data center cost savings and cost avoidances between fiscal years 2011 and 2017, as well as DOD cost savings and cost avoidances to be realized beyond 2017. However, in our September 2014 report, we noted that planned savings may be understated because of difficulties agencies encountered when calculating savings and communicating their estimates to OMB. We made recommendations to ensure the initiative improves efficiency and achieves cost savings. Most agencies agreed with our recommendations or did not comment. 
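The savings arithmetic above can be tallied in a short script (illustrative only; the dollar figures are the ones cited in this statement, and the two labels are shorthand for the components described in the text):

```python
# Tally of reported data center consolidation cost savings and avoidances,
# in billions of dollars, as cited in the statement above.
savings_billions = {
    "fiscal years 2011-2017, all agencies (planned)": 5.3,
    "beyond fiscal year 2017, DOD (subsequently identified)": 2.1,
}

total = round(sum(savings_billions.values()), 1)
print(f"total reported savings: about ${total} billion")  # about $7.4 billion
```

This reproduces the approximately $7.4 billion government-wide total the statement reports after DOD's additional savings are included.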
To better manage existing IT systems, OMB launched the PortfolioStat initiative, which requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with the agency’s mission and business functions. In November 2013, we reported that agencies continued to identify duplicative spending as part of PortfolioStat and that this initiative had the potential to save at least $5.8 billion through fiscal year 2015; however, weaknesses existed in agencies’ implementation of the initiative, such as limitations in the CIOs’ authority. We made more than 60 recommendations to improve OMB’s and agencies’ implementation of PortfolioStat. OMB partially agreed with our recommendations, and responses from 21 of the agencies varied, with some agreeing and others not. In April 2015, we reported that agencies decreased their planned PortfolioStat savings to approximately $2 billion—a 68 percent reduction from the amount they reported to us in 2013. Additionally, although agencies also reported having achieved approximately $1.1 billion in savings, inconsistencies in OMB’s and agencies’ reporting made it difficult to reliably measure progress in achieving savings. Among other things, we made recommendations to OMB aimed at improving the reporting of achieved savings, with which it agreed. We have also recently reported on two additional key areas of agencies’ IT spending: software licensing and mobile devices. Regarding software licensing, we recently reported that better management was needed to achieve significant savings government-wide. In particular, 22 of the 24 major agencies we reviewed did not have comprehensive license policies, and only 2 had comprehensive license inventories. We recommended that OMB issue needed guidance to agencies and made more than 130 recommendations to the agencies to improve their policies and practices for managing software licenses. 
OMB disagreed with the need for guidance. However, we believe that without such guidance, agencies will likely continue to lack visibility into what needs to be managed. Most agencies generally agreed with the recommendations or had no comments. We have also reported that most of the 15 agencies in our mobile devices review did not have an inventory of mobile devices and associated services, and only 1 of the 15 agencies we reviewed had documented procedures for monitoring spending. Accordingly, we recommended that the agencies take actions to improve their inventories and control processes and that OMB measure and report progress in achieving cost savings. OMB and 14 of the agencies generally agreed with the recommendations or had no comment. The Department of Defense partially agreed, and we maintained that actions were needed. In our February 2015 high-risk report, we identified actions that OMB and the agencies need to take to make progress in this area. These include implementing the recently enacted statutory requirements promoting IT acquisition reform, as well as implementing our previous recommendations, such as updating the public version of the IT Dashboard throughout the year. As noted in that report, we have made multiple recommendations to improve agencies’ management of their IT acquisitions, many of which have been discussed in this statement. In the last 6 years, we have made approximately 800 recommendations to multiple agencies. As of October 2015, about 32 percent of these recommendations had been implemented. Also in our high-risk report, we stated that OMB and agencies will need to demonstrate measurable government-wide progress in the following key areas: implement at least 80 percent of GAO’s recommendations related to the management of IT acquisitions and operations within 4 years; ensure that a minimum of 80 percent of the government’s major acquisitions deliver functionality every 12 months; and 
achieve no less than 80 percent of the planned PortfolioStat savings and 80 percent of the savings planned for data center consolidation. In conclusion, with the recent passage of IT reform legislation, the federal government has an opportunity to improve the transparency and management of IT acquisition and operations, and strengthen the authority of CIOs to provide needed direction and oversight. Further, by identifying the management of IT acquisitions and operations as a new government-wide high-risk area, we are bringing necessary attention to several critical IT initiatives in need of additional congressional oversight. OMB and federal agencies should expeditiously implement the requirements of the legislation and continue to implement our previous recommendations. To help ensure that these improvements are achieved, continued congressional oversight of OMB’s and agencies’ implementation efforts is essential. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For additional information about this high-risk area, contact David A. Powner at (202) 512-9286 or pownerd@gao.gov, Carol Cha at (202) 512-4456 or chac@gao.gov, or Valerie Melvin at (202) 512-6304 or melvinv@gao.gov. Individuals who made key contributions to this testimony are Kevin Walsh (Assistant Director), Chris Businsky, Rebecca Eyler, Kaelin Kuhn, and Jessica Waselkow. Telecommunications: Agencies Need Better Controls to Achieve Significant Savings on Mobile Devices and Services. GAO-15-431. May 21, 2015. Information Technology: Additional OMB and Agency Actions Needed to Ensure Portfolio Savings Are Realized and Effectively Tracked. GAO-15-296. April 16, 2015. Federal Chief Information Officers: Reporting to OMB Can Be Improved by Further Streamlining and Better Focusing on Priorities. GAO-15-106. April 2, 2015. 
High-Risk Series: An Update. GAO-15-290. February 11, 2015. Data Center Consolidation: Reporting Can Be Improved to Reflect Substantial Planned Savings. GAO-14-713. September 25, 2014. Federal Software Licenses: Better Management Needed to Achieve Significant Savings Government-Wide. GAO-14-413. May 22, 2014. Information Technology: Agencies Need to Establish and Implement Incremental Development Policies. GAO-14-361. May 1, 2014. IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available. GAO-14-64. December 12, 2013. Information Technology: Agencies Need to Strengthen Oversight of Multibillion Dollar Investments in Operations and Maintenance. GAO-14-66. November 6, 2013. Information Technology: Additional OMB and Agency Actions Are Needed to Achieve Portfolio Savings. GAO-14-65. November 6, 2013. Information Technology: Additional Executive Review Sessions Needed to Address Troubled Projects. GAO-13-524. June 13, 2013. Information Technology: Agencies Need to Strengthen Oversight of Billions of Dollars in Operations and Maintenance Investments. GAO-13-87. October 16, 2012. IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way to Better Inform Decision Making. GAO-12-210. November 7, 2011. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government invests more than $80 billion annually in IT. However, these investments frequently fail, incur cost overruns and schedule slippages, or contribute little to mission-related outcomes. As GAO has previously reported, this underperformance of federal IT projects can be traced to a lack of disciplined and effective management and inadequate executive-level oversight. Accordingly, in December 2014, IT reform legislation was enacted, aimed at improving agencies' acquisition of IT. Further, earlier this year GAO added improving the management of IT acquisitions and operations to its high-risk list—a list of agencies and program areas that are high risk due to their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation. This statement provides information on FITARA and GAO's designation of IT acquisitions and operations as a high-risk area. In preparing this statement, GAO relied on its previously published work in these areas. The law commonly known as the Federal Information Technology Acquisition Reform Act (FITARA) was enacted in December 2014 and aims to improve federal information technology (IT) acquisition and operations. The law includes specific requirements related to seven areas. For example, it addresses Agency Chief Information Officer (CIO) authority enhancements. Among other things, agency CIOs are required to approve the IT budget requests of their respective agencies and certify that IT investments are adequately implementing the Office of Management and Budget's (OMB) incremental development guidance. Enhanced transparency and improved risk management. OMB and agencies are to make publicly available detailed information on federal IT investments, and agency CIOs are to categorize IT investments by risk. Additionally, if major IT investments are rated as high risk for 4 consecutive quarters, the agencies are to conduct a review of the investment. Portfolio review. 
Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness, and identify potential waste and duplication. OMB is required to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings. Federal data center consolidation initiative. Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. OMB is required to develop a goal of how much is to be saved through this initiative, and report on progress annually. Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing initiative. OMB has released guidance for agencies to implement provisions of FITARA, which includes actions agencies are to take regarding responsibilities for CIOs. The guidance also reiterates OMB's existing guidance on IT portfolio management, a key transparency website, and the federal data center consolidation initiative; and expands its existing guidance on reviews of at-risk investments. Agencies were to conduct a self-assessment and submit a plan to OMB by August 2015 describing the changes they will make to ensure that responsibilities are implemented. Further, portions of these plans are required to be made publicly available 30 days after OMB's approval; as of October 30, 2015, none of the 24 Chief Financial Officers Act agencies had done so. Further, FITARA's provisions are similar to areas covered by GAO's high-risk area to improve the management of IT acquisitions and operations. For example, GAO has noted that improvements are needed in federal efforts to enhance transparency, consolidate data centers, and streamline agencies' IT investment portfolios. 
To demonstrate progress in addressing this high-risk area, agencies will need to implement the legislation's provisions and GAO's outstanding recommendations. Over the last 6 years, GAO made about 800 recommendations to OMB and agencies to improve acquisition and operations of IT. As of October 2015, about 32 percent of these had been implemented. It will be critical for agencies to implement the remaining GAO recommendations and the requirements of FITARA to achieve improvements.
The 2008 Leadership Act called on the U.S. Global AIDS Coordinator to develop a 5-year strategy to combat global HIV/AIDS, including a plan to achieve a number of prevention, treatment, and care program goals. The 5-year PEPFAR strategy, which OGAC released in December 2009, specifies multiyear program goals and outlines multiyear targets, including those listed in the Leadership Act. The 2008 Leadership Act, which amends the 2003 Leadership Act, requires that OGAC submit an annual report to Congress, including an assessment of progress toward the achievement of annual goals. If annual goals are not being met, the 2008 Leadership Act states that the report should identify the reasons for such failure. GPRA and our prior work identify practices related to performance planning and reporting. GPRA calls for the use of several performance management practices intended to improve federal program effectiveness, accountability, and service delivery and to enhance congressional decision making by requiring federal agencies to provide more objective information on program performance. In addition, our prior work suggests the use of a practice to bolster program performance reporting. These practices include the following, among others: Performance planning. GPRA calls for preparation of public annual performance plans that articulate goals for the upcoming fiscal year. These plans should link annual program goals to program activities, include indicators that will be used to measure performance, provide information on the operational processes and resources required to meet the performance goals, and identify the procedures that will be used to verify and validate performance information. Performance reporting. GPRA calls for annual performance reports that review success in achieving the performance goals for the fiscal year. 
The reports are to describe and review results compared with performance goals, provide explanations for any unmet goals and actions needed to address them, and include summaries of completed program evaluations. In addition, our prior work found that explaining any limitations of performance information can provide context for understanding and assessing program performance and the costs and challenges faced in gathering, processing, and analyzing data. This practice can help identify the actions needed to address any inadequacies in the completeness and reliability of performance data and thereby improve program performance reporting. In August 2009, OGAC issued its Next Generation Indicators Reference Guide, providing an updated list of indicators for establishing targets and reporting on results of PEPFAR prevention, care, treatment, and health systems strengthening programs. The guidance classifies 32 indicators as essential and reported—that is, indicators that PEPFAR country or regional teams must use in submitting data on program results to OGAC. (See app. III for a list of the 32 essential reported PEPFAR indicators.) The guidance advises PEPFAR country and regional teams to require PEPFAR implementing partners to submit data for an additional set of indicators, if applicable, but does not require country and regional teams to submit these data to OGAC. The guidance also provides a list of recommended indicators for implementing partners and PEPFAR program managers who need additional information for program management. The guidance states that PEPFAR interagency country or regional teams determine how to collect data from PEPFAR implementing partners and relevant national systems, as well as how to aggregate, store, and use the PEPFAR program monitoring indicators in country. 
OGAC, USAID, and CDC officials share responsibility for PEPFAR planning and reporting activities—including developing and approving PEPFAR operational plans and reports—and conduct agency-specific planning and reporting procedures. The procedures support agencies’ internal program management and provide data for external reporting on PEPFAR results. OGAC’s Strategic Information (SI) office guides and coordinates PEPFAR performance planning and reporting for countries and regions receiving U.S. HIV/AIDS assistance. SI advisors—as of July 2011, 20 CDC and USAID officials—provide technical support and assistance to country and regional teams for developing annual operational plans for PEPFAR programs. In helping to develop the country-level and regional operational plans, when requested, SI advisors work with the country and regional teams to describe partner-level PEPFAR activities during the preceding fiscal year and establish country-level and regional targets for the coming year. When OGAC receives the operational plans (typically in October), SI advisors review the performance targets. After the plans are approved by the U.S. Global AIDS Coordinator, OGAC aggregates budget, program activity, and planned performance information in the plans to create an annual PEPFAR operational plan to be submitted to Congress. When requested, OGAC’s SI office also guides and assists PEPFAR teams in preparing and submitting data on program results to the U.S. Global AIDS Coordinator. SI advisors work with PEPFAR country and regional teams to submit data on program results semi-annually (typically in May) and annually (typically in November). The semi-annual data consist of targets and results for a subset of eight PEPFAR essential indicators; the annual data consist of targets and results for all 32 essential reported PEPFAR indicators. 
SI advisors review the submitted data, and SI office staff further review and reconcile treatment data with data from the Global Fund, UNAIDS, and the World Health Organization. Once the data are confirmed, OGAC considers them to be PEPFAR’s final results for the year. These data, which OGAC maintains internally, are intended to support PEPFAR program monitoring, midcourse correction, and planning for subsequent fiscal years. PEPFAR program results data also supply information for public reports and other documents, including OGAC’s annual report to Congress on PEPFAR performance, typically published in February, as well as a World AIDS Day (December 1) press release on PEPFAR results. USAID’s Office of HIV/AIDS, in Washington, D.C., and USAID officials in regional and country missions share responsibility for global HIV/AIDS performance planning and reporting, including oversight of USAID implementing partners. The Office of HIV/AIDS comprises four divisions, two of which—the Implementation Support Division and the Strategic Planning, Evaluation, and Reporting Division—provide assistance to the agency and field missions in managing programs and incorporating programmatic best practices. USAID uses PEPFAR program results data for its annual performance plans and reports. USAID also conducts foreign assistance performance planning and reporting jointly with State’s Office of the Director of U.S. Foreign Assistance, using State’s and USAID’s Foreign Assistance Framework. In addition to producing multiyear country assistance strategies and mission strategic plans, USAID country or regional missions complete annual operational plans and annual performance plans and reports for monitoring, evaluating, and reporting progress in achieving the agency’s foreign assistance objectives. 
USAID guidance further specifies required elements of mission performance management plans, including indicators, baseline values and targets, data sources, any known data limitations, and data quality assessment procedures. State’s and USAID’s master list of standard indicators specifies 46 HIV/AIDS- related indicators for setting targets and reporting results. According to USAID officials, the HIV/AIDS-related indicator descriptions are aligned with those for PEPFAR. Through its audits of USAID’s global HIV/AIDS program activities, from fiscal year 2008 to 2011, USAID’s OIG has made recommendations related to performance planning and reporting. We identified 130 USAID OIG recommendations regarding performance monitoring of USAID- administered PEPFAR activities for fiscal years 2008 to 2011, which we categorized using 12 components of HIV/AIDS program monitoring and evaluations systems, as defined by UNAIDS. Of these recommendations, 94 recommendations, or 72 percent, are related to routine program monitoring or data quality—specifically, 39 percent are related to routine program monitoring (producing timely and high-quality program monitoring data); 11 percent are related to supportive supervision and data auditing (monitoring data quality periodically and addressing any obstacles to producing high-quality data); and 22 percent are related to both routine program monitoring and supportive supervision and data auditing. (See fig. 1.) For example, the OIG reported in 2009 that the USAID mission in one country did not sufficiently verify and validate implementing partner performance data and, as a result, recommended that the mission establish procedures, including site visits, for validating these data. (We categorized this recommendation as relating to both routine program monitoring and supportive supervision and data auditing.) 
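The percentages above can be cross-checked with simple arithmetic (illustrative only; the total of 130 recommendations and the category shares are those cited in the text and figure 1):

```python
# Cross-check of the OIG recommendation figures cited above:
# shares of 39%, 11%, and 22% of 130 recommendations should together
# account for the 94 recommendations (72 percent) related to routine
# program monitoring or data quality.
total_recommendations = 130
shares_percent = {
    "routine program monitoring only": 39,
    "supportive supervision and data auditing only": 11,
    "both categories": 22,
}

combined_share = sum(shares_percent.values())                   # 72 percent
combined_count = round(total_recommendations * combined_share / 100)
print(combined_share, combined_count)  # 72 94
```

The combined share of 72 percent corresponds to roughly 94 of the 130 recommendations, matching the figures in the text.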
In addition, we found that a number of recommendations related to human capacity for monitoring and evaluation, often in combination with recommendations for improving program monitoring. For example, a 2010 audit of another USAID country mission’s PEPFAR program found that inadequate training of implementing partner staff resulted in weak data collection methods and reporting of inaccurate performance data. The OIG recommended that the mission develop a training plan for implementing partner staff in charge of data collection and reporting. Figure 1 shows the distribution of recommendations across categories, including routine program monitoring (51 recommendations); routine program monitoring and supportive supervision and data auditing (29); supportive supervision and data auditing (14); supportive supervision and data auditing and human capacity (10); human capacity (5); human capacity and routine program monitoring (6); partnerships for monitoring and evaluation (3); routine program monitoring and partnerships for monitoring and evaluation (2); and organizational structures and routine program monitoring (2). According to data provided by USAID, as of June 2011, the agency had implemented about two-thirds (65 percent) of USAID OIG report recommendations related to program performance monitoring and evaluation; the remaining third (35 percent) are due for final action by December 2011. (See fig. 2.) CDC’s Division of Global HIV/AIDS (DGHA), in Atlanta, Georgia, is responsible, along with CDC officials in 41 overseas offices, for global HIV/AIDS programs in more than 75 countries. DGHA comprises a regional and country management office and eight headquarters-based technical and operational branches, including epidemiology and strategic information; health economics, systems, and integration; and country operations.
These offices and branches manage and provide technical assistance and support to CDC country teams and partner governments, coordinate DGHA involvement in PEPFAR interagency activities and partnerships with international organizations, and support regional and country offices with implementing partner selection and performance monitoring. CDC uses PEPFAR program results data for its annual performance plans and reports. In addition, in 2010, CDC instituted quarterly program reviews for all CDC divisions, and DGHA underwent its first quarterly program review in November 2010. For these CDC management reviews, DGHA selected 16 1-year and 14 4-year goals under four priority strategies: strengthen public health systems globally; scale up combination prevention programs and treat HIV globally in a cost-effective manner; transition HIV/AIDS treatment programs to host-country governments; and support the Global Health Initiative. DGHA reports quarterly to the Office of the Associate Director for Program on eight PEPFAR indicators, representing 31 PEPFAR countries and three regions. According to CDC officials, the quarterly program review is intended to inform CDC’s annual performance plan and report. Beginning in February 2011, DGHA officials initiated a series of in-country reviews—called country management and support visits—of CDC country office management of global HIV/AIDS programs. DGHA officials completed eight visits by the end of June 2011 and planned to complete up to 17 additional visits over the next several months, with up to 34 country visits being completed by the end of fiscal year 2012. DGHA plans to make summaries of the country visits available to the public. In addition, CDC develops annual interagency programmatic planning and monitoring documents called country assistance plans.
In February 2010, CDC technical and budget officials and senior management reviewed country assistance plans for seven countries: Afghanistan, Brazil, Laos, Mali, Papua New Guinea, Senegal, and Sierra Leone. These plans provide information on planned activities and country targets and results, among other things. CDC’s country assistance plan guidance recommends that CDC country offices refer to PEPFAR indicators in the plans, as appropriate, when reporting results. During a pilot project for assessing the quality of treatment program data, CDC found that data quality varied across CDC-funded treatment sites. CDC examined the reliability of the numbers of patients reported as currently on treatment at 31 CDC-funded PEPFAR treatment sites in Mozambique, Tanzania, and Côte d’Ivoire. CDC found that counting actual patient visit or drug pickup data at the 31 sites yielded a lower total than the method used by some implementing partners (39,577 patients versus 48,796 patients, respectively). The implementing partners sometimes summed the number of people who ever started treatment and subtracted those known to have left the program, resulting in misclassification of patients’ treatment status and inflation of reported results. Based on these assessments, CDC recommended (1) refining definitions of indicators and acceptable methods for deriving the information; (2) developing a data quality assessment program with a standardized protocol for evaluating data; (3) completing the treatment data quality assessment at all PEPFAR-supported sites; and (4) sharing the assessments’ findings with all PEPFAR country teams, implementing partners, and ministries of health. OGAC, USAID, and CDC have issued several performance management planning and reporting documents in response to the requirements included in the 2008 Leadership Act and practices specified in GPRA. (See app. IV for a list of targets and results reported by OGAC, USAID, and CDC.)
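The discrepancy CDC found between the two counting methods can be illustrated with a small synthetic example. The patient records and field names below are hypothetical (this is not CDC's actual assessment protocol); they simply show why summing everyone who ever started treatment and subtracting only documented departures overstates the number currently on treatment, relative to counting actual visits or drug pickups.

```python
# Synthetic patient records (hypothetical illustration only).
# Fields: ever started treatment, known to have left the program,
# and had a visit or drug pickup during the reporting window.
patients = [
    {"id": "p1", "started": True, "known_left": False, "seen_in_window": True},
    {"id": "p2", "started": True, "known_left": False, "seen_in_window": False},  # silently lost to follow-up
    {"id": "p3", "started": True, "known_left": True,  "seen_in_window": False},  # documented departure
    {"id": "p4", "started": True, "known_left": False, "seen_in_window": True},
]

# Method used by some implementing partners: everyone who ever started,
# minus those known to have left. Undocumented dropouts (p2) stay counted.
partner_method = sum(1 for p in patients if p["started"] and not p["known_left"])

# Method CDC applied in its pilot: count actual visit or drug pickup data.
visit_method = sum(1 for p in patients if p["seen_in_window"])

print(partner_method, visit_method)  # the partner method yields the higher figure
```

In this toy data the partner method counts three patients while the visit-based count finds two, mirroring (in miniature) the 48,796 versus 39,577 gap CDC reported.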
OGAC has issued annual PEPFAR operational plans for fiscal years 2009 and 2010. According to OGAC officials, the PEPFAR operational plan—which aggregates information from country and regional operational plans—serves as its annual performance plan. OGAC also issues an annual PEPFAR performance report to Congress. OGAC’s most recent annual report to Congress, for fiscal year 2010, includes a series of tables showing programwide PEPFAR results for prevention, treatment, and care indicators; the annual report for fiscal year 2009 also includes results for health systems strengthening indicators. In most cases, these results are also displayed by country or region. In March 2011, USAID issued, jointly with State, the “Foreign Operations FY 2010 Performance Report, FY 2012 Performance Plan” (State-USAID APR/APP) as part of State’s and USAID’s congressional budget justification for fiscal year 2012. The document provides, among other things, information on 2010 targets and results for two PEPFAR indicators: (1) number of individuals receiving antiretroviral treatment, and (2) number of individuals infected or affected by HIV/AIDS, including orphans and vulnerable children, who were receiving care and support services. The State-USAID APR/APP cites PEPFAR’s 5-year target for number of HIV infections averted and provides an annual target for 2010 but does not report on annual results. CDC’s “Fiscal Year 2012 Justification of Estimates for Appropriation Committees” and “FY 2012 Online Performance Appendix” constitute its performance report and performance plan for fiscal years 2010 and 2012, respectively.
In these documents, CDC reports on 2010 targets and results using four PEPFAR indicators: (1) number of individuals receiving antiretroviral treatment; (2) number of individuals infected and affected by HIV/AIDS, including orphans and vulnerable children, receiving care and support services; (3) number of pregnant women receiving HIV counseling and testing; and (4) number of HIV-positive pregnant women receiving antiretroviral prophylaxis. OGAC’s most recent annual performance documents do not provide information related to annual targets, as required by the 2008 Leadership Act and consistent with GPRA. (See fig. 3.) PEPFAR country and regional operational plans contain country-level and regional targets for the coming year and data showing program targets and results, measured by PEPFAR indicators. However, the annual PEPFAR operational plans and reports that OGAC submitted to Congress for fiscal years 2009 and 2010 do not contain any information on annual targets. Moreover, OGAC’s annual reports to Congress for fiscal years 2009 and 2010 do not compare annual results with annual targets. According to the 2008 Leadership Act, these reports are to include an assessment of progress toward the achievement of annual goals and, if annual goals are not being met, the reasons for such failures. In addition, GPRA calls for annual performance reports to compare results with previously established targets. State-USAID’s and CDC’s annual performance documents present some information on PEPFAR targets and results (see fig. 3). The State-USAID APR/APP cites two targets for treatment and care programs for fiscal year 2010. CDC’s fiscal year 2010 performance report and fiscal year 2012 performance plan cite four fiscal year targets—two for prevention, and one each for treatment and care programs. Both agencies’ performance documents compare PEPFAR 2010 results with targets set for the same year and rate PEPFAR’s performance against those targets. 
For example, the documents report that PEPFAR exceeded its 2010 target for number of individuals on antiretroviral treatment but did not meet its target for number of individuals receiving care and support services. The State-USAID APR/APP states that the reason for the shortfall is being evaluated, while CDC’s fiscal year 2010 performance report and fiscal year 2012 performance plan state that trend analysis shows constant progress in expanding care with significant increases each year. In addition, CDC reports that PEPFAR exceeded its 2010 targets for number of pregnant women receiving counseling and testing and number of pregnant women receiving antiretrovirals. For the 2010 PEPFAR prevention target reported in the State-USAID APR/APP, the document states that data are not available for the indicator. Further, the document states that, because an infection averted is a nonevent, this estimate needs to be modeled based on surveillance reports and that the estimate of impact through 2010 is expected to be available in 2012 at the earliest. OGAC has not publicly provided information on efforts to verify and validate reported performance data, as GPRA practices call for. However, State-USAID’s and CDC’s annual performance documents cite OGAC efforts to verify and validate some PEPFAR performance data. Although OGAC internal guidance summarizes PEPFAR country teams’ and OGAC’s roles in verifying and validating reported data, OGAC’s two most recent PEPFAR operational plans and annual reports to Congress, covering fiscal years 2009 and 2010, contain no information on these efforts. The State-USAID APR/APP states that the results data reported for the two PEPFAR indicators are corroborated with data from other sources. The document also notes that OGAC expects to report the estimated number of HIV infections averted using a U.S. Census Bureau model.
CDC’s fiscal year 2010 performance report and fiscal year 2012 performance plan identify PEPFAR annual program results data as the source of the results they report, noting that OGAC manages and validates results data at the headquarters level. Moreover, despite the data reliability weaknesses noted in USAID OIG reviews and CDC’s treatment program data quality pilot project, OGAC’s, USAID’s, and CDC’s performance reports do not contain information on these weaknesses or on steps taken to address them. Credible performance information is essential for accurately assessing agencies’ progress toward the achievement of their goals and, in cases where goals are not met, identifying opportunities for improvement or whether goals need to be adjusted. As we have reported previously, without such information, and absent strategies to address identified limitations, Congress and other decision makers cannot assess the validity and reliability of reported performance information. PEPFAR’s commitment to transparent reporting of program results, clearly stated in its 5-year strategy, is also reflected in OGAC planning, reporting, and indicator guidance to PEPFAR country teams. In addition, OGAC, USAID, and CDC procedures for program performance planning and reporting are intended to help a broad range of stakeholders—including PEPFAR implementing agency headquarters and country team officials, partner country governments, and Congress—manage and oversee PEPFAR programs and demonstrate the U.S. government’s contribution to the global fight against HIV/AIDS. OGAC, USAID, and CDC performance plans and reports serve as key sources of public information on their efforts to monitor PEPFAR program performance. However, OGAC can improve its annual performance planning and reporting. First, by discussing annual results alongside established targets in its annual report to Congress, OGAC would provide important context for understanding PEPFAR’s annual achievements and areas needing attention.
Second, by providing information on its own and implementing agencies’ efforts to ensure the quality of their performance data, OGAC would give decision makers greater insight into the quality and value of the reported performance information. In accordance with requirements and practices set forth in the 2008 Leadership Act and GPRA, and to improve transparency and accountability, we recommend that the Secretary of State direct the U.S. Global AIDS Coordinator to modify the annual report to Congress on PEPFAR performance in the following two ways: (1) include comparisons of annual PEPFAR results with previously established annual targets and (2) include information on efforts to verify and validate PEPFAR performance data and address data limitations. We provided a draft of this report to State, USAID, and HHS for comment. Responding jointly with HHS and USAID, OGAC provided written comments (see app. V for a copy of these comments). OGAC agreed with our second recommendation to include in PEPFAR’s annual report to Congress information on efforts to verify and validate PEPFAR performance data and address data limitations, and stated that PEPFAR will provide this information in future annual reports and on its Web site. Citing the need to consider various related issues and their consequences in consultation with Congress and other stakeholders, OGAC partially agreed with our first recommendation to include in PEPFAR’s annual report to Congress comparisons of annual PEPFAR results with previously established targets, consistent with a 2008 Leadership Act requirement and a key GPRA practice. OGAC’s comments suggested that specific action in response to this recommendation would be contingent on the outcome of these discussions. OGAC also provided additional background information on PEPFAR indicators and data validation efforts. Finally, OGAC, in coordination with HHS and USAID, provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the Secretary of State, the Office of the U.S. Global AIDS Coordinator, USAID Office of HIV/AIDS, HHS Office of Global Affairs, CDC Division of Global HIV/AIDS, and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. In response to directives in the Consolidated Appropriations Act of 2008 and the Tom Lantos and Henry J. Hyde United States Global Leadership Against HIV/AIDS, Tuberculosis, and Malaria Reauthorization Act of 2008 (2008 Leadership Act) to review global HIV/AIDS program monitoring, this report (1) describes the Office of the U.S. Global AIDS Coordinator’s (OGAC), U.S. Agency for International Development’s (USAID), and the Centers for Disease Control and Prevention’s (CDC) key procedures for planning and reporting on the President’s Emergency Plan for AIDS Relief (PEPFAR) program performance and (2) examines published PEPFAR performance plans and reports. To describe OGAC, USAID, and CDC procedures for planning for, and reporting on, PEPFAR program performance, we reviewed PEPFAR and agency-specific guidance documents such as PEPFAR country operational plan guidance for fiscal years 2009 and 2010, Next Generation Indicators guidance, and semi-annual and annual program results guidance; USAID’s Automated Directives System guidance; and CDC’s quarterly program measures guidance. We also reviewed documents provided by OGAC, USAID, and CDC to describe their organizational structures and procedures, and we interviewed OGAC and USAID officials in Washington, D.C., as well as CDC officials in Atlanta, Georgia. 
To categorize USAID Office of Inspector General (OIG) audit report recommendations related to program performance planning and reporting, we identified 24 USAID OIG reports from fiscal years 2008 through 2011 published on USAID’s Web site. We also interviewed cognizant USAID OIG officials in Washington, D.C., and two regional offices in Africa (Pretoria, South Africa, and Dakar, Senegal) to gain additional information on past and current USAID OIG audit work on PEPFAR. We identified the countries and programs covered by each report and found that the 24 reports covered prevention, treatment, and care programs in 19 PEPFAR countries: Botswana, Cambodia, Côte d’Ivoire (two reports), Dominican Republic, Ethiopia, Ghana, Guyana, Haiti, Kenya (two reports), Mozambique (two reports), Namibia, Nigeria, Rwanda, South Africa, Tanzania, Uganda, Vietnam, Zambia (two reports), and Zimbabwe. In addition, one USAID OIG report reviewed USAID’s implementation of PEPFAR’s New Partners Initiative. We identified the recommendations in these reports and entered this information into a spreadsheet database. To identify and describe types of performance management-related themes, we utilized the Joint United Nations Programme on HIV/AIDS (UNAIDS) 12 components of a national HIV monitoring and evaluation system as categories. (See app. II for a list of these categories and their definitions.) Two analysts independently assigned each recommendation to not more than two of these categories. The two analysts then met to discuss the results of their analysis; in cases where the analysts’ categorizations differed, the analysts discussed and came to agreement on final categories. We determined that 74 recommendations addressed one category, and 56 addressed two of the categories—totaling 130 recommendations. 
We also determined that 43 recommendations—related, for example, to disposal of expired medications and to requirements for USAID branding and marking—did not fall into any of the categories. Furthermore, three of the 12 categories—national multisectoral monitoring and evaluation plan; annual costed national monitoring and evaluation workplan; and advocacy, communication, and culture for monitoring and evaluation—were not used to categorize any of the recommendations. To determine the extent to which USAID has taken steps to implement the recommendations, we interviewed cognizant USAID OIG officials in Washington, D.C., to gain understanding of recommendation tracking, and we analyzed data provided by USAID specifying dates for final action, target dates for final action, and target dates for management decisions. To examine published PEPFAR performance plans and reports and the extent to which they adhere to established practices, we identified OGAC’s, USAID’s, and CDC’s most recent publicly available annual performance plans and reports: for OGAC, the PEPFAR annual operational plans and annual reports to Congress for fiscal years 2009 and 2010; for USAID, the “Foreign Operations FY 2010 Performance Report, FY 2012 Performance Plan” that it issued with the Department of State as part of their joint congressional budget justification for fiscal year 2012; and for CDC, the “Fiscal Year 2012 Justification of Estimates for Appropriation Committees” and “FY 2012 Online Performance Appendix.” We systematically reviewed these documents using a matrix with a series of questions about key performance management practices, as defined by the 2008 Leadership Act, the Government Performance and Results Act of 1993, and previous GAO work. We also interviewed OGAC, USAID, and CDC officials in Washington, D.C., and Atlanta, Georgia, regarding the information contained in these documents and the procedures they followed to produce them. 
We conducted this performance audit from October 2010 to July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To identify and describe types of performance management-related themes in our analysis of USAID OIG report recommendations (see app. I), we used as categories 12 components of a national HIV monitoring and evaluation system established by UNAIDS. Table 1 provides a list of these categories and their descriptions. According to OGAC’s Next Generation Indicators guidance and OGAC officials, PEPFAR country and regional teams are to use 32 essential indicators for annual target setting and regular reporting to OGAC. The guidance distinguishes between direct and national indicators. National indicators are intended to measure the collective achievements of all contributors (i.e., host country government, donors, and civil society) to a program or project, while direct indicators are intended to measure results attributable to PEPFAR alone. Table 2 provides a list of these indicators. OGAC provides information on PEPFAR program results in its annual reports to Congress, which are typically published in February. USAID reports on PEPFAR program results in the “Foreign Operations FY 2010 Performance Report FY 2012 Performance Plan” that it issued with the Department of State as part of their joint congressional budget justification for fiscal year 2012 (State-USAID APR/APP). 
CDC reports on PEPFAR program results in its “Fiscal Year 2012 Justification of Estimates for Appropriation Committees” and “FY 2012 Online Performance Appendix.” The indicators used to report on PEPFAR results are a subset of the 32 essential reported indicators listed in appendix III. Table 3 summarizes PEPFAR results for fiscal year 2010 reported by OGAC, USAID, and CDC in their most recent performance reports. In addition to the contact named above, Audrey Solis (Assistant Director), Todd M. Anderson, David Dornisch, Lorraine Ettaro, Brian Hackney, Fang He, Reid Lowe, Grace Lui, and Reina Nuñez made key contributions to this report. Lisa Helmer and Keesha Egebrecht provided technical assistance. Global Health: Trends in U.S. Spending for Global HIV/AIDS and Other Health Assistance in Fiscal Years 2001-2008. GAO-11-64. Washington, D.C.: October 8, 2010. President’s Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries’ HIV/AIDS Strategies and Promote Partner Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010. President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4, 2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. 
Washington, D.C.: June 10, 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided Under U.S. Emergency Plan is Limited. GAO-05-133. Washington, D.C.: January 11, 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: June 12, 2004. Global Health: Global Fund to Fight AIDS, TB, and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.
U.S. assistance through the President's Emergency Plan for AIDS Relief (PEPFAR) has helped provide treatment, care, and prevention services overseas to millions affected by HIV/AIDS. In 2008, Congress reauthorized PEPFAR with the Tom Lantos and Henry J. Hyde United States Global Leadership Against HIV/AIDS, Tuberculosis, and Malaria Reauthorization Act of 2008 (2008 Leadership Act). The act requires the Department of State's Office of the U.S. Global AIDS Coordinator (OGAC) to report to Congress annually on PEPFAR performance. The U.S. Agency for International Development (USAID) and the Health and Human Services (HHS) Centers for Disease Control and Prevention (CDC) also report on PEPFAR program performance. Responding to legislative directives, GAO (1) described key procedures for planning and reporting on PEPFAR performance and (2) examined published PEPFAR performance plans and reports. GAO analyzed performance management documents and interviewed officials at OGAC, USAID, and CDC. Officials in several offices and divisions in OGAC, USAID, and CDC coordinate and manage PEPFAR program planning and reporting procedures at headquarters and in PEPFAR countries and regions. These procedures, which include PEPFAR-wide annual operational planning and periodic results reporting, support internal agency-specific program management as well as provide information for external reporting on PEPFAR results. OGAC, USAID, and CDC publicly issued plans and reports on PEPFAR performance in recent years consistent with 2008 Leadership Act requirements and GPRA practices; however, two key elements are lacking. First, although OGAC has internally specified annual performance targets, its most recent annual reports to Congress did not identify these targets or compare annual results with them. 
According to the 2008 Leadership Act, OGAC's annual reports on PEPFAR program results must include an assessment of progress toward annual goals and reasons for any failure to meet these goals. In addition, the Government Performance and Results Act (GPRA) of 1993 calls for federal agency performance reports to compare program results with established targets. Performance documents published by USAID, jointly with State, and by CDC report program targets and results for two and four PEPFAR indicators, respectively. Second, OGAC's most recently published performance plans and reports do not provide information on efforts to validate and verify reported data, while USAID's and CDC's published performance documents cite such efforts by OGAC. In addition, none of the plans or reports refers to noted data reliability weaknesses or efforts to address these weaknesses. GPRA and prior GAO work emphasize the importance of providing information in public performance documents on data verification and other efforts to address identified weaknesses. GAO recommends that OGAC include in its annual report to Congress (1) comparisons of annual PEPFAR results with established targets and (2) information on efforts to verify and validate PEPFAR performance data and address data limitations. OGAC partially agreed with the first recommendation, pending discussions with stakeholders about implementation issues and consequences, and agreed with the second recommendation.
The F/A-18 is a modern, first-line fighter and attack aircraft used by both the Navy and the Marine Corps. Each F/A-18 is periodically inspected to determine whether it needs to be sent to a depot for maintenance and repairs that cannot be performed at the squadron level. The depot maintenance specification for the F/A-18 is called the Modification, Corrosion, and Paint Program (MCAPP) and consists of inspections to identify needed repairs, the actual repairs, and the incorporation of needed aircraft modifications. Prior to fiscal year 1994, the Navy assigned all F/A-18 MCAPP work to the North Island depot. In an effort to minimize costs, the Navy decided in 1992 to subject its F/A-18 MCAPP maintenance to public/private competition. The competition package consisted of an expected quantity of 72 MCAPPs with minimum and maximum quantities of 36 and 90 MCAPPs in the first year, and options to continue the contract for up to 4 additional years. The minimum, maximum, and expected quantities were lower for each successive option year, and the estimated value of the contract if all options were exercised was about $61 million. North Island, Ogden, and two private contractors submitted bids. Ogden’s was substantially lower than the others, and the Navy cost-evaluation team generally found the bid to be well-supported. Ogden was awarded the contract on August 24, 1993, and started work on the first F/A-18 MCAPP on December 8, 1993. The Air Force subsequently was informed that it would only get 36 MCAPPs, the minimum number in the competition package, because the Navy wanted to maintain core capability at North Island. Although the Air Force attempted to have Ogden assigned as the source of repair designation for the F/A-18, the Navy, with the approval of the Office of the Secretary of Defense (OSD), continued F/A-18 aircraft maintenance at North Island. Thus, the MCAPP workload was split between the Navy depot and the Air Force depot.
Between August 1993, when Ogden was awarded the F/A-18 contract, and November 1994, when the last F/A-18 was inducted at Ogden, North Island inducted 34 F/A-18s and Ogden 36. Navy core analysis data indicates the core capability for the F/A-18 is 18 aircraft. Following the competition, the Navy reengineered its work processes at North Island and reduced its cost of the F/A-18 repair work. In September 1994, the Navy began evaluating whether to exercise its option for the second year of the F/A-18 contract. North Island submitted a proposal to give it the F/A-18 workload that otherwise would have continued at Ogden. Since the Navy was planning to add additional maintenance requirements to the MCAPP repair specification, the contracting officer asked Ogden to provide a bid for the additional work. According to Ogden officials, they were not told that this bid was to support a competitive comparison with North Island. Title 10 U.S.C. 2469 requires DOD to use competitive, merit-based procedures before depot-level work valued at $3 million or more can be moved from one DOD depot to another or from a DOD depot to the private sector. In response to this requirement the Navy, in December 1994, prepared an analysis that compared the estimated quality, schedule, and cost of MCAPP work at Ogden and North Island. The Navy concluded that quality was the same at both activities but that North Island could perform the work in fewer days and at less cost to the government. As a result, the Navy decided not to exercise its option for the second year at Ogden, but rather to consolidate all F/A-18 MCAPP work at North Island. The Navy’s decision to consolidate F/A-18 work was based on its analysis of F/A-18 MCAPP schedule and cost differences between Ogden and North Island. In evaluating schedule differences, the Navy compared the estimated days required by each activity to complete an MCAPP. 
In evaluating cost differences, it compared the estimated total cost to the government for each activity to complete an MCAPP by estimating the labor hours, the labor-hour rate, and the resulting total cost at each activity. The cost analysis included labor and overhead costs but excluded direct material costs, which the Navy stated should be the same at both activities. The cost analysis also excluded airframe modification costs performed concurrently with MCAPP work because modifications vary considerably from airframe to airframe. Details of the Navy's December 1994 analysis, including the adjustments made to Ogden and North Island data, follow. Also, appendix II summarizes the cost comparison made by the Navy. The Navy attempts to minimize the time each aircraft is out of service for depot maintenance because of readiness concerns and to help minimize the number of aircraft required for the maintenance pipeline. In its comparison of the time Ogden and North Island took to complete an F/A-18 MCAPP, the Navy used the number of repair days bid by each activity. Ogden had bid 143 days to complete an MCAPP II and North Island 110 days. Based on this comparison, the Navy concluded that North Island could complete an MCAPP in less time than Ogden. The Navy made no adjustments to the repair days bid by each depot. However, it noted that Ogden's average repair days on completed F/A-18s were greater than its bid, while North Island's average repair days were less than its bid. Ogden delivered only the first aircraft ahead of schedule, with the next 15 delivered between 17 and 217 days late. Ogden officials estimated that the remaining 20 aircraft would be delivered between 35 and 298 days late. Navy officials acknowledged that the Navy caused some Ogden schedule delays through such actions as late delivery of parts and late approval of funding, but did not quantify the extent of these delays. 
In its review of North Island production, the Navy developed turnaround data for North Island using only the last six F/A-18 MCAPPs. This data supported a turnaround time of 107 days for those aircraft. However, a review of production schedules for all F/A-18 MCAPPs completed at North Island during fiscal year 1994 revealed that the average turnaround time over that period was 269 days—almost 2-1/2 times longer than the 110-day bid submitted by North Island. Navy officials noted that process improvements at North Island had significantly reduced the F/A-18 turnaround time, and this improvement was demonstrated by the production turnaround time achieved for the six MCAPPs used as a basis for the Navy analysis. While Ogden's production turnaround time was also significantly longer than its bid, Ogden officials gave us data showing that the depot's late delivery of 15 of the first 16 aircraft was caused primarily by a number of Navy actions. Air Force officials cited approval of engineering repair proposals as the most frequent reason for work delays. For repairs not covered by maintenance manuals provided to the Air Force, Ogden's engineers must design proposed repairs and submit them for approval under the Rapid Response Repair (3R) System to the F/A-18 Cognizant Field Activity at the North Island Naval Aviation Depot. This approval is required before proposed repairs can be made. Ogden officials reported that work delays occurred because it often took several weeks to obtain required technical information from the Navy's Cognizant Field Activity before a repair could be designed and, once designed, it took too long to get Navy approval. Usually, proposed repairs had to be submitted multiple times before being approved. Data provided by Ogden showed that it had experienced delays of 11 to 90 days in obtaining 3R approval on 18 of the 36 aircraft inducted as of March 1995. 
North Island officials said that the time they took to respond to (but not necessarily approve) Ogden's 3R requests met or was less than the time called for in the contract and that the average response time was 2.7 days. They also noted that the response time in support of Ogden was better than the response time required to process 3Rs for the North Island depot. We noted that 3R response times do not reflect the time required to obtain the technical data needed to prepare the proposal or the number of times the proposal is resubmitted before being approved. Late funding by the Navy was the second most frequent reason Ogden cited for work delays. Before applying an engineering modification called for by the contract, the Navy F/A-18 program office had to approve the expenditure of procurement funds for this purpose. According to Ogden officials, work on 28 of the 36 aircraft was delayed from 5 to 259 days because of late funding. F/A-18 Program Office officials stated that late funding was a problem caused by an archaic funding system. This funding system was not used for similar work by the Navy's North Island depot. Data provided by the Air Force indicated that late receipt of replacement parts was the third most significant cause of work delays at Ogden. Contractually, Ogden must obtain replacement parts from the Navy supply system; however, the system was frequently unable to provide items when Ogden needed them. Aircraft processing records show that 17 of 36 aircraft experienced work delays because replacement parts were not available from the Navy supply system when needed. Delays caused by late replacement parts ranged from 2 to 52 days. Navy officials acknowledged that F/A-18 spare parts shortages are a Navy-wide problem, but they said that since North Island is the approved overhaul depot for F/A-18 components, parts shortages had less of an impact on North Island's F/A-18 delivery schedule. 
Ogden officials noted that they had the capability to repair some of the parts had they been allowed to do so. Ogden incurred other significant delays because the Navy required the reinspection of certain aircraft using a procedure that included the removal of wings from some completed aircraft. Nine aircraft were delayed from 14 to 30 days—a total of 211 days—because the Navy required Ogden to remove the wings and reinspect the wing attach lugs for possible damage, after an Ogden crew used an unapproved mechanical process to remove an anticorrosive compound from the wing lugs on one of the earlier aircraft. Reinspection of the aircraft in question did not find damage. All measurements were within the specifications outlined by the Navy for surface roughness and lug thickness. Three other aircraft that had been worked on by the crew using the unapproved procedure were also reinspected and showed no evidence that an unauthorized machine process had been used or that the wing lugs were out of tolerance. Although no damage was found, the Navy required Ogden to inspect five additional aircraft, even though these aircraft had not been worked on by the same crew. These inspections produced no evidence of the unauthorized machine process and only one out-of-tolerance condition concerning surface roughness. The cause of that discrepancy, a small scratch, could not be determined by either the Navy or Ogden. Air Force and Defense Contract Management Command (DCMC) officials questioned the need to require the removal of wings on completed aircraft. The Navy believes that requiring Ogden to remove the wings and reinspect the lugs was justified because the area involved was a flight critical structure from an aircraft safety standpoint. 
According to Ogden officials, various work delays caused by the Navy prompted over 100 letters to the Navy contracting officer asking for corrective action on the problems causing the delays and for schedule extensions resulting from prior delays. The Navy contracting officer did not respond to any of the letters, and only after the F/A-18 MCAPP contract was terminated did the Navy allow DCMC to act on Ogden's requests for schedule extensions. According to DCMC officials, on other programs they are routinely allowed to modify schedule delivery dates when conditions are appropriate. These officials noted that, faced with similar delays, a private contractor might have stopped work. Ogden officials attempted to analyze the collective impact of various delays on the depot's ability to repair aircraft. They noted that various delays were ongoing concurrently, but their analysis revealed that one aircraft experienced delays attributed to the Navy totaling 546 days. After accounting for overlap among the various conditions, Air Force officials concluded that work was delayed 82 days while six 3Rs were being processed, 259 days because funding was approved late, and 205 days for other reasons such as late receipt of replacement parts and a faulty engineering repair solution. Navy officials dispute that delays were caused by the length of 3R processing times and noted that delays due to the lack of spare parts in critical supply were also experienced across the entire Navy. The Navy's first step in analyzing F/A-18 MCAPP costs at Ogden and North Island was to compare MCAPP labor-hour requirements. However, such a comparison is difficult to make for several reasons. First, the two activities used different MCAPP repair specifications, which affect the labor hours required to perform the work. After the competitive contract was awarded to Ogden, the F/A-18 repair specification was changed to incorporate additional inspection requirements. 
The extra inspections normally identify additional repair tasks, which also require more labor hours to complete. North Island has used the revised repair specification, called MCAPP II, since May 1994, while Ogden continued to use the original MCAPP specification, as called for by the terms of the contract. We noted that during fiscal year 1994, the Navy completed 82 MCAPPs using the same specification as that used by Ogden and that the labor hours required to complete these aircraft averaged 7,299 labor hours. F/A-18s inducted at North Island after December 18, 1993, the date when the first Ogden F/A-18 was inducted, averaged 6,819 labor hours. Navy officials stated that process improvements to reduce the labor hours required at North Island to complete an F/A-18 MCAPP had only been completed in time to fully benefit F/A-18 MCAPP II aircraft, which were first inducted in April 1994. We determined that although the MCAPP II specification was expected to require more labor hours than MCAPP I, the average labor hours for the 6 MCAPP II aircraft completed before the time of the Navy's analysis was 5,684—a significant reduction from the historical average time required for MCAPP Is at North Island. The Navy attributed these labor-hour reductions to increased efficiencies at the North Island depot—primarily because it reduced the number of components that were overhauled concurrently with the MCAPP. Second, differences in the number of carrier-based and land-based F/A-18s repaired by each activity also complicate a labor-hour comparison. Navy officials stated that this difference is important because the F/A-18 repair specification makes a distinction between carrier-based and land-based F/A-18s. Specifically, the repair specification requires more inspections for carrier-based F/A-18s because they normally are subjected to a harsher environment and more physical stress due to salt water, catapult launches, and arrested landings. 
According to the Navy, the additional inspections normally result in more repair work. At the time of the Navy’s analysis, North Island had recently completed six carrier-based F/A-18s while Ogden had completed two carrier-based and five land-based F/A-18s. The Navy did not use data from the carrier-based aircraft repaired at Ogden. Third, differences in F/A-18 component repair procedures at each activity also complicate a labor-hour comparison between the two activities. Under terms of the Ogden contract, most components requiring repair are to be exchanged for replacement components provided by the Navy for installation on the aircraft. At North Island, many components requiring repair are to be repaired concurrently with the aircraft and then reinstalled on the aircraft. The additional labor hours used by North Island for component repairs are included in the total labor hours charged to each aircraft. North Island officials told us that the biggest factor influencing its process improvement was that the depot significantly reduced the number of components that were overhauled concurrently with MCAPP. Rather than routinely overhauling components that had been removed from aircraft being inducted for an MCAPP, revised procedures called for only overhauling components if they did not meet technical requirements. Fourth, there are differences in the amount of work required on each aircraft. Each aircraft is unique and the amount of needed repairs identified during the inspections varies considerably from aircraft to aircraft. The use of averages tends to normalize these variations in work content. However, the averages used in the Navy’s analysis were based on small quantities of completed aircraft at both depots. As a result, the averages may not have normalized labor-hour differences caused by differences in the repairs required on each aircraft. 
This problem probably affected the analysis of the Ogden hours even more than the North Island hours, since Ogden had not advanced far enough along in the F/A-18 repair program to reach a normalized production level. Finally, there are other differences between the activities that affect labor hours used for MCAPP work and that also complicate a labor-hour comparison. For example, there are differences in (1) the cost accounting systems used to collect labor-hour expenditures, (2) operation and administration procedures for work performed, and (3) the numbers of F/A-18 MCAPPs completed in the past, which affect the comparability of performance data and the potential for future improvement. The Navy made several adjustments to the historical data used in its analysis. Through the adjustments, the Navy estimated the labor hours required by each depot to perform an MCAPP II on a land-based F/A-18 with no concurrent repair of components. These adjustments increased Ogden's labor hours and reduced North Island's labor hours below Ogden's. The Navy did not make adjustments to account for known factors causing labor-hour increases at Ogden, such as delays caused by the nonavailability of parts, time awaiting approval of proposed maintenance actions, a Navy-required wing removal and reinspection, front-end training time, or increases due to the type of contract administration used for the Ogden repair work. The Navy also did not recognize Ogden's potential for reducing labor hours as additional aircraft were produced or consider basing its land-based versus carrier-based analysis on Ogden aircraft results rather than North Island's, even though Ogden had produced both types. As the starting point for Ogden, the analysis used the 3,069 average labor hours approved for payment by the contract administrator for the 5 land-based F/A-18s completed by Ogden at the time of the analysis. 
Actual labor-hour expenditures at Ogden were not used because the work at Ogden was being administered much like a contract with a private company. As a result, the Navy said it only had access to the labor hours approved for payment by the contract administrator. The Navy made three adjustments to the Ogden average. First, the contract administrator had made a decision in November 1994 to approve 12 to 17 percent additional labor hours for personal, fatigue, and delay time associated with certain work at Ogden. Based on this decision, the Navy adjusted some of Ogden's proposed labor hours using a 12-percent factor, which added 153 hours. In January 1995, Ogden formally requested compensation for additional hours to reflect personal, fatigue, and delay time using a 16.7-percent factor. The Navy made a second adjustment to add the labor hours required for the additional MCAPP II inspection requirements. In September 1994, the Navy asked Ogden to submit a bid for these additional requirements, and in response, Ogden submitted a proposal for 228 additional labor hours. Based on this proposal, the Navy added 228 hours to Ogden's labor-hour estimate. The third adjustment made to Ogden's labor hours added 480 hours estimated for the additional repair work that would result from the additional MCAPP II inspections. When Ogden bid the 228 hours for MCAPP II inspections, the activity did not submit a bid for the needed repair work that would be identified during the inspections. The F/A-18 field engineering activity that developed the MCAPP II specification estimated that 3 labor hours of repair work would result from each additional inspection hour. Use of this ratio would have added 864 labor hours to the Ogden average. Navy officials stated that to be conservative in making this adjustment, they used a ratio of 2.1 repair hours for each inspection hour. 
This ratio was based on the approved labor hours for inspections and the resulting repair work on Ogden's five completed land-based F/A-18s. While the second and third adjustments appear logical, we could not determine whether Ogden would have needed all of the additional time related to these adjustments. As previously discussed, North Island reduced both its turnaround time and labor hours for MCAPP II aircraft. We did not analyze the two specifications to determine whether there were changes that might have reduced the production time at Ogden, as they had at North Island. The Navy, as previously noted, did not adjust Ogden's hours to reflect the improved performance normally expected from the learning curve as a depot gains experience with a new workload. DCAA officials told us learning curve analyses are routine in their normal bid proposal evaluations. Learning curve theory states that, for repetitive tasks, as quantities double, the time to perform a task declines at a relatively constant percentage. Over time, the quantities required to reach a doubling can become very large, causing an apparent, significant slowing of the rate of learning. On the F/A-18 MCAPP, North Island would have already experienced a significant amount of learning due to the quantities performed. Ogden, on the other hand, having just begun the program, should have been expected to experience significant learning (decreases in hours) if the program had continued. According to DCAA officials, in projecting future labor-hour requirements at Ogden, use of a learning curve would have been appropriate since Ogden's hours for its first few aircraft were being compared with those of North Island, which already had many years of performance experience. Navy officials stated that the data on approved labor hours provided by DCMC provided no indication of a learning curve because so few aircraft had been completed. 
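The learning curve relationship described above can be sketched briefly. In the sketch below, the 85-percent slope and the 4,000 starting hours are hypothetical illustration values, not figures from either depot's F/A-18 experience.

```python
import math

def unit_hours(first_unit_hours, unit_number, slope=0.85):
    """Unit learning curve: each doubling of cumulative quantity
    multiplies the hours per unit by the slope (here, 85 percent)."""
    b = math.log(slope) / math.log(2)
    return first_unit_hours * unit_number ** b

# Hypothetical workload starting at 4,000 hours on unit 1.
for n in (1, 2, 4, 8, 16, 32):
    print(f"unit {n:>2}: {unit_hours(4000, n):,.0f} hours")
```

Each doubling cuts unit hours by the same 15 percent, but it takes 16 more units to go from unit 16 to unit 32 than it took to go from unit 1 to unit 2, which is why learning appears to slow at high quantities. On such a curve, North Island would already sit far out on the flat portion, while Ogden, just beginning the program, would still be on the steep early portion.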
As the starting point for North Island, the Navy used the 5,684 average labor hours expended on the last 6 completed F/A-18s at North Island. All six F/A-18s were carrier-based aircraft, and all were repaired using the MCAPP II specification. The labor-hour average for these aircraft represents a significant decrease in the historical labor hours expended by North Island for MCAPP work. For example, in fiscal year 1994, North Island completed 82 MCAPPs at an average of 7,299 labor hours. The 5,684 labor-hour average for the last 6 completed aircraft represents an average decrease of 1,618 labor hours, or 22 percent less than each completed MCAPP I, even though the MCAPP II specifications require additional hours for inspection and repairs. North Island officials attributed labor-hour reductions to process improvements identified as a result of the public-private competition for F/A-18 MCAPPs. After the competition, North Island made a detailed review of its F/A-18 repair operations with a view to reducing costs, including visits to Ogden to review that depot’s processes and procedures. Although North Island lost the competition, the changes were incorporated into the depot’s operations for the F/A-18 core aircraft that were not included in the competition package. Changes that reduced labor and processing time included establishing central approval authority for recommended repair tasks, conducting daily progress meetings between the managers and artisans at the site of each aircraft in the plant, reducing component repair time by only repairing the items needed for safe operation instead of completely overhauling the entire component, and moving work crews to each aircraft as work progressed instead of physically moving the aircraft to different work stations. North Island data indicated that repair costs for the six MCAPPs used as a basis for the Navy’s analysis were 37 percent below previous F/A-18 MCAPP costs at this depot. 
The Navy made two adjustments to the North Island 5,684 labor-hour average. First, it reduced the average by 493 hours to account for the labor hours used to repair components. Ogden replaces broken components but does not repair them. The adjustment was less than the average labor hours historically used for component repairs. However, the Navy stated that North Island had adopted new repair practices that reduced component repairs. We noted that the Ogden labor hours included some off-equipment component repair work, but these hours were not separately identified for purposes of the Navy analysis. Navy officials said they do not classify this work as depot-level repair; furthermore, they noted that Ogden had not been approved by the Navy to do any depot-level component rework. The second adjustment was made because Ogden's five aircraft used in the comparison were land-based and North Island's six aircraft were carrier-based. The Navy stated that historical data at North Island showed that land-based F/A-18 MCAPPs on average require 27.5 percent fewer labor hours than carrier-based F/A-18s because of fewer corrosion and structure problems. To estimate the labor hours that North Island would have used if all aircraft had been land-based, the Navy reduced the average by 27.5 percent, or 1,430 labor hours. To differentiate between land-based and carrier-based aircraft, the Navy used the number of catapult launches as a measure. Aircraft with at least 200 catapult launches were said to be carrier-based and those with fewer were said to be land-based. We identified several factors that call into question the appropriateness of the Navy's large reduction of North Island labor hours based upon its carrier- versus land-based analysis. For example, Ogden was operating under different instructions from the Navy regarding how to define a carrier-based aircraft. 
Thus, Ogden incurred additional labor hours for inspections using criteria defined in the MCAPP inspection procedures even though the aircraft would not have qualified as a carrier-based aircraft using the 200 catapult launch criteria. Additionally, the 27.5-percent reduction was not well-supported based on an analysis of North Island data. We also noted that at the time the Navy collected data for its analysis, Ogden had already repaired several aircraft that had over 200 catapult launches. The Ogden data showed a 7-percent increase in hours for carrier-based aircraft. Further, in isolating the relative influence of various factors on the number of labor hours required to perform an MCAPP, we found that other factors such as number of flying hours and time since previous major repair appeared to be much more statistically meaningful indicators of how many hours would be required to conduct an MCAPP. The Navy did not ask DCAA to review the proposed labor hours or to determine if its adjustments to those hours were supported. Navy officials noted that this was consistent with the process used in the original competition in which DCAA assessed rates and Naval Air Systems Command assessed labor hours. However, we noted that DCAA’s audit reports of Ogden and North Island’s original bids included evaluations of both rates and hours. DCAA was responsible for ensuring that bids prepared by public depots included all relevant costs. With labor-hour estimates determined, the Navy then estimated the rates, or cost per hour, to perform MCAPP work at Ogden and at North Island. To do this, the Navy asked DCAA to review actual F/A-18 costs at both depots and estimate actual rates for fiscal year 1995 work. The Navy requested DCAA to complete its review and report the results in less than 1 week. Although DCAA complied with the request, the resulting reports were highly qualified. 
DCAA reported that its review was limited to verifying reported actual cost information and making an estimate of actual costs for the next year. DCAA reported that it did not have sufficient time to perform the procedures necessary to comply with generally accepted government auditing standards. DCAA officials stated that in at least one case their analysis was based on incomplete data. DCAA initially reported that Ogden’s expected actual hourly rate for fiscal year 1995 for F/A-18 MCAPP work was $81.00. After considering additional information provided by Ogden officials, DCAA revised its estimate to $68.83. In its analysis, the Navy used the $68.83 rate for Ogden with no adjustments. DCAA officials later reported that the Ogden rate should have been $61.68. They stated that the initial rate estimate did not fully discount the impacts of first-year training and the Navy requirement to perform wing removals and reinspection on several aircraft. DCAA reported that North Island’s expected actual hourly rate for fiscal year 1995 for F/A-18 MCAPP work was $67.89. In its analysis, the Navy made several adjustments that reduced the DCAA estimated rate to $62.86, a $5.03 reduction. Navy officials stated that most of the reduction was made to provide for differences between Ogden and North Island in the accounting of certain F/A-18 material costs. Under the contract, some F/A-18 material is provided to Ogden at no cost as government-furnished material. This same material is included in North Island’s costs. The adjustments account for these differences as well as for a minor error in the accounting for building depreciation at North Island. 
In estimating rates at Ogden and North Island, the Navy did not fully adjust for extra costs Ogden incurred from: (1) operating under DCMC contract administration rather than a less costly interservice support agreement, (2) first-year training because the F/A-18 workload was new, (3) Navy delays in providing spare parts and approving maintenance procedures, or (4) conducting the Navy-required wing removal and reinspection procedure on several aircraft that revealed no quality problems. Navy officials stated that (1) despite the higher cost under DCMC contract management, they had a contract with Ogden that required the use of DCMC contract administrators; (2) adjustments for first-year training and reinspection costs were included in the $68.83 qualified rate estimate provided by DCAA; and (3) Ogden did not incur increased labor cost while awaiting spare parts and that repair approval procedures were timely. To arrive at the estimated cost to the government for MCAPP work at Ogden, the Navy multiplied Ogden’s adjusted average labor hours by the DCAA rate. The result was $270,502. The Navy added $9,000 to account for MCAPP II equipment that the Navy said Ogden would need to perform MCAPP II inspections. The $9,000 was calculated by dividing the $207,000 cost of the machinery by the minimum 23 F/A-18 MCAPP IIs that would be performed in fiscal year 1995. For North Island, the Navy multiplied North Island’s adjusted average labor hours by the adjusted DCAA rate. The result was $236,416, or $34,086 less than Ogden. Although the Navy’s decision to move F/A-18 MCAPP work from Ogden to North Island was based primarily on the cost and schedule differences discussed above, the Navy analysis also noted other costs associated with having MCAPP work performed at two locations. The Navy, with DOD concurrence, is requiring that F/A-18 core repair capability be maintained at a Navy depot. 
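The bottom-line arithmetic of the Navy's comparison can be reconstructed from the figures reported above. The short sketch below uses only numbers cited in this section; the order in which the Navy applied its adjustments and rounding is inferred from the reported totals.

```python
# Ogden: 3,069 approved average hours for 5 land-based MCAPP I aircraft,
# plus the Navy's three adjustments (12-percent personal/fatigue/delay
# allowance, MCAPP II inspection hours, and resulting repair hours).
ogden_hours = 3069 + 153 + 228 + 480    # 480 is roughly 2.1 x 228 inspection hours
ogden_cost = ogden_hours * 68.83        # DCAA rate for Ogden, dollars per hour

# North Island: 5,684 average hours for the last 6 carrier-based MCAPP II
# aircraft, less 493 component-repair hours, less the carrier-versus-land
# adjustment (27.5 percent of the remaining 5,191 hours, reported as 1,430).
north_island_hours = 5684 - 493 - 1430
north_island_cost = north_island_hours * 62.86   # DCAA rate as adjusted by the Navy

mcapp_ii_equipment = 207_000 / 23       # equipment cost spread over 23 MCAPP IIs

print(round(ogden_cost))                                  # 270,502 as reported
print(round(north_island_cost))                           # 236,416 as reported
print(round(ogden_cost) - round(north_island_cost))       # 34,086 difference
print(mcapp_ii_equipment)                                 # 9,000 equipment add-on
```

All four computed values match the figures in the Navy's analysis, which suggests the 27.5-percent adjustment was applied after, not before, the 493-hour component-repair reduction (27.5 percent of 5,191 hours is about 1,430; 27.5 percent of 5,684 hours would be about 1,563).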
Thus, when Ogden won the F/A-18 competition, the Navy did not send all F/A-18 MCAPPs to the Air Force depot. Instead, North Island performed about half of the MCAPPs to maintain a Navy core capability to repair the aircraft. The Navy identified six factors associated with performing F/A-18 work at two depots that increase the total cost of the work. The Navy estimated that these factors add $43,000 to the government’s cost for each F/A-18 MCAPP accomplished at Ogden. According to the Navy, the additional costs are eliminated by consolidating all F/A-18 MCAPP work at one site. We agree there are additional costs to the government when the same work is performed at two depots. As a result of its recognition of the advantages of single-siting depot maintenance workload, in recent years DOD has single-sited numerous depot maintenance workloads that had previously been split among two or more depot activities. Nonetheless, our review indicated that quantifying these costs is difficult, and in most cases, the Navy overestimated the amounts. The six cost factors identified in the Navy’s analysis are discussed below. The Navy estimated that the difference in the days required to complete MCAPP work at Ogden and North Island would cost the government $11,000 in additional depreciation costs for each MCAPP performed by Ogden. This amount was based on Ogden’s bid of 143 days to perform an MCAPP and North Island’s bid of 110 days. As discussed earlier, we believe the Navy’s use of this factor was inappropriate. North Island’s bid reflected a substantial reduction from its yearly average and assumed that recent reductions in turnaround times would be maintained. Ogden’s bid, on the other hand, reflected delays and other factors experienced during its first year that should have been reduced or eliminated in subsequent years. The Navy estimated that engineering support costs provided to Ogden added $8,000 to the cost of each MCAPP. 
However, this is not an additional cost since similar engineering support is required regardless of where the repair work is performed. The Navy estimated that $1,600 in added costs per MCAPP resulted from the Navy having an on-site representative at Ogden to help oversee and monitor work. We noted that the Navy elected to have an on-site representative at Ogden, even though the contract did not require one. Also, it is not clear that all costs associated with this function were added costs to the government since the on-site representatives were from the North Island cognizant field activity and were assigned F/A-18 work regardless of where the work was performed. Travel and per-diem costs were, however, attributable to the Ogden contract. The Navy estimated that the cost of having DCMC administer the contract at Ogden added $15,700 to the cost of each MCAPP. While we did not verify these costs, we agree that if correct, the Navy’s chosen method of contract administration at Ogden was costly. However, the Navy did not have to use DCMC to administer the contract at Ogden. The F/A-18 workload could have been administered at less cost through an interservice support agreement, as called for in the DOD Cost Comparability Handbook. Thus, it was inappropriate in this case to include the DCMC contract administration costs as a differential factor for purposes of the F/A-18 analysis. The Navy estimated that the additional material costs for the Aviation Supply Office to support MCAPP work at two locations was $5,750 for each MCAPP completed by Ogden. We did not verify the Navy’s estimate of the cost. However, we noted that the Air Force and the Navy were negotiating a no-cost contract modification that would have allowed Ogden to use the Air Force supply system for the option years. 
While Ogden would have had to continue to rely on the Aviation Supply Office for reparable components not available through the Air Force system, its reliance on the Navy system should have been significantly reduced. The Navy estimated that the additional cost to fly each F/A-18 from Ogden to North Island was $1,090. We believe that this is not an additional cost because an aircraft must be flown from its squadron to the depot and back regardless of which depot performs the work. Also, F/A-18s from East Coast locations would incur lower costs by flying to Ogden rather than to North Island because Ogden is geographically closer to those locations. Although we could not validate most of the Navy's estimates of specific costs associated with maintaining the F/A-18 workload at two different locations, we recognize that in recent years DOD has identified advantages from eliminating redundancies in its depot maintenance workload capability and has consolidated many depot workloads formerly accomplished in multiple locations at a single site. In general, we have supported such consolidations. The Navy made a 27.5-percent downward adjustment to North Island's labor hours based on limited sample data. Using more current and complete data would have significantly reduced the adjustment. Without this adjustment, the Navy's analysis would have shown North Island's costs to be higher than Ogden's. To determine North Island's MCAPP labor hours, the Navy used North Island's recent experience performing MCAPP IIs on five carrier-based aircraft. These MCAPPs reflected significant labor-hour reductions from historical levels. Ogden's labor hours were based on its experience performing the original MCAPP work on five land-based aircraft. To adjust for any differences between land-based and carrier-based aircraft, the Navy compared labor hours on a sample of land- and carrier-based F/A-18 MCAPPs performed at North Island during the first 6 months of fiscal year 1994.
The sampled MCAPPs were performed before the process improvements at North Island that significantly reduced labor hours and before MCAPP II work began. A comparison of labor-hour costs for all financially completed F/A-18 MCAPPs at North Island in fiscal year 1994 would have reduced the downward adjustment from 27.5 to 14 percent. Using a comparison of the last 6 months of fiscal year 1994, which reflects more of the current MCAPP II work, the downward adjustment would have been even less. To test the basis for the large labor-hour adjustment for carrier-based aircraft, we analyzed the approved labor hours for completing MCAPPs at Ogden for both carrier-based and land-based aircraft. We noted there was only a 7-percent difference. To understand further the relationship between catapult launches and labor hours, we also performed a regression analysis, comparing North Island catapult launches and labor hours, to determine how much of the change in hours is explained by the change in catapult launches. The resulting coefficient of determination was approximately 9 percent, meaning that only 9 percent of the variation in hours is explained by catapult launches. In other words, 91 percent of the variation in hours is related to factors other than the number of catapult launches, such as the number of flying hours and the age of the aircraft. We also performed an additional review of the hours and numbers of catapult launches. That analysis indicated that there is not a strong relationship between the number of catapult launches and the hours required for MCAPP work. We recomputed the Navy's analysis using a 14-percent downward adjustment. As shown in appendix III, the recomputed Navy analysis shows Ogden's cost is $272,900 and North Island's cost is $275,900 for an F/A-18 MCAPP. Navy officials concurred with the analysis using a larger sample size, provided that the sample was based on all labor completed aircraft, not the more inclusive financially completed aircraft.
The Navy officials commented that by using labor completed aircraft the downward adjustment would be 16.7 percent rather than 14 percent, making Ogden's cost slightly higher. However, since labor complete figures do not capture the final total labor hours that are included in financially complete figures, the financially completed measure is more commonly used. Additionally, as previously noted, our analysis of Ogden's labor-hour differential between carrier-based and land-based aircraft showed only a 7-percent difference. The Ogden total, shown in appendix III, included $2,379 that the Navy added for equipment that Ogden would have to purchase for MCAPP II inspections. Navy officials stated that including the equipment cost was appropriate because the contract required the equipment for the performance of MCAPP II. Ogden officials stated that they did not believe the equipment adjustment was appropriate. They noted that similar equipment had been called for as part of the MCAPP I work. However, because of the infrequency of the repair requirement for components needing the equipment, the Navy had determined it to be more economical to send the parts to North Island rather than purchase the equipment for Ogden. It is not clear why this same procedure would not have been used for MCAPP II repairs at Ogden. The recomputed Navy analysis in appendix III shows Ogden's cost was slightly less than North Island's. Further, if DCAA's revised labor rate of $61.68 had been used, Ogden's cost would have been more than $30,000 less per aircraft. Nonetheless, the decision to move the workload back to North Island might still have been justified by the Navy's assessment of potential cost savings from consolidation.
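The regression result described earlier, in which catapult launches explain only about 9 percent of the variation in labor hours, corresponds to the coefficient of determination (R-squared) of a simple regression. The sketch below shows how such a figure is computed; the data are hypothetical placeholders, not the actual North Island launch and labor-hour records.

```python
import numpy as np

def variance_explained(x, y):
    """R-squared of a simple linear regression of y on x: the share of
    the variation in y that is explained by x."""
    r = np.corrcoef(x, y)[0, 1]  # Pearson correlation coefficient
    return r ** 2                # coefficient of determination

# Hypothetical data: labor hours driven mostly by factors other than
# catapult launches (e.g., flying hours, aircraft age).
rng = np.random.default_rng(0)
launches = rng.uniform(100, 400, size=30)
hours = 9000 + 2.0 * launches + rng.normal(0, 600, size=30)

r2 = variance_explained(launches, hours)
print(f"{r2:.0%} of the variation in hours is explained by catapult launches")
```

An R-squared near 0.09 on data like these supports the report's conclusion that most of the variation in MCAPP hours is driven by factors other than catapult launches.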
We performed a separate analysis comparing estimated costs for performing MCAPP work at Ogden and North Island using (1) the most current data available at the time of our review in March 1995, (2) actual labor hours expended by Ogden and North Island for completed MCAPPs for carrier-based F/A-18s, and (3) actual rates at Ogden and North Island based on actual costs for completed F/A-18 MCAPPs. This analysis is summarized in appendix IV. We adjusted North Island labor hours to account for the labor hours used for concurrent repair of components. We adjusted Ogden labor hours to estimate the additional labor hours required to perform MCAPP II work. Because we compared only carrier-based aircraft completed by each depot, we did not make an adjustment for differences in the proportion of carrier-based and land-based F/A-18s at each depot. We made two estimates of the total cost to the government using the adjusted labor-hour estimates and two different rate estimates. The first estimate used the actual rate at each activity for F/A-18 MCAPPs completed in fiscal year 1995. The second estimate used the actual rate at each activity adjusted for differences in accounting for material costs, the cost of Ogden F/A-18 work that was outside of normal MCAPP requirements, and the additional cost of contract administration at Ogden in dealing with DCMC. Navy officials state that since Ogden’s contract was structured with DCMC as the administrator, an adjustment is not necessary. Using the actual rates, the analysis showed that the cost to the government for F/A-18 MCAPPs was less at North Island. Using the adjusted rates, the analysis showed that the cost was less at Ogden. We did not include in the analysis an estimate for the added costs to the government from having two depots perform F/A-18 work. Also, our analysis did not account for all differences in the work historically performed at the two depots because some differences cannot be accurately quantified. Title 10 U.S.C. 
2469 contains provisions that restrict the movement of depot-level maintenance work from one depot to another or to the private sector if the value of the work is $3 million or more. The legislation requires that, before such work is moved, the Secretary of Defense ensure that the change is made using (1) merit-based selection procedures for competitions among all DOD depot-level activities or (2) competitive procedures for competitions among private and public sector entities. Since the value of the F/A-18 MCAPP work moved from Ogden to North Island exceeded $3 million, the decision was subject to the provisions of the legislation. In a December 20, 1994, letter, the Deputy Under Secretary of Defense for Logistics confirmed that he had reviewed the Navy's decision and supporting analysis. The letter stated that there were only two DOD depot maintenance activities capable of accomplishing the MCAPP work, Ogden and North Island, and that the Navy had performed a merit-based analysis and selection by evaluating proposals from these activities using quality, schedule, and cost criteria. The Deputy Under Secretary stated that the decision was based on the best value to the government and satisfied the requirements of section 2469. Our review indicated that DOD has not developed guidance implementing the legislation that specifically defines the steps, processes, and analyses required for merit-based selection. In other words, the services do not have defined guidance on what they must do to ensure that decisions to move depot workload are based on merit-based selection procedures. Without such guidance, it appears that any selection decision using reasonable criteria and accurate data could be considered merit-based. In the absence of guidance, the Navy established a process it believed was merit-based by using quality, schedule, and cost criteria in comparing F/A-18 MCAPP work at Ogden and North Island.
However, our review indicated the Navy's implementation of that process had a number of shortcomings. For example, as we discussed previously, the Navy did not use the most current and complete data available in determining labor-hour differences between carrier- and land-based aircraft. Using more current and complete data significantly affects the Navy's analysis. In addition, the Navy allowed DCAA only 1 week to determine the rates that were used in the cost comparison. DCAA qualified the information provided to the Navy at the time, and subsequent DCAA analyses have resulted in different rate estimates. Further, the Navy analysis did not adjust for the extra costs incurred by Ogden in operating under DCMC contract administration even though the work could have been performed through an interservice support agreement at less cost. The Deputy Under Secretary stated in the December letter that Ogden and North Island were the only activities considered in the selection decision because they were the only DOD activities capable of performing the F/A-18 MCAPP work. We would agree that at the time of the decision, Ogden and North Island were the only DOD activities performing F/A-18 MCAPP work. However, we question whether Ogden and North Island are the only DOD activities capable of performing the work. Other Air Logistics Centers and Naval Aviation Depots routinely provide depot-level maintenance on several other types of fighter and attack aircraft. While these activities may not have all of the equipment and skills in place to start MCAPP work immediately, it would appear reasonable that with some preparation, other DOD activities could perform the work. In view of the requirement to use merit-based selection procedures among all depot-level activities, other Air Logistics Centers and Naval Aviation Depots could have been considered in the overall analysis.
However, even if other activities had been considered, it is uncertain whether any would have submitted a proposal, and we recognize that start-up costs may have prevented other activities from being competitive. We recommend that the Secretary of Defense develop and implement guidance on using merit-based selection procedures when moving depot workload as prescribed by title 10 U.S.C. 2469. We provided a draft of this report to DOD for comment. DOD provided official oral comments. OSD officials agreed with the report's overall conclusion that the F/A-18 MCAPP workload should be single-sited and also agreed with the recommendation. They stated that events discussed in this report demonstrate the difficulties created when one service's depot is pitted against another service's depot in a competitive environment. However, at the same time, they agreed that this case also demonstrates the potential cost savings that can be generated when competition motivates public depots to implement efficiencies by reengineering depot maintenance processes and workloads. Air Force officials indicated overall concurrence with the report. Navy officials agreed with the overall conclusion that single-siting all F/A-18 depot workload is in the best interest of the Navy. However, they raised concerns that the report did not accurately characterize the reasons for the differences between their analyses and ours. They stated that the Navy's analysis was based on the best information available at the time. We revised the report to reflect the Navy's concerns by more clearly explaining the reasons for those differences. Appendix I describes our scope and methodology. As arranged with your staff, unless you announce its contents earlier, we plan no further distribution of this report until 7 days from its issue date. At that time, we will send copies of this report to the Secretaries of Defense, the Air Force, and the Navy.
Copies will also be made available to others on request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Julia Denman, Gary Phillips, James Ellis, and Donald Lentz. To address our objectives, we performed audit work at the activities involved with the decision to move F/A-18 Modification, Corrosion, and Paint Program work: the Naval Air Systems Command, Washington, D.C.; the Ogden Air Logistics Center, Ogden, Utah; the North Island Naval Aviation Depot, San Diego, California; and the Defense Contract Audit Agency, Salt Lake City, Utah, and San Diego, California. At each activity, we interviewed responsible agency officials and examined documents and other data related to the decision. To identify the adjustments that were made to Ogden’s and North Island’s costs, we reviewed documentation supporting the Navy’s cost analysis and discussed with Navy officials the reasons for and the methodology used for each adjustment. To determine whether the data used in the analysis was accurate and verifiable, we examined source documents supporting the data and performed independent analyses to assess the accuracy of the data and the adjustments made to the data. In preparing our separate cost analysis, we obtained the most current data available based on actual costs for completed work in fiscal year 1995 and made adjustments based on supportable differences in operations at Ogden and North Island. In considering whether the decision to move the F/A-18 work was a merit-based decision as required by law, we reviewed the analysis supporting the Navy’s decision in view of the language in section 2469 of title 10, U.S.C., as amended by section 338 of the fiscal year 1995 National Defense Authorization Act. We also discussed the matter with Navy officials. 
Our examination and analyses used cost data reported by the Air Force's Depot Maintenance Automated Data Systems and the Navy's Naval Air Systems Command Industrial Financial Management System. These standardized, automated cost accounting systems provide the official cost information for the services' depot operations. We did not make an independent assessment of the reliability of the data reported by these systems. In addition, it should be noted that the Air Force and the Navy cost systems are not compatible. There are differences between the systems in the way costs are collected and accounted for. Although we made some adjustments in the data used, we cannot state with certainty that the data used, even with the adjustments, is directly comparable and consistent. Thus, the results of our analysis must be viewed with this limitation. Our review was conducted between January and August 1995 in accordance with generally accepted government auditing standards. The following notes apply to the analysis summarized in appendix IV: Excludes one aircraft the Navy included in its analysis as an MCAPP II that was actually an MCAPP I. North Island F/A-18s were repaired using the MCAPP II specification and Ogden F/A-18s were repaired using the MCAPP I specification. The adjustment estimates the labor hours needed for Ogden to perform the additional MCAPP II inspections and repair work. The adjustment provides for North Island repairing some components that are provided to Ogden as government-furnished equipment. Rates are the actual rates for completed F/A-18 MCAPPs in fiscal year 1995. The adjustment to North Island's rate reduces the rate to account for concurrent repair of components and other material provided at no cost to Ogden. The adjustment to Ogden's rate reduces the rate to account for extra work (wing drops) performed outside of the normal MCAPP work and to account for the estimated extra cost incurred in dealing with the contract administrator, the Defense Contract Management Command.
The total cost estimates were computed by multiplying the adjusted labor hours for each activity by the rate estimates for each activity. For Ogden, $2,379 was added to each result to account for the cost of equipment needed to perform MCAPP II work. This amount was determined by dividing the cost of the equipment by the minimum number of aircraft that would be completed during the 4 option years of the contract. The total cost estimates do not include any estimates for additional costs to the government associated with performing work at two locations.
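The per-aircraft cost computation described for appendix IV can be expressed as a short sketch. Only the $2,379 per-aircraft equipment figure comes from the report; the labor hours, rates, total equipment cost, and minimum aircraft count below are illustrative placeholders, not the report's actual appendix values.

```python
def per_aircraft_cost(labor_hours, rate, equipment_cost=0.0, min_aircraft=1):
    """Estimated cost per MCAPP: adjusted labor hours times the labor
    rate, plus any equipment cost spread over the minimum number of
    aircraft to be completed during the option years."""
    return labor_hours * rate + equipment_cost / min_aircraft

# Illustrative inputs only; chosen so the equipment share works out to
# the report's $2,379 per aircraft ($95,160 over 40 aircraft).
ogden = per_aircraft_cost(labor_hours=4000, rate=65.0,
                          equipment_cost=95_160, min_aircraft=40)
north_island = per_aircraft_cost(labor_hours=4100, rate=66.0)
print(f"Ogden: ${ogden:,.0f}  North Island: ${north_island:,.0f}")
```

As the report notes, these per-aircraft totals exclude any added cost to the government from splitting the work between two depots.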
Pursuant to a congressional request, GAO reviewed the Navy's decision to move F/A-18 depot maintenance work from the Ogden Air Logistics Center to the North Island Naval Aviation Depot, focusing on the cost and performance indicators used to justify the move of F/A-18 repair activities from Ogden to North Island. GAO found that: (1) it is difficult to compare F/A-18 modification, corrosion, and paint program cost and performance data at North Island and Ogden because the Navy does not use the most current information when making adjustments for the amount of work completed at each depot; (2) based on its analysis, Ogden's maintenance costs are slightly lower, but the Department of Defense's (DOD) decision to retain F/A-18 repair capability at North Island is more cost-effective for workload consolidation efforts; and (3) DOD needs to define the steps, processes, analyses, and validation procedures for its future depot-maintenance decisions.
Under Superfund, the federal government can pay for site cleanups or may require the responsible parties to pay for and perform them. Often the construction of cleanup remedies will also require subsequent operations and maintenance (O&M) activities to ensure that the remedy continues to protect human health and the environment. The costs of O&M are borne by the federal government, states, and responsible parties. When the federal government pays for the cleanup, EPA's regulations require that the states pay for most of the O&M activities. If groundwater treatment is necessary at these sites, the federal government pays 90 percent of the O&M costs for the first 10 years of such treatment and the states pay the remaining costs. At sites where no groundwater treatment is needed, EPA turns the responsibility for O&M over to the state after ensuring that the remedy is working properly. The federal government also pays for O&M activities at federal facilities that have sites on their property on the National Priorities List (NPL). When the responsible parties clean up a site, they also pay the costs of O&M activities. EPA monitors conditions and O&M activities at all these sites to determine if the sites' O&M plan is being followed. At those sites that currently can be used only in a limited way because waste remains in the soil or groundwater, EPA's site project managers are also required to conduct a formal review of conditions at least every 5 years—known as a "5-year review." When Superfund was reauthorized in 1986, it called for EPA to prefer treating the waste in the highly contaminated areas of a site over containing such waste because treatment was considered to be a permanent remedy.
For example, in areas where soil is highly contaminated, EPA is to prefer treating the soil (by, for example, solidifying it to immobilize contaminants or applying a vacuum system to remove contaminants) instead of containing the soil (by, for example, installing a waterproof cover over it). Nevertheless, EPA sometimes selects containment for less-contaminated areas or for waste that cannot be treated successfully or cost-effectively—for example, large volumes of landfill waste. At sites where groundwater is an actual or potential source of drinking water, the law requires that the groundwater be treated until it reaches the standards established in the Safe Drinking Water Act. Almost two-thirds, or 173, of the 275 sites we reviewed where the cleanup remedy is in place will require long-term O&M activities to ensure that the cleanup remedy continues to protect human health and the environment. Specifically, we found the following: 60 of the sites use waterproof covers of clay or other materials to physically contain hazardous waste or contaminated soil. These covers prevent exposure to the waste and reduce the level of contaminants entering the groundwater. At these sites, maintenance—such as erosion control and periodic inspections—is required for an indefinite period. (See app. I for more details on O&M activities at specific sites.) 61 of the sites pump and, in some cases, treat groundwater as the primary cleanup remedy. At these sites, pumps and treatment systems will need to be operated and maintained, the equipment kept in repair, and the groundwater’s quality monitored until the cleanup standards are reached. 30 of the sites use both waste containment and groundwater treatment technologies in combination to address surface and groundwater contamination. At these sites, erosion control, inspections, operation of pumps and treatment systems, and groundwater monitoring will be required. 
22 of the sites require local governments or landowners to restrict land or water use on or near the site to protect the cleanup remedy or to prevent the public from being exposed to hazardous waste. Such controls include closing drinking water wells, prohibiting the drilling of new wells, and/or imposing restrictions on deeds. 102 of the sites require little or no O&M because EPA decided no cleanup was needed or selected a remedy that required no O&M, such as treating surface waste. Figure 1 shows the distribution of the O&M activities that will be required at the 275 sites. [Figure 1 segments: Containment and Groundwater Pump and Treat (30 sites); Groundwater Pump and Treat (61 sites); Use Controls (22 sites, 8 percent)] Groundwater "pump and treat" requires extracting water through pumps and treating the water to reduce contaminants. Use controls require monitoring and controlling local land or water use through fencing and/or deed or other restrictions. Sites using containment and/or groundwater pump and treat may also require use controls. The percentages used in this figure reflect information on the sites as of May 1995. We estimate that the federal government, states, and responsible parties will spend $32 billion for O&M costs over the next four decades; EPA estimated that they will spend $37 billion over this period. The states and responsible parties will bear most of these costs. (See app. II for information on how these estimates were developed.) On the basis of our analysis of EPA's O&M database, we estimate that $32 billion will be required for the O&M activities associated with the cleanup plans already approved or projected to be approved through fiscal year (FY) 2005. The sites that have already been placed on the NPL represent $25 billion, or 78 percent of that total, and the sites that will be added to the list during FY 1995 or later represent an additional $7 billion. (See app. II for a comparison of EPA's and our methodologies for estimating future O&M costs.)
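The federal-state cost-sharing rule described earlier (for federally funded cleanups with groundwater treatment, the federal government pays 90 percent of O&M for the first 10 years, and the state pays the rest of those years and all subsequent costs) can be sketched as follows. The $500,000 annual cost and 30-year period are hypothetical figures for illustration.

```python
def om_cost_split(annual_cost, years, federal_share=0.9, federal_years=10):
    """Split groundwater-treatment O&M costs between the federal
    government and the state: the federal government pays federal_share
    of the first federal_years; the state pays everything else."""
    shared = min(years, federal_years)
    federal = federal_share * annual_cost * shared
    state = ((1 - federal_share) * annual_cost * shared
             + annual_cost * max(0, years - federal_years))
    return federal, state

# Hypothetical site: $500,000 per year over 30 years of treatment.
fed, state = om_cost_split(500_000, 30)
print(f"Federal: ${fed:,.0f}  State: ${state:,.0f}")
# Federal: $4,500,000  State: $10,500,000
```

The example shows why state costs grow over time: after the first decade, the state carries the full annual cost for as long as treatment continues.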
While the annual O&M costs were estimated at $148 million in FY 1994, these costs will increase over time. We estimate that the annual costs to the federal government, states, and responsible parties will peak at $1 billion in FY 2010. This figure reflects (1) the substantial increase in completed cleanups requiring O&M that EPA projects by the end of the century and (2) the fact that O&M is typically expected to last at least 30 years. We expect that federal costs will become relatively level over the next few decades because EPA has to pay for O&M only at the sites where groundwater is being treated, and only for 10 years. However, the states’ costs will continue to increase as EPA turns these sites over to the states, which must continue to perform O&M activities for 20 years or more. Figure 2 shows the cumulative costs to all parties for the cleanup plans already approved or projected for approval through FY 2005. These projections are based on the site cleanup plans signed during fiscal years 1982 through 2005. If additional Superfund cleanups are planned after that period, the total O&M costs will also increase. Whether the states will be able to meet these future O&M obligations is not clear. In a recent report, we found that the states, because of their resource constraints, are already having difficulty in meeting federal environmental requirements in two water programs and in overseeing facilities handling hazardous waste. Only five of the Superfund program managers we interviewed from eight states said they had done any forecasting to determine their future O&M costs. The federal government, states, and responsible parties can expect to pay an average of $12 million over 30 years for the O&M associated with a single cleanup plan. These costs vary according to the type of activities required. 
For example, we found the following: When the cleanup remedy uses a technology designed to contain surface waste, the ongoing O&M activities after the containment system is built could typically cost $5 million over 30 years. When the cleanup remedy includes treating groundwater, operating and maintaining the treatment plant and water pumps after construction could typically cost $17 million over 30 years. When the cleanup remedy calls for treating surface waste or contaminated soil, additional O&M activities are not required. The actual O&M costs may eventually be greater than these estimates. When developing estimates of O&M costs, EPA generally assumes that O&M activities will be required for 30 years. However, EPA recently surveyed its regional project managers and found that about 20 percent of cleanups will require O&M for more than 30 years. For example, the sites where waste is contained require O&M activities—to inspect and repair the cover—indefinitely. Furthermore, because these containment remedies have been in place for less than 10 years, the long-term repair costs are not yet known. Groundwater treatment generally continues until the cleanup standards are met, but EPA recently concluded that many groundwater treatment systems are not as efficient as was originally hoped. As a result, more than 30 years may be required to reach cleanup goals, primarily because of contaminants in groundwater that are heavier than water and thus very difficult to extract. EPA estimates that these contaminants may be present at about 60 percent of the sites where the groundwater is contaminated. O&M for groundwater treatment constitutes the majority of the costs that the federal government, states, and responsible parties face. We estimate that the O&M costs for cleanups that only treat groundwater represent about 47 percent of the anticipated costs. 
Furthermore, we estimate that the O&M costs for cleanups that combine treating groundwater with containing waste represent about 36 percent of all O&M costs. For cleanup remedies in which surface waste is contained but groundwater is not treated, the O&M costs constitute about 12 percent of the costs that the federal government, states, and responsible parties will face. Figure 3 illustrates the share of the O&M costs each party will be expected to pay. Changes in EPA’s policy or in the Superfund law, particularly in the guidelines for selecting cleanup remedies, could alter future O&M costs for the federal government, states, and responsible parties. For example, in recent discussions about reauthorizing the Superfund legislation, it has been suggested that the current preference for treating rather than containing surface waste might be changed to a preference for containing waste. Such a change would most likely lead to increased O&M costs because O&M activities would be required at a higher percentage of sites than is currently the case. (See app. II for information on how other potential policy changes could affect responsibilities for O&M.) EPA is responsible for monitoring O&M to ensure that these activities are performed as planned and that the cleanups continue to protect human health and the environment. However, until recently, the agency has focused on getting sites evaluated and cleaned up rather than on monitoring those sites where the cleanup remedy is in place. EPA is responsible for two types of monitoring: (1) reviewing actions that the states and responsible parties have taken to comply with the sites’ O&M plan and (2) evaluating, at least every 5 years, the condition of certain sites where waste remains on-site. Although O&M has been ongoing at some sites for several years, EPA is just now developing guidance to monitor how the states and responsible parties perform O&M activities. 
In addition, EPA is significantly behind in performing its 5-year reviews. We reviewed O&M activities at 57 sites: 43 sites at which 5-year reviews had been performed (including 3 sites for which we conducted case studies) and an additional 14 sites for which we also conducted case studies. For 11 sites, we found that EPA had not been closely monitoring whether the states and responsible parties were following their required action plans for O&M. At these sites, the plans were not being followed; at some sites, conditions had deteriorated after the cleanup was completed. For example, the states or responsible parties were not maintaining the waterproof covers over contaminated soil, were allowing trees and brush to grow and potentially damage the covers, and were not performing the groundwater sampling called for in the plan. (See app. III for additional examples of EPA's monitoring of O&M activities.) We also found a site at which EPA's monitoring helped to prevent deterioration of the cleanup. At the Lehigh Electric site in Old Forge, Pennsylvania, EPA had removed all surface debris, equipment, and soil contaminated with PCBs. Consequently, in 1986 the site was deleted from the NPL. However, ongoing groundwater monitoring revealed that PCB contamination levels were increasing. As a result, EPA has recommended a new study to determine the source of contamination and possible cleanup methods. EPA currently has no guidance for site project managers on monitoring O&M, but the agency plans to issue a new directive in December 1995. Without guidance on the day-to-day monitoring of O&M activities, EPA's project managers may not be able to adequately monitor the states and responsible parties. More importantly, without guidance these project managers cannot ensure that the cleanups continue to protect human health and the environment. EPA must also complete more formal reviews at some sites at least every 5 years.
The 1986 Superfund reauthorization called for these 5-year reviews to occur at certain future sites where waste remaining after the cleanup prevented unlimited access to or use of the site. Subsequently, EPA decided to also conduct these reviews at certain sites where the remedies were selected before 1986 and at sites where more than 5 years will be required to reach the cleanup goals. As noted above, these reviews are important in that they often identify when O&M activities are being neglected or conditions at the site are deteriorating. Thus, these reviews are needed to ensure that the remedy continues to protect human health and the environment. For example, the 5-year review conducted at the Kellogg-Deering Wellfield Superfund site in Norwalk, Connecticut, identified problems with groundwater sampling. The site’s responsible party was not sampling the groundwater, as required, at some wells used for monitoring. EPA’s purpose in requiring the groundwater sampling was to provide an “early warning system” to detect the migration of contaminants. As part of ongoing work at other areas of the site, EPA has now approved a sampling plan that will monitor the cleanup’s effectiveness. In another example, a 5-year review identified problems at the Mowbray Engineering site in Greenville, Alabama. No maintenance had ever been performed at the site, and trees were growing on the landfill cover that had been placed over the contaminated soil. Despite the benefits of the 5-year reviews, EPA’s Inspector General found that EPA has a significant backlog of such reviews. EPA officials told us that 66 reviews had been completed as of August 31, 1995, and that an additional 84 are due by September 30, 1995. The officials expect that most of these unfinished reviews will not meet the deadline. As a result of this backlog, the agency may not be aware of problems that may be occurring at other Superfund sites. 
EPA is trying to reduce the size of the backlog by verifying which sites need a review and when it is due. The agency has also decided to narrow the scope of the review at those sites where the cleanup remedy is not fully in place. EPA’s Inspector General concluded that adding 5-year reviews to the tasks for which regions have assigned annual targets could give the regions an incentive to improve performance. To address these concerns, the Assistant Administrator for Solid Waste and Emergency Response is taking measures to set more specific deadlines for 5-year reviews and to establish accountability for completing them. The majority of sites in the Superfund program will require long-term operations and maintenance, especially those sites requiring waste containment or groundwater treatment. These operations and maintenance costs will constitute a substantial portion of the funds the federal government, states, and responsible parties spend to clean up the environment even after they have paid millions of dollars to construct the required cleanup remedy. Because operations and maintenance costs largely depend on the remedies selected for Superfund sites, the level of these costs will be strongly influenced by policy decisions, such as whether the cleanup remedies emphasize treatment or containment. Although some state officials told us that they expected operations and maintenance to become a considerable burden in the coming decades, most state officials we interviewed had not attempted to forecast the actual amount of these costs. Oversight of operations and maintenance has been given a lower priority than other Superfund activities that EPA must implement and monitor. As a result, the states and responsible parties have not always performed the operations and maintenance activities required. The guidance that EPA intends to develop on how to oversee operations and maintenance activities should help to remedy this situation. 
Because EPA has responded to the Inspector General’s findings on 5-year reviews by developing plans to track the reviews more closely and establish accountability for completing them in a timely manner, we are not making any recommendations in this report. We provided copies of a draft of this report to EPA for its review and comment. On August 30, 1995, we met with officials from EPA’s Office of Emergency and Remedial Response—the office charged with implementing the Superfund program—to obtain the agency’s comments. These officials included, among others, the Acting Deputy Director of the Hazardous Site Control Division—the division responsible for policy on operations and maintenance. These officials told us they agreed with the facts and findings in the report and were pleased with its objectivity and accuracy. They also suggested a number of technical corrections, which we have incorporated in the report. To determine the extent of O&M required at Superfund sites, we reviewed information about the 275 sites where the cleanup remedy has been built and determined whether the sites would require O&M. To project future O&M costs to the federal government, states, and responsible parties, we used and modified an EPA database of estimates of the O&M costs associated with individual cleanup plans. We obtained this database from EPA’s Office of Emergency and Remedial Response. We combined data from this database with information in a database that we had previously developed on cleanup remedies in order to determine the O&M costs associated with different types of cleanups. In addition, we conducted case studies at 17 sites to determine the actual O&M activities and costs at these sites. We also interviewed state Superfund program managers and EPA site cleanup managers for information on O&M activities and expenditures at the sites and on the states’ financial capacity to fund O&M. 
We reviewed EPA’s draft guidance on O&M and the agency’s guidance on 5-year reviews; we also used information from an evaluation of 5-year reviews by EPA’s Inspector General. We interviewed EPA headquarters and regional managers about EPA’s policy, guidance, and progress on 5-year reviews. We also reviewed and evaluated 43 reports on 5-year reviews that had been completed through March 1995. We conducted our work between August 1994 and September 1995 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Administrator of EPA. We will also make copies available to others on request. Please call me at (202) 512-6112 if you or your staff have any questions about this report. Major contributors are listed in appendix IV. We conducted case studies of hazardous waste sites in order to acquire information on actual experiences with and costs for operations and maintenance (O&M). We found 17 sites where the remedy had been built and that met the following criteria: The cleanup was funded by the federal government; the Environmental Protection Agency (EPA) had constructed either a groundwater pump and treat remedy or a waste containment remedy, thus requiring O&M; and EPA had completed construction of the cleanup at least 2 years before we began this work. Table I.1 summarizes information for each of the 17 sites, including the location, type of remedy, and estimated and actual costs incurred by EPA and the states for O&M at each site. 
[Table I.1, showing the O&M costs incurred by the states at the case-study sites, is not reproduced here; the reported amounts ranged from $5,422 over 13 months to $1,312,832 over 3 years.] EPA used estimates of O&M costs, developed as part of each cleanup plan, to forecast the total O&M costs as well as the states’ share of these costs for all current and anticipated Superfund sites. For sites expected to be listed on the National Priorities List (NPL) through fiscal year (FY) 2005, EPA estimated that the total O&M costs will be $37.3 billion and that the states will pay $11.9 billion of this total. In developing these estimates, however, EPA did not separately forecast the O&M costs that the federal government and responsible parties will be expected to pay. In addition, the estimate of average O&M costs that EPA used to forecast O&M costs did not distinguish among the types of cleanups. These costs can vary widely depending on the type of cleanup selected. We obtained EPA’s database of the O&M estimates to make additional cost projections, including (1) the O&M costs that the federal government will be expected to pay, (2) the O&M costs that the responsible parties will be expected to pay, (3) the average O&M costs for those sites with and without groundwater contamination, and (4) the proportions of the total forecast O&M costs that are for current Superfund sites and sites EPA anticipates adding to the NPL in the future. We estimated that the total O&M costs for cleanup plans expected to be signed through FY 2005 will be $32 billion, with the federal government, states, and responsible parties paying about $5, $8, and $18 billion, respectively.
Our estimate of total O&M costs is lower than EPA’s estimate because we (1) used a consistent discount rate of 6 percent to better represent the actual discount rates used by EPA’s project managers to estimate present-value figures, (2) removed costs in some cleanup plans that EPA inadvertently classified as O&M costs, and (3) calculated and used a different, lower average O&M cost—$337,000 per year—for each cleanup plan as opposed to the average cost of $434,000 per year calculated by EPA. This different average annual cost resulted both from decreased O&M costs for some cleanup plans because of the lower discount rate we used and from our inclusion of cleanup plans which involved no O&M costs when calculating the average. For its analysis, EPA began with the O&M estimates for the sites with cleanup plans signed during FY 1982 through 1992. To project future O&M costs for the cleanup plans it signed during FY 1993 and 1994, in addition to those it anticipates signing during FY 1995 through 2005, EPA used an average O&M estimate of $434,000 per year for each cleanup plan. On the basis of historical data, EPA anticipates preparing 175 cleanup plans per year. The 1,105 cleanup plans signed during FY 1982 through 1992 reported O&M estimates as either present-value figures or annual figures. To use the estimates reported in present-value figures in its analysis, EPA annualized the estimates using a 10-percent discount rate in order to calculate the total O&M costs over the duration of the cleanup. Unless the cleanup plan specified otherwise, EPA assumed the O&M activities would continue for 30 years. EPA also assumed a 5-year lag between the time the cleanup plan was signed and the start of the O&M activities, unless actual data were available. To allow for comparison, EPA converted all dollar figures to 1994 dollars, using a uniform 4-percent annual rate of inflation. 
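These conversions can be sketched in a few lines. The $4 million present value and the 1990 plan year below are hypothetical; the discount rates, the 30-year duration, the 1994 base year, and the 4-percent inflation rate are those described above:

```python
def annualize(present_value, rate, years=30):
    """Convert a present-value O&M estimate into an equivalent constant
    annual cost over `years`, using the capital-recovery factor."""
    return present_value * rate / (1 - (1 + rate) ** -years)

def to_1994_dollars(amount, plan_year, inflation=0.04):
    """Restate a cost from `plan_year` in 1994 dollars, using the uniform
    4-percent annual inflation rate EPA applied."""
    return amount * (1 + inflation) ** (1994 - plan_year)

pv_estimate = 4_000_000  # hypothetical present-value estimate from a 1990 plan
at_10_percent = annualize(pv_estimate, 0.10)  # the rate EPA applied
at_6_percent = annualize(pv_estimate, 0.06)   # the rate GAO used instead
print(f"annual O&M at 10 percent: ${at_10_percent:,.0f}")
print(f"annual O&M at  6 percent: ${at_6_percent:,.0f}")
print(f"the 6-percent figure in 1994 dollars: "
      f"${to_1994_dollars(at_6_percent, 1990):,.0f}")
```

Because the capital-recovery factor grows with the discount rate, annualizing this example at 10 percent yields roughly $424,000 per year while annualizing at 6 percent yields roughly $291,000, which illustrates why the higher rate overstates annual O&M costs.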
To estimate the states’ share of future O&M costs, EPA categorized the cleanup plans as either non-groundwater or groundwater cleanups. For EPA-funded cleanups not including groundwater contamination, EPA assumed that the states pay 100 percent of the O&M costs over the entire cleanup period. In the absence of specific data, for the EPA-funded cleanups including groundwater contamination, EPA assumed that contaminated soil or other surface waste was also being cleaned up. EPA then assumed that the O&M costs would be split evenly—50 percent to address groundwater contamination and 50 percent to address the other contamination. Consequently, during the first 10 years of the cleanup, EPA assumed that the states will pay 50 percent of the O&M costs for the surface waste and the federal government will pay the remaining 50 percent of the O&M costs for pumping and treating groundwater. For the remaining 20 years, the state will pay 100 percent of all the O&M costs for these sites. To begin our analysis, we performed quality assurance checks to ensure that the estimates of O&M costs in EPA’s database were valid and reliable. We checked a random sample of cases to see whether the estimates of these costs in the cleanup plans were recorded accurately in the database and were properly adjusted to 1994 dollars. We also checked whether cleanup plans were properly categorized as involving groundwater contamination or not. We did not find any significant discrepancies. Because we were concerned about whether the estimates in the cleanup plans were a good indicator of actual O&M costs, we compared the estimates for the 17 sites for which we performed case studies with the actual O&M costs incurred. Some costs were higher or lower than the estimates, but we did not detect any bias in one direction or the other that would affect the use of these estimates to forecast future costs. (See app. I for additional information on our case studies.) 
When making these checks, we learned that EPA had used a 10-percent discount rate to adjust those estimates that were reported in present-value form. However, most EPA managers had originally estimated these values using a 5-percent discount rate, as prescribed by EPA guidance issued in October 1988, although some managers used other rates. EPA’s use of a 10-percent discount rate to annualize these values thus resulted in an overstatement of the original estimates of O&M costs in the cleanup plans. EPA used this 10-percent rate following the guidelines recommended by the Office of Management and Budget. For our estimates of annual O&M costs, we used a 6-percent rate to better represent the rates actually used by all EPA project managers. For a small number of cleanup plans signed during FY 1988 through 1991, we decided to adjust the estimates of O&M costs. In particular, we determined that for these plans, EPA’s projections of O&M costs included the costs of treating surface waste. According to EPA officials, such costs are not O&M costs but rather cleanup costs. Therefore, we revised the estimates for some of these cleanup plans to reflect this correction. In order to conduct our analysis, we developed a model for estimating O&M costs that considered (1) when cleanup plans were signed, (2) who will pay O&M costs—the federal government, states, or responsible parties, (3) what type of remedy was used (groundwater treatment or not), and (4) whether the costs are for current or future NPL sites. Our model projected future O&M costs for cleanup plans signed during FY 1993 through 2005 on the basis of plans signed after Superfund amendments passed in October 1986 because the changes affected responsibilities for O&M costs. We also assumed that 45 new sites will be added to the NPL each year beginning in FY 1995. 
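Taken together, the signing and listing rates above imply the following projection scale (the counts are our arithmetic; the rates are those stated in the report):

```python
# Scale of the projection model, from the rates stated above.
plans_per_year = 175               # cleanup plans EPA expects to sign annually
signing_years = range(1993, 2006)  # FY 1993 through FY 2005
new_sites_per_year = 45            # sites assumed added to the NPL annually
listing_years = range(1995, 2006)  # FY 1995 through FY 2005

projected_plans = plans_per_year * len(signing_years)
projected_new_sites = new_sites_per_year * len(listing_years)
print(projected_plans)      # 2275 projected cleanup plans
print(projected_new_sites)  # 495 new NPL sites assumed
```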
We based our assumptions about who will pay O&M costs on Superfund regulations, extensive conversations with EPA officials, and our analysis of O&M costs at specific types of sites. In our model, we assumed that the federal government’s O&M costs consist of (1) all O&M costs at federal facilities and (2) the federal portion of O&M costs for the cleanup plans at sites where EPA funds the cleanup and the remedy addresses groundwater contamination. To estimate these latter costs, we took the following steps, using 650 cleanup plans signed during FY 1988 through 1991 and their estimated O&M costs for the first 10 years: First, we estimated the groundwater treatment portion of O&M costs for cleanups addressing both groundwater contamination and surface waste. We estimated this portion to be 75 percent. We arrived at this figure by dividing the average O&M cost for the 220 cleanup plans that address only groundwater contamination by the sum of this average and the average O&M cost for the 168 cleanup plans that involved only containment of surface waste. Second, we estimated the ratio of the groundwater treatment portion of O&M costs to the total O&M costs for all 360 cleanup plans involving groundwater treatment, whether alone or in combination with surface waste containment. We determined this ratio to be 89 percent. As described above, for cleanup plans involving both groundwater treatment and surface waste containment, we assumed that 75 percent of the O&M costs are due to the groundwater treatment. For plans addressing only groundwater contamination, we assumed that 100 percent of the O&M costs are due to groundwater treatment. Finally, we estimated the federal portion of O&M costs for cleanup plans involving groundwater treatment. By statute, the federal government pays 90 percent of the total O&M costs during the first 10 years of such cleanups. 
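The first two steps can be reproduced from the plan counts given above: 220 groundwater-only plans, 360 plans involving groundwater treatment (hence 140 combined plans), and the 75-percent split. Normalizing the average groundwater-only O&M cost to one unit is our own simplification; the report worked from the actual dollar averages:

```python
n_gw_only = 220           # plans addressing groundwater contamination only
n_combined = 360 - 220    # plans combining groundwater treatment and containment
gw_share_combined = 0.75  # groundwater portion of a combined plan's O&M (step 1)

# With the groundwater-only average annual O&M cost set to 1 unit, a combined
# plan costs 1 / 0.75 units in total, of which 1 unit is for groundwater.
groundwater_costs = (n_gw_only + n_combined) * 1.0
total_costs = n_gw_only * 1.0 + n_combined * (1.0 / gw_share_combined)
gw_ratio = groundwater_costs / total_costs  # step 2
print(f"groundwater share of total O&M costs: {gw_ratio:.0%}")  # 89%
```

Multiplying this 89-percent ratio by the statutory 90-percent federal share yields the 80-percent federal portion used in the model.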
Therefore, we multiplied this 90 percent by 89 percent, our estimate of the share of O&M costs represented by groundwater treatment, as described above. This calculation resulted in our assumption that the federal portion of O&M costs for EPA-funded cleanups is 80 percent for the first 10 years of cleanups involving groundwater treatment. This differs from EPA’s assumption that the federal portion is 50 percent because EPA did not go through such steps to more specifically estimate groundwater-related O&M costs. Table II.1 shows where GAO’s and EPA’s assumptions differ on the portion of O&M costs that will be paid by the federal and state governments: for the first 10 years of EPA-funded cleanups involving groundwater treatment, GAO assumes an 80-percent federal and 20-percent state share, while EPA assumes a 50-percent share for each. We assumed that the states will pay 100 percent of the O&M costs for cleanups addressing surface waste that were originally funded by EPA. For EPA-funded cleanups that include groundwater treatment, the states are assumed to pay the remainder of O&M costs that the federal government does not cover. As noted above, we estimated that the federal portion of these cleanups is 80 percent; thus, the states are responsible for the remaining 20 percent of costs for the first 10 years. After the 10th year of O&M activities, we assume that the state pays 100 percent of O&M costs. We identified all the O&M costs associated with responsible parties’ cleanups to estimate their O&M costs. We excluded the O&M costs for cleanups performed jointly by EPA and the responsible parties from estimates of the costs to the federal government, states, and responsible parties since these costs were a small portion of the total O&M costs. Our analysis of the total O&M costs is presented in table II.2. Some totals do not add because of rounding. We reviewed past and current Superfund reauthorization proposals that could affect future O&M costs.
The policy changes under consideration include the following: Changing the preference for treating highly contaminated waste to also consider the option of containing this waste. Because O&M costs are associated with containing waste, not with treating waste, O&M cost responsibilities will fluctuate depending on how often containment options are used. Changing the rules on the time frames for responsible parties’ liability. The responsible parties are currently liable for cleaning up contamination that occurred before Superfund was passed in 1980. If this requirement is eliminated, the federal government’s and the states’ portions of O&M costs would increase. Changing the rules on responsible parties’ liability. The responsible parties may currently be required to pay for all site cleanup, even if they did not contribute all the waste. Proposals for reauthorizing Superfund have called for the federal government to pay for those costs that cannot be allocated to responsible parties, thus increasing the federal share of O&M costs. Changing the current O&M cost-share provisions between the federal government and the states. Recently proposed legislation would have implemented different cost-sharing provisions. Doing so would shift O&M cost responsibilities between the federal government and the states. Limiting the number of new sites added to the NPL. Proposals for reauthorizing Superfund have called for placing a cap on the sites added to the NPL in the future. If this proposal is adopted, the O&M costs for future NPL sites will be lower than the $7 billion we estimated. As stated in the report, monitoring O&M activities is important because it provides assurance that the cleanup remedies continue to protect human health and the environment. 
Both we, through our review of 17 case studies and our analyses of 43 5-year reviews, and EPA’s Inspector General have identified cases in which covers were not maintained and groundwater sampling was not performed as required in the O&M plans. The following cases highlight these instances. In our discussions with officials in EPA’s Region IV, we identified a significant problem in monitoring O&M activities at the A.L. Taylor site (Valley of the Drums), located in Bullitt County, Kentucky. The state is now responsible for monitoring the waterproof cover used to contain chemical waste. However, local land-use controls to prevent activities that could potentially damage the cover have not been implemented. EPA and the state have had difficulty implementing land-use controls because the site is privately owned. Implementing land-use controls could have been critical at this site because the landowner was using the site as a junkyard for cars, potentially damaging the cover. After discussions with the state, however, the landowner agreed to remove the cars. Such a situation stresses the importance of continuous monitoring. Without it, EPA may not be aware of similar problems that may be occurring at other sites. EPA’s Inspector General, during a site visit, identified a significant problem in monitoring O&M activities at the Heleva Landfill site in Lehigh County, Pennsylvania. A pond adjacent to the landfill receives much of the site’s surface water runoff. The pond overflowed onto the waterproof cover, damaging it. In addition, the project manager responsible for monitoring the site was unaware of the requirement to sample surface water, such as the pond, even though the cleanup plan required doing so at least once every 3 months. In fact, no sampling had been performed since the waterproof cover was installed in 1990. Animals had also damaged the cover by burrowing holes in it. 
In our analysis of reports on EPA’s 5-year reviews, we identified instances in which EPA had developed recommendations to address problems with maintaining covers. For example, at the Mowbray Engineering site in Greenville, Alabama, EPA recommended that the responsible party mow the cover regularly to prevent grass from growing too high. In addition, EPA recommended that the responsible party prevent trees from growing on top of the cover because the tree roots can potentially damage the cover. EPA also recommended that the fence surrounding the site be cleared of kudzu, a vine-like vegetation, so that the fence can be readily inspected. We also identified some sites in which EPA developed recommendations to address problems with sampling the groundwater. In EPA’s 5-year review of the Middletown Road Dump site in Annapolis, Maryland, EPA recommended that further groundwater sampling be conducted. Although EPA collected groundwater samples during its review, it could not conclude whether the groundwater was still a health threat. Therefore, additional sampling was recommended. In another example, for the Triangle Chemical Company Superfund site in Bridge City, Texas, EPA recommended that the state conduct groundwater sampling more frequently because contamination levels are still above acceptable levels.
Philip Farah, Economist
Fran Featherston, Senior Social Science Analyst
Mary D. Feeley, Evaluator
Josephine Gaytan, Information Processing Assistant
Angelia Kelly, Evaluator
Eileen Larence, Assistant Director
Rosa Maria Torres Lerma, Evaluator
Mehrzad Nadji, Assistant Director for Economic Analysis
Katherine Siggerud, Evaluator-in-charge
GAO reviewed operations and maintenance (O&M) activities at former or current National Priorities List (NPL) Superfund sites where remediation construction has been completed, focusing on the: (1) extent to which O&M activities are necessary at Superfund sites; (2) costs to the federal government, states, and responsible parties to perform these activities now and in the future; and (3) Environmental Protection Agency's (EPA) actions to help ensure that O&M activities continue to protect human health and the environment. GAO found that: (1) the federal government, states, and responsible parties must perform long-term O&M activities at almost two-thirds of the 275 NPL sites reviewed; (2) these O&M activities include controlling erosion of landfill covers, treating contaminated groundwater, and implementing and enforcing land and water use restrictions; (3) the nationwide cost of current and future O&M activities will be about $32 billion through fiscal year 2040, much of which will be borne by the states and responsible parties; (4) the cost of a given site remedy will depend mainly on what remedy type EPA selects and the duration of the O&M activities; (5) until recently, EPA has focused on the evaluation and cleanup of Superfund sites, but EPA monitoring of O&M activities is crucial because states and responsible parties do not always follow their O&M plans and site conditions can deteriorate; (6) EPA is just now developing guidance for site project managers on monitoring O&M activities to ensure that O&M plans are followed; and (7) EPA has a significant backlog of 5-year reviews and may not be aware of deteriorating conditions at some sites.
Because U.S. school districts differ a great deal in size and scope, their involvement with the federal government can also vary. Districts that offer a wide variety of educational choices—such as magnet schools, vocational programs, and programs for students with limited English proficiency—may receive federal assistance from a large number of sources and be subject to additional program requirements. Other (often smaller) districts may be involved with the federal government through a smaller number of more broadly targeted programs and requirements, such as those that provide funding for teacher training. Although there has been widespread agreement on the need to improve the educational system, school districts are central to some education reform efforts and more peripheral to others. Certain education reform movements, like charter schools, have minimized the role of the school district. In contrast, in other reform efforts the district plays a central role in improving curriculum, instructional methods, student assessment, and professional development. Many proponents of all varieties of education reform—regardless of their view of school districts—regard flexibility as a key element in efforts to improve teaching and learning. However, little information is currently available about what types of flexibility are thought to be needed and how federal flexibility initiatives have been used. U.S. school districts vary greatly in size, from rural districts with only one school to citywide systems encompassing hundreds of schools and hundreds of thousands of students. In enrollment, school districts range from some with only a few students to New York City with over 1 million. As shown in figure 1.1, while only a few districts had enrollments of over 100,000 students in school year 1995-96, a much larger number of districts reported serving fewer than 150 students. 
These small districts, although numerous, accounted for less than one-half of 1 percent of total student enrollment. Some districts (usually smaller ones) served only younger children or only secondary students. Although about 74 percent of school districts provided instruction from the beginning of school through 12th grade, 22 percent of school districts provided instruction only through grade 8, and the remaining 4 percent had a low grade of 7 or higher and a high grade of 12. Districts with more than 100,000 students accounted for about 12 percent of student enrollment but made up less than 1 percent of all school districts. Similarly, while more school districts were located in rural areas, urban districts served a greater proportion of students. Differences in the composition of the student population are sometimes reflected in the specialized programs found in many districts and schools. In 1993-94 (the most recent school year for which data are available), 43 percent of public schools provided English as a Second Language (ESL) programs, and 18 percent of public schools provided bilingual programs for students with limited English proficiency. Many districts offer vocational-technical programs, which provide skill training in specialized areas as well as academic instruction. Some districts offer “magnet” programs, which focus on a special subject theme. Some districts have established alternative schools; in 1993-94 there were approximately 2,600 of these schools in the country. Districts that have a wide variety of specialized programs may receive federal assistance from separate funding streams that target these specific areas. For example, in 1997, 64 districts that were implementing desegregation plans received federal funds for magnet school programs, and, in 1996, 509 school districts received federal funds to support programs for children with limited English proficiency.
As a result, larger districts and districts with a wider variety of programs and populations may receive federal assistance from a larger number of sources and also be subject to a greater number of federal program requirements. One large urban district we visited received a total of 27 federal grants, from agencies as diverse as the Department of Education, the Department of Housing and Urban Development, and the Department of Agriculture. Many of these programs targeted specific areas or specific groups of students, such as students with limited English proficiency, neglected and delinquent youth, and Native Americans. Another, smaller district received only four federal grants, all from the Department of Education and all targeted fairly broadly. Many Americans see the nation’s public elementary and secondary schools as average at best. With American students’ achievement in mathematics and science lagging behind that of their peers in other industrial nations, dissatisfaction with the educational system has fueled calls for widespread systematic reform. Various education reform efforts have adopted differing approaches toward the role of the school district. Some initiatives view school district organizations as part of the problem, while others are designed to rely strongly on district leadership. Two education reform strategies—charter schools and school-based management—have attempted to expand the role of principals and other school administrators, reducing or even eliminating the role of the school district in making key decisions on educational programs. Charter schools are schools formed by parents, teachers, and/or community members who collectively determine the school’s structure, mission, and curricular focus. Charter school laws essentially allow entities other than school districts to start and operate public schools. 
Charter schools therefore are generally not required to follow all policies, procedures, and requirements of the local school district. In addition, although they receive public funds and must comply with federal requirements, charter schools are generally designed to operate with more autonomy from state and local regulations. Charter schools are responsible for meeting the terms of their charters, however, and these charters may include specific educational outcomes. Proponents of charter schools believe that this freedom from district-level and state-level requirements will lead to better academic outcomes both at charter schools and at the surrounding district schools. Another type of reform initiative—school-based management—has also focused on freeing building-level administrators from some of the restrictions imposed by district-level management. Initiatives in school-based management have become common, particularly in light of perceptions that district bureaucracies and school boards are unresponsive and impose restrictive requirements that hinder the ability of individual schools to meet their unique needs. Under school-based management, the school district typically delegates some control over decisionmaking on budgets, personnel, and/or instructional programs to school administrators, teachers, parents, or other members of the community. For example, school-based management could allow individual schools to choose to offer either half-day or full-day kindergarten, instead of following a uniform policy that was decided at the district level. Similarly, school-based management could allow individual schools to hire fewer staff and buy more computers (or vice versa), rather than have those decisions made by the district office. Proponents of school-based management believe that allowing the people most closely associated with children to make decisions about a school will make the school more responsive to children’s needs. 
Although charter schools and school-based management primarily focus on administration at the school level, other reform efforts involve changes in curriculum, instructional strategies, professional development, and student assessment that are implemented on a districtwide basis. For example, one school district we visited established new standards, curricula, and assessments at the district level aimed at increasing accountability for student learning. Similarly, in our 1993 report on education reform efforts, we reviewed another school district that had developed its own school improvement model, which was subsequently adopted by other districts. Another district in this study had adopted a policy of testing students frequently and evaluating teachers on the basis of student performance on certain tests that related specifically to the district’s standard curriculum. Whether reforms are initiated outside of the school district (as in charter schools) or at the school or district level, many proponents of education reform believe that efforts to improve teaching and learning will be more successful if local school districts have more flexibility to adapt federal programs to local needs. The National Governors Association, along with several major education associations, has advocated increased flexibility for school districts, including a loosening of federal and state requirements that are thought to potentially impede new or innovative reform approaches. Over the past 5 years, the Congress has enacted several provisions designed to provide schools and districts with more flexibility in how they use federal funds. In addition, for certain areas the Congress has given the Department of Education the authority to grant waivers—temporary exceptions to certain federal requirements—to states and school districts. While some experts have welcomed these provisions, other observers have urged caution. 
Because many federal and state restrictions were established to protect students, they fear that important social purposes—such as protecting civil rights—may be compromised if federal restrictions are loosened or lifted. Despite the importance of this debate, little information has been available about the issues school districts face in implementing various federal requirements, what flexibility they would find most useful, and how existing flexibility has been used.

The Chairman of the House Committee on Education and the Workforce, the Chairman of the House Committee on the Budget, and the Chairman of the House Subcommittee on Human Resources, Committee on Government Reform and Oversight, asked us to report on how federal requirements affect local school districts. Specifically, the objectives of this study were to describe the major federal requirements that affect school districts; describe the issues that local school districts face in implementing these requirements; and analyze the impact of the Department of Education’s flexibility initiatives on school districts’ ability to address these implementation issues.

Our approach relied on data from a variety of sources. We interviewed officials from 87 school districts using a variety of methods—telephone interviews, group interviews, and site visits. We also interviewed representatives from 15 major education associations and federal and state program officials. We surveyed officials from all 50 states to obtain information on the use of financial flexibility mechanisms by states and local school districts. We reviewed the education finance literature and analyzed federal laws and regulations applying to school districts. In addition, we analyzed data from the Department of Education on school district characteristics and the use of federal waivers. We did not verify the data we obtained from the Department of Education. 
We focused our review on 36 federal programs or mandates that education experts, school district staff, state and federal officials, and the literature identified as having a major impact on school districts. We reviewed the relevant legislation (and, in some cases, the regulations and/or agency guidance) to obtain descriptive information about the programs or mandates. These 36 requirements were defined according to their impact on school districts, not necessarily by law or by program. For example, the Americans With Disabilities Act (ADA) contains provisions regarding the accessibility of public buildings and also regulates the employment protections extended to persons with disabilities. These provisions clearly have separate and distinct implications for school districts, although they are contained in the same piece of legislation. Therefore, we elected to treat these requirements separately. Similarly, the Elementary and Secondary Education Act (ESEA) and the Improving America’s Schools Act (IASA), which reauthorized ESEA in 1994, deal with many programs in a single legislative act. In other cases, multiple pieces of legislation may provide a vehicle for very similar requirements. For example, the Asbestos School Hazard Abatement Act and the Asbestos Hazard Emergency Response Act (AHERA) established regulations for the management of asbestos in schools. Because both laws affect how schools and districts manage asbestos, we considered these requirements together. Although our list of major requirements is not comprehensive, it does capture the requirements that education experts and district officials viewed as having a significant impact on school districts’ administration and operations. We used several methods to gather information on school districts’ views of the issues that affect their implementation of federal requirements. Early in our study, we conducted two group interviews of school district personnel at major education conferences. 
The district officials who participated in these group interviews represented 15 states from across the country. From these group interviews, and from our interviews with associations and federal officials, we learned that state requirements—and differing state interpretations of federal requirements—can play a crucial role in the implementation of federal laws and regulations. Because school district officials could not, in general, distinguish between state and federal requirements, obtaining the views of district officials nationwide would be problematic because district staff in different states would be responding to different sets of requirements. For this reason, we conducted the majority of our interviews with school district officials in a few states. As a result, when we discussed particular requirements with officials from different districts, we could adequately account for variation across states, although we could not generalize our results to all states. We selected three states—Pennsylvania, Massachusetts, and Louisiana—as the major focus of our study. We chose these states because each had a diverse student population in terms of income, disability status, and urban and rural areas, and because they differed in other characteristics, including the mix of state and local funding for education, the relative amount of state funding provided to poorer and wealthier districts, the number of federal waivers granted to districts in the state, and whether the state had been designated as an “Ed-Flex” state. In each of these three selected states, we obtained detailed information from site visits and from a telephone survey of school district superintendents and other officials. We visited two districts (one relatively large and one relatively small) in each of the three states. (Characteristics of the districts we visited are shown in table 1.1.) 
The districts we visited ranged from a large inner-city district with 257 schools and over 200,000 students to a rural district with 2 adjacent schools and an enrollment of just over 1,000. We selected these districts primarily on the basis of enrollment size, geographic location, and urban/rural mix; where more than one district met our requirements, we made a random selection from these districts. During our site visits, we interviewed the district’s superintendent; the food service director; the assistant superintendent, business manager, or facilities manager; the Title I and special education directors; and directors of other programs (such as vocational education) where applicable. We also visited state officials with responsibility for special education, Title I, and other major programs in each state. In addition, we conducted a telephone survey with officials from school districts in each of these three states. We selected a random sample of school districts in each state, stratified by size. We drew this sample from the Department of Education’s Common Core of Data database, which contains information on the approximately 15,000 school districts in the United States as reported by states and school districts for the 1993-94 school year. In drawing the sample, we eliminated districts that reported no schools for the 1993-94 school year. We verified our data through the current school district listing provided on each state’s Internet site. We eliminated a few districts from the sample because they were no longer operating or had already participated in a site visit or group interview. For each of the 83 districts selected, we sent a letter to the district superintendent asking the superintendent or assistant superintendent to participate in a 1-hour telephone interview with us regarding the implementation of federal requirements. 
Superintendents were invited to include key staff members in the interview (many of them did) or to solicit comments from staff prior to the interview. A total of 59 school districts (71 percent) participated in the survey. In each state, at least 5 percent of the districts in the state participated in the survey. However, due to the small number of total participants and the qualitative nature of many of the questions, the survey was not designed to enable us to project quantitative estimates at the state level. In these interviews, we asked districts about the information and technical assistance they received on federal requirements, the eligibility determination process, federal funding, application processes, accounting and reporting requirements, and other areas. All data were self-reported. Our group interviews, site visits to states and school districts, and telephone survey results also provided important information on how federal flexibility efforts have affected local school districts. We obtained additional information on waivers of state requirements, consolidated planning and reporting initiatives, and financial flexibility mechanisms by surveying education officials in the 50 states. In this survey, we asked state officials about the number and types of waivers granted for state requirements, the extent to which school districts in their state submitted consolidated applications and reports, and the extent to which school districts in their state used certain financial flexibility mechanisms. In addition, we reviewed the Department of Education’s data on waivers and interviewed federal and state officials to discuss their views on flexibility initiatives. We also reviewed the legislation, regulations, and guidance associated with these efforts. Our work was done between September 1997 and August 1998 in accordance with generally accepted government auditing standards. 
School districts serve their communities in several key roles—not only as educators but also as food service providers, employers, and managers of public facilities. In each of these roles, the school district is faced with federal requirements designed to ensure equal educational opportunity, protect the integrity of federal funds, improve quality in key educational areas, and ensure students’ and employees’ safety and health. Many of these requirements are accompanied by federal dollars, although federal funding seldom provides complete support. States play a key role in administering federal programs and also impose their own restrictions on school district activities.

School districts must comply with federal requirements in several areas, including not only education but also environmental protection, employment, and food service. This range of programs and mandates reflects a variety of purposes and objectives. Federal programs and mandates are designed to ensure equal educational opportunity, improve educational quality (especially in certain targeted areas), guard against safety and health hazards, and protect financial integrity. Two of these goals—equal educational opportunity and improving quality—concern how the district provides instruction. Many federal education programs (including some of the largest federal efforts) are intended to ensure equal educational opportunity for children with various types of disadvantages. For example, the federal government funds programs specifically targeted to children with disabilities, poor children, homeless children, and children with limited English proficiency. Other federal education programs are directed not at particular children but at particular topics or subject areas that are thought to have special national or economic importance. Federal teacher training programs, for example, give priority to math and science instruction. 
Through targeted programs, the federal government also earmarks funds for vocational education and for integrating technology into the classroom. Other federal programs and requirements are designed to indirectly facilitate high-quality education by ensuring that students and teachers work and learn in a safe environment. A number of federal requirements, including environmental mandates and nutrition requirements for school meals, aim to ensure or improve students’ safety and health. In addition, those programs that distribute federal dollars carry a concern with ensuring the integrity of those funds. Documentation, spending, and auditing requirements address these concerns. Table 2.1 summarizes the types of objectives for federal programs or mandates and provides examples in each category. Many people think of school districts only as educators, because teaching children is their fundamental mission. However, in addition to their primary function as educators, school districts also serve in other roles, many of which are resource-intensive and of great importance to the community. For example, in addition to operating classrooms, schools operate restaurants—most serving lunch and many serving breakfast. In one rural school district we visited, the single school cafeteria served lunch to about 1,000 students each day—probably more than any other restaurant in the local area. School districts are also employers of teachers, aides, administrators, and custodians. School districts manage one or more public buildings, which may be used by the community for voting, adult education, or recreational activities. In each of these roles the school district is subject to a variety of federal requirements. As educators, districts receive funding from the federal government and in return must follow program requirements. The largest federal education programs provide financial assistance to many school districts, although the programs target specific student populations. 
For example, in school year 1997-98 about 89 percent of the school districts in the United States received funding from the Title I program, which helps school districts finance programs to assist disadvantaged students, particularly in reading and math. Along with financial assistance, federal programs come with requirements concerning which students or what subject areas are to be targeted, what records must be kept, and how school districts are allowed to spend federal dollars. Title I requirements specify a formula for how funds must be distributed to schools within a district. Similarly, Perkins Act programs, which support vocational education, require school districts to give priority in allocating funds to sites or programs that have higher concentrations of students with disabilities, economically or educationally disadvantaged students, and students with limited English proficiency. Federal regulations also require school districts (and other recipients of federal funds) to keep records of equipment purchased with federal funds and to submit to an annual financial audit in accordance with the Single Audit Act. The different education programs vary in the extent to which they prescribe and restrict school districts’ use of federal funds. For example, the Safe and Drug Free Schools program is often considered a flexible program; within the broad guidelines established by the statute, school districts are free to develop their own programs. In contrast, the Individuals With Disabilities Education Act (IDEA) specifies several procedures school districts must follow in providing educational services to children with disabilities. 
Under IDEA, districts must assess a student’s need for additional services; for each student, create an Individualized Education Program (IEP) that details the support services the student will receive; offer services in accordance with the IEP; review each child’s IEP annually and revise it as appropriate; and reevaluate the child’s need for special education services as appropriate, but at least once every 3 years. Other federal requirements also affect how school districts provide educational services, even though they are not associated with any specific federal program. For example, even if a school district does not receive funds under federal bilingual education programs, the district is still required by federal civil rights law to provide meaningful access to education for students with limited English proficiency. School districts also receive federal assistance in their role as food service providers. In fiscal year 1997, nearly 94,000 schools—including almost 99 percent of public schools—chose to participate in the National School Lunch program, serving an average of more than 26 million lunches daily. Nearly 68,000 schools participated in the National School Breakfast program, serving an average of 6.9 million breakfasts every day in fiscal year 1997. Under these federally funded child nutrition programs, school districts receive cash assistance based on the number of meals they serve and the number of low-income children who are served free or reduced-price meals. Schools also receive additional federal support in the form of agricultural commodities such as meats, fruits and vegetables, and dairy products. About 17 percent of the total dollar value of the food served in the school lunch program is provided through commodities. In return for this federal support, schools must provide free and reduced-price meals to children from low-income families and ensure that the meals meet federal nutrition standards. 
Children from families with incomes at or below 130 percent of the poverty level are eligible for free meals. Children from families with incomes between 130 and 185 percent of the poverty level are eligible for reduced-price meals. In addition, as of school year 1996-97, schools must serve meals which meet several nutrition requirements established in the 1990 Dietary Guidelines for Americans, including limiting total fat to 30 percent of calories and limiting saturated fat to less than 10 percent of calories. School lunches must also provide at least one-third of the Recommended Dietary Allowances of protein, calcium, iron, vitamin A, and vitamin C; school breakfasts must provide at least one-quarter of these levels. School districts are generally subject to the same workplace regulations as other employers. For example, antidiscrimination laws generally apply to school districts as well as to private businesses. Like other employers, school districts are generally prohibited from discriminating against employees because of race, color, religion, sex, or national origin by Title VII of the Civil Rights Act; similarly, the Age Discrimination in Employment Act prohibits discrimination against workers aged 40 and over. In addition, under the Americans With Disabilities Act (ADA) and section 504 of the Rehabilitation Act, school districts are prohibited from discriminating on the basis of disability and required to provide reasonable accommodation to an employee with a disability. Other worker protection legislation also applies to teachers and other school district workers. Although school districts are specifically exempt from the Occupational Safety and Health Act, some states require school districts to adhere to certain workplace safety standards. In addition, school districts are generally required to provide unpaid leave under the Family and Medical Leave Act, and to follow the minimum wage, child labor, and overtime provisions of the Fair Labor Standards Act. 
In addition to these federal requirements, many school districts are governed by collective bargaining agreements that may also establish policies affecting compensation, overtime, and workplace conditions. In school year 1993-94, an estimated 64 percent of all public school districts had a collective bargaining agreement with a teachers’ union or organization.

As managers of public facilities, school districts are responsible for ensuring that these facilities are accessible to people with disabilities. Under ADA and section 504 of the Rehabilitation Act, school districts face accessibility requirements that differ for new and existing buildings. For existing buildings, school districts must operate their programs so that, when viewed in their entirety, the programs are accessible to individuals with disabilities. The law does not require a school district to retrofit each of its existing facilities to make them fully accessible to individuals with disabilities. However, a more stringent standard applies to new construction and to certain renovations of existing facilities; these buildings must be readily accessible and usable by individuals with disabilities and must comply with design standards. School districts also must comply with certain environmental standards where they are applicable. For example, AHERA required school districts to inspect schools for asbestos, to draw up an asbestos management plan that identifies where asbestos is located in the schools, and to reinspect schools every 3 years to ensure that asbestos materials have not become damaged. Similarly, to protect groundwater from contamination, school districts that operate underground storage tanks (UST) must comply with federal and state safety requirements. Certain USTs are required to meet EPA requirements for spill protection and corrosion prevention; owners of affected USTs must upgrade their tanks to meet these standards by December 22, 1998. 
If a UST is found to have a leak, the owner may also be required to take action to prevent further contamination of the soil. Additional requirements may govern school districts’ disposal of hazardous materials (for example, chemicals from a high school science lab). Other environmental requirements may also apply to school districts. For example, one district we visited found that the well used by one school violated EPA standards, and a new well was dug to replace it.

Of the 36 major federal programs or legislative mandates in our review, over half carry some federal funding. Programs that directly support instructional activity (such as Title I and Safe and Drug Free Schools) carry some federal dollars, as do the child nutrition programs that support school food service programs. Programs and requirements less directly related to the educational role of the school district, however, are less likely to provide direct financial assistance. Employment-related requirements, for example, do not provide financial support, and environmental requirements generally come without financial assistance. Indirect federal support—especially in the form of information and technical assistance—is often provided for many federal programs and mandates, whether or not direct financial assistance is also provided. For example, EPA has published documents to provide information on UST requirements, and the federal Department of Education provides support for technical assistance, mainly through state agencies. For many major programs, federal financial contributions do not fully fund the activities these programs support. Federal dollars account for a relatively low share of total education spending (about 7 percent in school year 1995-96), while state and local funds account for about 47 and 46 percent, respectively. Although reliable information on local expenditures for specific program areas is scarce, the available figures show similar results. 
For example, the Department of Education has estimated that in the early 1990s, in 24 states, about $13.9 billion was spent annually to provide services to children with disabilities under IDEA, yet federal funds accounted for only 7 percent of these costs. In our 1993-94 survey of school districts, the average amount of federal funding school districts reported receiving for vocational education equaled only 11 percent of the average amount of funding districts reported receiving from state and local sources. Similarly, one of the large districts we visited received $695,242 in federal bilingual education funds but budgeted about $30 million for bilingual education instruction. Food service programs appear to be an exception to this overall pattern, being fully or nearly fully funded by federal dollars; a research study found that in school year 1992-93, the federal reimbursement rate for a free lunch under the National School Lunch program was approximately equal to the median cost of producing a school lunch. Although it is clear that the cost of many education activities exceeds the overall federal contribution, the precise size of this gap is difficult to determine for specific areas or requirements. Little information is available on the true cost of many education and education-related activities that are supported with federal funds. Even at the level of the local school district, it is usually difficult to determine exactly how much has been spent on different educational activities such as “regular” classroom instruction, special education, dropout prevention, assistance to students with limited English proficiency, and so forth. Some of the districts we visited set up their budgets to provide such program-specific information, but others did not. 
When districts do generate their budgets on a program-specific basis, their definitions and methods of classifying expenses may be inconsistent with those of other districts, making comparisons across districts often difficult and sometimes impossible. These difficulties are further complicated by the wide variation in per pupil spending across school districts. For example, for the six districts we visited, the highest-spending district spent over twice as much per student ($7,804.06) as the lowest-spending district ($3,801.50). Factors such as district size, geographic differences in salaries and other expenses, the age and condition of school facilities, and the composition of the student body (such as the number of students with disabilities or with limited English proficiency) can contribute to such differences and make it difficult to say what level of expenditure is adequate or appropriate for a particular program. Several programs provide funding directly from the federal government to school districts. These programs include Impact Aid, which provides general financial assistance to school districts adversely affected by federal property or by large numbers of federally connected children, and Head Start, which provides a broad array of educational and social services to low-income children through local agencies (some of which are school districts). However, 17 of the 23 programs we reviewed that provide federal funding distribute these dollars through the states. The largest federal education programs—Title I and IDEA, which provided $8 billion and $4.8 billion in fiscal year 1998, respectively—distribute their funds through state education agencies. The role played by the state agency differs substantially across various federal programs. For example, under the Adult Education program, states have considerable discretion in distributing federal funds because each state can determine the criteria it will use to award competitive grants. 
In contrast, Eisenhower Professional Development program funds (which finance teacher training) are merely passed through the state, with district allocations already determined by the formula set out in the federal statute. The role of the state in program administration also varies. In the National School Lunch program, the state plays a key role in selecting and distributing federal commodities. For many programs, states play a key oversight role as well. Under IDEA, for example, the states assume a major part of the responsibility for ensuring that school districts comply with the law’s requirements. For most of the programs we reviewed that provide federal funds, school districts must submit plans or applications to either the state or the federal government. These plans or applications generally contain information on how the funds will be used, certifications that federally prescribed procedures will be followed, and assurances that federal funds will be expended in accordance with the purpose of the program. For certain programs in some states, school districts must also request and receive reimbursement from the state, rather than receiving grant funds up front. In fulfilling their role as administrators of federal programs, state governments sometimes place additional requirements on school districts. For example, federal requirements allow districts to purchase equipment costing under $5,000 without separate documentation; however, in one state we visited, a lower threshold of $500 had been set by the state government. In cases where a state requirement arises from the implementation of a federal program or regulation, it becomes especially hard to distinguish a state requirement from a federal one. From the point of view of the local school district, it may not be important where the requirement originated because the district must comply in any case. 
Staff in most of the school districts we visited told us that they could not tell or did not know which requirements were state and which were federal, and education experts told us that this was probably true of most districts nationwide. States have also imposed many requirements on educational programs in areas unregulated by the federal government, such as curriculum and teacher certification. For example, by 1996, 44 states had set minimum curriculum requirements for high school graduation, 43 states required districts to offer a half-day or full-day kindergarten, and 46 states established professional development requirements or continuing education requirements for teachers. States also specify a required number of days or hours for the school year. In addition to federal and state requirements, school districts are also affected by requirements imposed by local governments and by the courts. Local requirements such as building codes can affect school district operations. Some school districts are also affected by judicial decisions. For example, one district we visited had been required by a court order to fund several programs as a result of a long-standing desegregation lawsuit. In the area of special education, judicial decisions can affect what services the district provides and which students receive them. School district officials generally expressed support for federal initiatives, recognizing the importance of such goals as ensuring equal educational opportunity and protecting children’s health and safety. At the same time they noted their concerns with implementation issues that make achieving these goals more difficult. Rather than focusing on a single federal program or requirement, these concerns extend to a wide variety of implementation issues that affect all phases of program and service delivery. 
School districts’ key implementation issues include: (1) the lack of adequate information on federal requirements and federal funding, which can make school districts less efficient and less innovative in implementing federal requirements; (2) program and facilities costs associated with federal requirements, as well as the administrative costs associated with federal programs; and (3) the logistical and management challenges presented by certain federal requirements, which can make it difficult to meet federally prescribed timelines and to find the qualified staff or providers to successfully implement federal requirements. In confronting this wide variety of implementation issues, school district officials expressed a desire for more information, additional funding, and greater procedural flexibility. School district staff need extensive information about federal requirements and funding allocations. To do their jobs well, district administrators need to know the requirements associated with the various programs, as well as the more broadly applicable environmental and employment regulations. Although state agencies provide technical assistance, district officials reported crucial information gaps. The number and complexity of federal requirements, combined with the challenges posed by staff turnover, make keeping up with the requirements a challenge for both district and state staff. Without sufficient information about federal requirements and funding, school districts may spend funds unnecessarily, lose opportunities to structure programs to meet local needs, and face uncertainty that limits their ability to conduct financial planning. School district officials need to have detailed knowledge of federal requirements in order to design educational programs in compliance with federal laws and regulations and to conduct long- and short-term financial planning. 
However, education experts, school district staff, and state officials agreed that districts often have incomplete information about federal requirements. Because district officials must comply with numerous federal laws and regulations in a variety of complex areas—such as special education, nutrition standards for school meals, and environmental requirements—maintaining detailed knowledge can be difficult. District and state officials told us that the large number of federal laws and regulations often makes it hard to keep informed, especially as requirements and personnel change. In addition, the complexity of certain federal requirements can prove to be a challenge to district program directors. For example, one special education director told us that “you need a law degree and an MBA to understand the special ed regulations.” Although the Department of Education also provides technical assistance, Department officials told us that it is primarily the states that face the challenging task of keeping school districts informed. Our telephone interviews confirmed that states are the school districts’ primary source of information and technical assistance; when we asked district officials whom they called first when they had a question on the Title I or IDEA programs, about 80 percent said that they contacted their state Department of Education. Staff from 88 percent of the districts we interviewed by telephone said that the assistance they received from the state Department of Education was “helpful” or “very helpful.” However, school district officials still faced information gaps that may limit their ability to implement innovative and cost-effective education and support programs. For example, one program director we interviewed told us that her contact at the state was prompt and accurate in responding to questions but did not move proactively to provide information, leaving her unaware of key regulatory provisions and of potential grant opportunities. 
Another program director expressed a similar concern and said that it was especially difficult to keep up with changes in the law without added clarification from state or federal officials. State and federal agencies face several challenges in using technical assistance to address districts’ need for additional information. For example, turnover of key administrative personnel at both the state and the district level can have a negative impact on the effectiveness of technical assistance. A survey of 49 large urban school districts conducted by the Council of the Great City Schools found that the average tenure of superintendents in these districts was less than 3 years. Several education experts and district officials told us that states also experience personnel turnover, and as a result some states may face shortages of knowledgeable staff to provide technical assistance. In addition, federal and state officials told us that they sometimes find their efforts to use information technology (such as the Internet and e-mail) frustrated by a lack of such technology at the local level. School districts’ lack of knowledge may have a major effect not only on their ability to administer federal programs, but also on their ability to implement local initiatives to improve teaching and learning. School district officials need to know what is required of them—both financially and programmatically—and what assistance they will receive. Without sufficient information on federal requirements and funding, districts may spend funds unnecessarily, lose opportunities to structure programs as they desire, and face uncertainty that limits their financial planning. Misunderstandings about the scope of requirements may lead school districts to spend more money than necessary in complying with federal requirements, particularly in environmental areas.
As a result, districts may lose the opportunity to use these funds for other programs designed to achieve key local objectives. For example, the superintendent in one district we visited told us that district officials did not know what to do about asbestos in the schools; in retrospect he believed that the district might have been able to save some money if they had had more detailed knowledge about the asbestos requirements at the time. An official from another district told us that officials had not fully understood all the requirements they had to follow when renovating their gymnasium; if they had known about all the regulations before issuing the bond to pay for the renovation, they would have tried to raise more money, she added. Similarly, an EPA official told us that a lack of knowledge may lead some districts to spend more than necessary to comply with the requirements on USTs. With incomplete information, district officials may interpret federal requirements in very conservative and narrow ways, believing they have less flexibility than they actually do. Limited knowledge may lead some district officials to mistake long-standing practice for legal requirement, making them more reluctant to adopt new educational initiatives. As a result, districts may lose the opportunity to structure programs as they would like. One district program coordinator told us that when she is not absolutely sure of the requirements, she tends to be cautious “even if somebody at the Department of Education told me it’s OK.” Similarly, a state official said that “lots of mandates are perceived, not actual” because often local school districts do not understand what is required or how much flexibility they actually have. In addition to needing information on what is required, district staff also need to know how to use available flexibility mechanisms (such as waivers) to assist them in improving educational programs. 
According to district staff, federal officials, and other education experts, some districts are not fully aware of the flexibility provisions available to them. Moreover, in 1997 the Department of Education’s Inspector General reported that many districts had insufficient information to take advantage of flexibility provisions such as waivers and consolidated planning. The Inspector General’s results are consistent with some districts’ responses to our questions. For example, one district superintendent responded to our question about federal waivers by saying, “I just never thought it was possible.” Finally, even when district staff have a good understanding of federal requirements, they also need accurate and timely information on the funding they will receive. District superintendents and program directors expressed frustration with the lack of timely information on federal funding allocations. According to these officials, by the time the Congress appropriates the funds, the federal agencies allocate the money to the states, the states allocate money to the districts, and the funds are made available to the district, the district staff have only a brief window of opportunity to plan their programs and make their purchases. For example, in one district we visited, the district budget that was distributed to school and central office staff carried a warning that changes were possible because information on the coming year’s federal allocations was not available. An official from this district told us that some federal grant funds are sometimes received very late, causing the district to cut purchases of textbooks and other supplies. Several district-level program directors we interviewed advocated multiyear funding as a way of reducing the uncertainty of the funding process. School district officials generally agreed with the purposes and goals associated with many federal programs and requirements, including special education and environmental requirements. 
However, they also expressed concern about both the program costs and the administrative costs of implementing federal laws and regulations. Program costs in areas such as special education, environmental requirements, accessibility, and nutrition standards greatly exceed federal assistance, according to school district officials. In addition, district staff told us that the eligibility determination processes and accounting and reporting requirements associated with federal programs can contribute to a heavy administrative load. To a lesser extent, some district officials viewed federal restrictions on raising or spending funds as an issue. Many school district officials we interviewed expressed their support for the purposes underlying certain federal requirements. This widespread support extended to all types of program objectives, including equal educational opportunity and improving instructional quality as well as safety and financial integrity. District staff’s support for federal objectives included not only programs with substantial funding but also requirements where funding is not provided. Special education directors told us that students with disabilities had benefited a great deal from special education. Officials from several districts also said that they believe restrictions on how districts spend federal funds were appropriate and necessary to prevent fraud and abuse. One district official, explaining why he believes targeting and spending restrictions are necessary, told us, “When $11 billion is left on a tree stump I know what happens.” In another district, the facilities manager told us that various environmental requirements (such as those related to asbestos and other chemical hazards) were necessary to protect health and safety. Some district staff also agreed with the goal of promoting better nutrition in school meals. 
Although district officials generally agreed with the need to provide a quality education to children with disabilities, they also expressed concern about the cost of providing these services, especially in the context of limited federal support. District superintendents and special education directors identified a variety of factors as major contributors to the higher costs of special education: (1) the large number of students who require special education, (2) a few students whose very severe conditions require extensive care and support, (3) a lack of assistance from other parties (such as insurers and other public agencies) in providing related services, and (4) the costs of litigation and procedural issues. These same elements are frequently mentioned in the special education literature, although research to measure the impact of each of these factors has not yet been conducted. First, in many districts a large number of students require special education and related services. In the 50 states and the District of Columbia, about 4.8 million students aged 6 to 17 were served under IDEA in the 1995-96 school year. This amounts to approximately 10.6 percent of all students in that age group. However, this percentage can vary considerably across school districts and across states. For example, across states the percentage of students aged 6 to 17 served under IDEA ranged from 7.6 percent in Hawaii to 14.85 percent in Massachusetts. In one district we visited, over 20 percent of students were receiving special education. Second, although many students with disabilities are fully integrated into regular classrooms and require little additional support, a few children with severe disabilities require more extensive—and more expensive—support and care. 
For example, in one district we visited, the special education director told us that the annual cost of caring for two autistic children in the district amounted to approximately $150,000, and four other students with psychiatric disorders were being served at a cost of about $38,000 each. State officials and staff from other districts also pointed to similar high-cost cases as an important factor in the cost of special education. Third, some district staff told us—and state officials confirmed—that districts sometimes had difficulty obtaining financial assistance from public agencies and private insurers for related services (especially health services) they provided to a student with a disability. Under IDEA, the school district is obligated to provide a “free appropriate public education” to any student with a disability, including both special education and “related services.” “Related services” are defined under IDEA as services that may be required to help a child with a disability benefit from special education, including transportation, speech-language pathology and audiology services, physical and occupational therapy, social work, counseling, and medical services. Similarly, assistive technology (such as special computer software, a plastic device to assist in holding a pencil, or other items) may be needed to help the student participate in school. Because many related services are health-related or medical in nature, they may be covered under private health insurance policies or under Medicaid, the government insurance program that provides health care to poor families. In addition, other related services (such as counseling or mental health care) may fall within the purview of other state agencies such as the Department of Mental Health or Social Services. However, some school district officials reported that they had difficulty obtaining assistance or reimbursement from these other sources. 
Even when assistance could be obtained, we were told, it was often insufficient to meet the costs. One district official told us that obtaining funds from Medicaid and other health insurers was “an unbelievable nightmare.” In its 1997 reauthorization of IDEA, the Congress specified that other public agencies should provide services within their purviews and required states to establish an interagency agreement or other mechanism to determine which agency is financially responsible for which services and to otherwise coordinate between agencies. However, it is too soon to determine the impact of this provision on local school districts. Finally, school district and state officials identified dispute resolution—especially litigation—as a contributing factor to special education costs. According to district officials, the possibility of litigation not only creates legal costs, but also can make school districts more cautious and less innovative in dealing with special education issues. As one district official told us, “You always call the lawyer first on any special education-related issue.” The superintendent in another district said that a single due process hearing could cost his district around $8,000 to $10,000 in legal fees and salary costs, regardless of the outcome. The special education director in that district also expressed concern about legal costs. He added that because of the high cost of litigation, he does not believe that he can refuse parents’ requests, even when he believes they are unreasonable. Despite these anecdotal reports of high costs, however, disability advocates often view litigation as necessary to prod some school districts into providing necessary and required services.
One disability advocate we interviewed said that some school districts simply will not provide services “until they are called on it.” District staff expressed support for environmental requirements designed to ensure the safety of students and staff; however, some also worried about the cost of making these needed improvements. When discussing funding issues, many district officials mentioned the need to abate or remove asbestos when renovating, remodeling, or repairing school buildings. Asbestos abatement can be costly, even in the context of a remodeling project. For example, when one district we visited remodeled two buildings in 1991, asbestos removal cost the district a total of $174,376. In another district we visited, district staff told us that they had to postpone repairs to one school’s roof because they could not afford the cost of removing the current roof, which contains asbestos. In our 1994 survey on school facilities, schools reported having spent an average of $43,000 on asbestos in the previous 3 years; furthermore, schools reported needing to spend an average of $71,000 over the next 3 years on asbestos. Asbestos abatement issues will continue to present school districts with difficult and often expensive financial choices, as more schools are remodeled or modernized. School enrollments are growing; at the same time, many of America’s schools are in poor condition, needing not only repair but additional space to accommodate modern instructional techniques like alternative student assessments. Finally, as we reported in 1995, many schools need to put in place the building infrastructure needed to support information technology, including electrical wiring, conduits/raceways for computer cables, and additional electrical outlets. As a result, many school districts will be faced with asbestos abatement expenses as they prepare to modernize their aging buildings. 
Incomplete implementation of existing requirements may also compound schools’ difficulties with asbestos. For example, an EPA study estimated that only 16 percent of the original AHERA inspections were “thorough inspections”; the remaining 84 percent of inspections failed to accurately identify, quantify, or record the location of asbestos-containing material in the school. As a result, a school or district that relied on its AHERA plan to avoid disturbing asbestos might experience asbestos problems in areas it could not or did not anticipate. In addition, federal officials expressed concern that because of lack of communication, turnover in personnel, or other reasons, school or district officials might fail to adequately review their AHERA plan before beginning remodeling work, and as a result they might encounter asbestos-related problems. In some districts, staff also mentioned the difficulty of absorbing the cost of upgrading or removing USTs. Representatives from education associations also cited USTs as an expense that poses problems for some school districts. The cost of upgrading USTs can vary considerably depending on a number of factors, including the condition of the soil, labor costs in the area, type of upgrade, length of downtime, and when the upgrade is done (upgrades done closer to the December 22, 1998, deadline for compliance are expected to be more expensive). EPA has estimated that upgrading a three-tank system may cost from about $13,000 to $20,000, while permanently closing a UST may cost roughly $5,000 to $11,000. Staff in five of the six districts we visited, as well as some of the districts in our phone survey, told us that making buildings accessible for people with disabilities was a major expense.
Staff from one urban district told us that the cost of making all facilities fully accessible would be “astronomical,” and it had been “a strain” to find funds to meet the current accessibility requirements (which do not require making all buildings accessible). The business manager in another district told us that not only were the accessibility renovations costly, but that maintaining equipment such as wheelchair lifts was also expensive. Their comments are consistent with the results of our December 1995 study on school accessibility. About 56 percent of the schools we surveyed for that study believed that they would need to spend some money in the coming 3 years (1995-1998) to improve accessibility. In addition, 53 percent of schools in the survey reported having incurred expenditures in the previous 3 years (1991-1994) to improve accessibility. According to the survey results, schools across the country could have been expected to spend about $5.2 billion on accessibility in the 1995 to 1998 period. Like the district staff we interviewed, school officials in that study reported that many schools were not made accessible because of a lack of funding. Despite their general support for the goal of improving the nutritional value of school meals, some district staff also told us that they believe the new nutrition standards have increased or will increase their costs. According to a study conducted by the Department of Agriculture, the nutrition standards can be implemented without increasing the cost of the meals. However, several food service directors we interviewed disagreed. They told us that training staff in new methods of food preparation has been time-consuming and challenging. Additionally, we were told that some items that used to be made in-house (like salad dressing) were replaced with more expensive commercial versions to meet the new federal standards that limit the fat content of school meals. 
The potential magnitude of this issue is still unclear, however, as many districts are still in the relatively early stages of implementing these requirements. Several of the largest federal programs for school districts are targeted to a particular group, such as students with disabilities or students from low-income families. Although this program design allows federal dollars to be directed to those students most in need, it also requires school district staff to determine which students are eligible for assistance. School district staff described eligibility determination under two of these federal programs as particularly challenging, but for very different reasons. District officials viewed eligibility determination for school lunch and breakfast programs as challenging (despite efforts to help streamline the process), mainly because of the volume of paperwork that must be processed within a short period of time. For special education, the challenge in determining eligibility rests with the individualized nature of an eligibility process that often requires detailed reviews by various professionals. A major challenge in administering federally funded food service programs comes at the beginning of the school year, when school and district staff must identify which students are eligible for free and reduced-price meals. In five of the six districts we visited, more than 25 percent of district children are eligible for free and reduced-price meals. Every one of the food service directors in these districts identified eligibility determination as a major challenge. Superintendents and other officials from school districts in our telephone interviews also commented on the difficulty and expense involved in determining eligibility for free and reduced-price meals, although they continued to participate in the child nutrition programs. Students are eligible for free or reduced-price lunches and/or breakfasts on the basis of income and family size. 
At the beginning of the year, school districts generally distribute applications to school children and their families. Once applications are returned by the parents, district staff process the applications to determine a child’s eligibility. Some district officials told us that because of pride or privacy concerns, some students and parents (particularly at the high school level) do not return applications even if they know they are eligible. To ensure that only eligible students receive benefits, the school district is also required to verify income information for a sample of applications. Once eligibility determinations have been made, district officials must notify the parents and then incorporate the eligibility information into the district’s food service system. All of this processing takes place at the beginning of the school year, a very busy period for all school staff. Food service staff at the districts we interviewed used two strategies to try to limit the work created by this application process. First, districts used a process called “direct certification” to quickly identify and enroll a portion of eligible students. Under direct certification, students from families that receive food stamps or public assistance can be certified as eligible for free or reduced-price meals. Data from public assistance records are matched with school files, obviating the need for parents to fill out applications and for district staff to process the forms. All of the districts we visited used direct certification, and the food service directors were grateful for the opportunity to use this streamlined process. However, district officials added that while direct certification was helpful, they still faced an administrative challenge in processing applications.
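At bottom, direct certification is a records match: families already enrolled in food stamps or public assistance are joined against the school enrollment file, and matched students are certified without a paper application. The sketch below illustrates that matching step in minimal form; the record layouts, field names, and matching key are hypothetical and do not reflect any actual state or district system.

```python
# Hypothetical sketch of the direct certification match: students found in
# the public assistance file are certified automatically; everyone else must
# be reached through the ordinary application process. Field names and the
# (last, first, date-of-birth) matching key are illustrative only.

assistance_records = [
    {"last": "Garcia", "first": "Ana", "dob": "2008-03-14"},
    {"last": "Lee",    "first": "Sam", "dob": "2009-11-02"},
]

enrollment = [
    {"student_id": 101, "last": "Garcia", "first": "Ana", "dob": "2008-03-14"},
    {"student_id": 102, "last": "Jones",  "first": "Pat", "dob": "2008-07-21"},
    {"student_id": 103, "last": "Lee",    "first": "Sam", "dob": "2009-11-02"},
]

def direct_certify(enrollment, assistance_records):
    """Split the roster into directly certified students and students whose
    families must still submit and have processed a paper application."""
    keys = {(r["last"], r["first"], r["dob"]) for r in assistance_records}
    certified, needs_application = [], []
    for s in enrollment:
        if (s["last"], s["first"], s["dob"]) in keys:
            certified.append(s["student_id"])
        else:
            needs_application.append(s["student_id"])
    return certified, needs_application

certified, needs_application = direct_certify(enrollment, assistance_records)
print(certified)          # [101, 103]
print(needs_application)  # [102]
```

The sketch also makes the limitation district officials described visible: every student who falls out of the match (here, student 102) still generates an application to distribute, collect, verify, and process.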
Because many families who qualify for free or reduced-price meals do not receive public assistance or food stamps, district officials must gather and process data on a large number of families who cannot be enrolled through direct certification. For example, in one district we visited, district officials could use direct certification to enroll only about 31 percent of eligible students; for the remaining 69 percent, the district had to distribute, collect, and process applications. Second, some districts have also used a more sweeping strategy to reduce the costs associated with eligibility determination—a universal service or no-fee option. Under these options, schools serve lunches and/or breakfasts to all students at no charge, regardless of whether their family income would qualify them for a free meal. Districts are reimbursed based on the number of qualifying students for the year before they began serving no-fee meals. At the end of the 3-to-5-year program period, districts must generally redetermine students’ eligibility to provide a new, up-to-date basis for reimbursement. Under universal or no-fee service, districts reduce the administrative costs associated with determining eligibility and with counting and claiming meals by reimbursement category. However, the cost of the meals increases as the district pays for meals served to students who would not otherwise be eligible for free meals. In addition, with all meals served for free, more students may eat in the cafeteria rather than bring lunch from home, further increasing the cost of the food service program. For any one district, total costs may increase or decrease depending on the strength of these factors. Two of the districts we visited had one or more schools participating in these programs recently, and they reported different experiences. 
In one district we visited, the no-fee approach was in place at several schools, and district officials told us that universal feeding had lowered their total costs while allowing them to serve more children. In contrast, the food service director in another district reported that they had experimented with universal programs in two schools but discontinued the initiative because the total costs of operating the programs increased at both schools. Eligibility determination for special education can also be a resource-intensive process, according to school district officials. Staff in over half (53 percent) of the districts we surveyed by telephone said that eligibility determination for special education posed challenges for the district. According to district officials, the individualized nature of determining eligibility for special education allows students with disabilities to have educational programs tailored to their specific needs. However, precisely because eligibility and program decisions must consider each child’s unique situation, it is difficult for school districts to streamline the process. Under IDEA, a child is eligible for special education services if he or she is “a child with a disability”—that is, a child who needs special education and related services because of mental retardation, hearing impairments, speech or language impairments, visual impairments, emotional disturbance, orthopedic impairments, autism, traumatic brain injury, learning disabilities, or other health impairments. To help determine eligibility, school districts may obtain expert opinions from various professionals, including doctors, child psychologists, social workers, and others. These professionals may either be on the school district staff or work as independent contractors, but in any case are paid by the school district. 
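The divergent experiences of these two districts follow directly from the balance described above: administrative savings on one side, and on the other the cost of additional meals plus the revenue no longer collected from students who formerly paid. A back-of-the-envelope model (all dollar figures and meal counts below are hypothetical, not drawn from the districts we visited) makes the tradeoff explicit:

```python
# Hypothetical model of the universal/no-fee meal tradeoff. All figures are
# illustrative; the report does not supply district-level cost data.

def no_fee_cost_change(admin_savings, extra_meals, cost_per_meal,
                       lost_paid_revenue):
    """Net change in district cost from serving all meals free.
    Negative means the district saves money overall."""
    added_meal_cost = extra_meals * cost_per_meal
    return added_meal_cost + lost_paid_revenue - admin_savings

# District with large admin savings and modest participation growth:
# 20,000 * 1.75 + 15,000 - 60,000 -> net saving of $10,000.
print(no_fee_cost_change(60_000, 20_000, 1.75, 15_000))   # -10000.0

# District with small admin savings and large participation growth:
# 30,000 * 1.75 + 25,000 - 20,000 -> net increase of $57,500.
print(no_fee_cost_change(20_000, 30_000, 1.75, 25_000))   # 57500.0
```

The same formula can tip either way, which is consistent with one district reporting lower total costs under universal feeding while another discontinued the experiment after costs rose at both participating schools.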
If a child’s parents are not satisfied with the school district’s evaluation, they have the legal right to have another evaluation done by independent professionals, at the school district’s expense. Once a child is determined eligible for special education, a decision must be made about what services and supports the child needs. Each child must have an Individualized Education Program (IEP) that describes the child’s educational performance, the goals for the child in the coming year, and the special educational and support services the child will receive to help meet these goals. The IEP is developed—and must be agreed to—by an “IEP team.” By federal law, the IEP team must include the parents of the child; at least one of the child’s regular education teachers, if the child is or may be participating in a regular classroom; at least one special education provider; and a representative of the school district. The parents or district staff may also invite other individuals to participate. States may impose additional requirements on the composition of the IEP team. For example, Massachusetts requires that, for an older student who may need continuing services outside the school system, a representative from an agency that provides adult services be invited to the IEP meetings at least 2 years before the anticipated exit date. The IEP must be reviewed each year and revised as appropriate by the IEP team, and students must be reevaluated for eligibility at least once every 3 years. This highly individualized process allows students with disabilities to have educational programs tailored to their specific and unique needs. However, school district superintendents and special education directors told us that this process comes at a high price in terms of time and money. For example, prior to the 1997 amendments to IDEA, school districts were not required by federal law to have a regular classroom teacher participate in the development of the IEP. 
Many district officials told us that implementing this new requirement would be difficult and costly, primarily because substitute teachers must be hired to take over classroom duties when regular teachers are attending IEP meetings during normal school hours. Staff from some school districts also told us that, in their opinions, the IDEA definition of eligibility is unclear or too subjective, making eligibility determination more difficult. For example, one special education director told us that in his experience, almost any child who is referred for an evaluation concerning emotional disturbance is given that diagnosis. Officials from other districts echoed this concern with respect to emotional disturbance and disability in general, especially when possible disability diagnoses are raised in the context of a student's inappropriate conduct. The definition of disability in general, and the subcategories of emotional disturbance and learning disability in particular, have been controversial. Some have advocated a relatively narrow definition that would emphasize well-understood conditions, while others have recommended a wider definition to encompass less prevalent and less well-defined, but also potentially debilitating, conditions. Researchers and other experts in varying disciplines often disagree on definitions of disability and related conditions, and the prevalence of certain conditions varies. The ambiguity and subjectivity surrounding this process are a source of confusion and frustration for some district officials. Districts' administrative resources must also be used to meet federal accounting and reporting requirements. Officials from 49 percent of the districts in our telephone interviews identified at least one program's accounting and reporting requirements as problematic or challenging, as did staff in three of the districts we visited.
Many of the comments reflected a general dissatisfaction with having to do the paperwork to comply; however, some school district staff also raised other issues. Some staff members were frustrated by duplication and inconsistency in accounting and reporting requirements across programs. Others expressed the opinion that existing accounting and reporting requirements were not sufficiently focused on program results. Staff at the districts we visited provided specific examples of accounting and reporting requirements they found particularly difficult. Many of these requirements originated at the state level, not from federal laws or regulations. For example, officials from districts in one state mentioned a requirement that equipment purchases over $500 be documented separately when paid for with federal funds. Federal regulations allow for equipment purchases up to $5,000 without additional documentation, but states can impose more stringent requirements (as this state did). A finance director in a district in another state told us that his state had once had the same requirement but had recently raised the threshold to the federal limit of $5,000. This one change, he said, saved the district a great deal of time and trouble. In addition, both state and school district officials told us that auditing and reporting requirements sometimes lag behind federal and state initiatives to provide additional flexibility to school districts. For example, one school district official told us that although the law allows a school to combine federal funds and apply these funds to the entire school under the “schoolwide program” provisions of Title I, state auditors still required separate accounting of funds. Staff in a few districts identified issues related to spending and raising funds as a concern. For example, officials from several districts criticized certain provisions of the Tax Reform Act of 1986. 
These provisions are designed to prevent state and local governments from selling tax-exempt bonds at a low interest rate and then investing the money to earn a higher interest rate rather than spending it on local projects. One district official told us that the tax provisions not only decrease revenues but may also cause districts to spend the funds more hastily, leading to poorer project decisions. Other district staff talked about restrictions on spending funds; they told us they would like more flexibility in how they could use federal dollars. However, not all district officials we interviewed felt this way. One Title I director, for example, said that he supports the provision in Title I that earmarks funds for parental involvement activities because this requirement is “a spur that makes things happen.” In addition to the direct financial impact of federal requirements, district officials also identified several other types of challenges associated with operational requirements. These nonfinancial issues include logistical challenges in meeting federal timelines, challenges in finding qualified and capable staff or providers to implement programs or requirements, and management challenges in balancing competing goals or needs. To ensure that special education students receive the help they need in a timely manner, school districts are subject to procedural timelines. For example, districts are required to hold the IEP meeting within 30 days after the student’s eligibility has been established. After the IEP meeting, the district must provide the agreed-upon services “within a reasonable period of time.” (The Department of Education has stated that it views a period of 60 days from the date of evaluation as “reasonable” in most cases, although this interpretation does not have the force of law.) Districts are also required to review each student’s IEP annually and to reassess each student’s eligibility once every 3 years. 
Other procedural timelines govern the district in disciplining special education students and in changing the student’s placement (whether for disciplinary or other reasons). These timelines can protect students with disabilities by ensuring that they receive the services they are promised. A representative from a disability advocacy group told us that some school districts resort to “stalling” rather than provide agreed-upon services. Similarly, one school district official stated that in the past, not enough had been done to ensure that evaluations were done in a timely manner. District officials also told us, however, that these timelines sometimes create logistical problems for them. For example, some district staff viewed the 60-day time frame for conducting evaluations as unrealistic or difficult. The complex nature of the evaluation process and limited staff (especially in small districts and in rural areas) may contribute to the difficulty. For example, one special education director in a small rural district told us that there was only one child psychologist in the area. When this person became ill, the district was unable to meet its timelines. Staff from several districts we visited (and others we interviewed by telephone) told us that the fixed time periods were too rigid and that they would prefer to have more flexibility. Successfully implementing federal programs can be more difficult when a school district faces a shortage of qualified personnel. For example, in one state, staff from several school districts, especially in rural areas, told us that they had a difficult time finding certified special education teachers. In some states (including Colorado, New York, and Louisiana) 15 to 28 percent of special education teachers are not fully certified. Some rural districts may also have difficulty finding providers of certain related services such as physical therapy and speech pathology, according to district and state officials. 
Similarly, in a Department of Education study, over one-quarter of schools reported that it was very difficult or impossible to fill vacancies for bilingual or ESL teachers. Several district officials and association representatives also cited difficulties in obtaining qualified environmental contractors, again particularly in rural or outlying areas. The facilities manager in one district we visited told us that because there are few qualified asbestos contractors in the area, these contractors have little competition and can charge high prices. Similarly, EPA has warned owners and operators of USTs that the number of qualified contractors is limited. Some district officials expressed concern that certain federal requirements do not match the needs of their communities. These superintendents and district program directors stated that certain federal requirements sometimes supersede established local practices; as a result, they believe they are less able to balance competing educational goals. The most frequently cited example of such a requirement concerns certain provisions of IDEA that limit districts’ ability to discipline special education students by removing them from the classroom. These provisions are designed to prevent districts from denying a free and appropriate public education to a student because of behavior that is related to the student’s disability. As stated in the proposed regulations implementing the 1997 amendments to IDEA, school districts may freely remove a student with a disability from the classroom for up to a total of 10 days in a school year. If a school district wants to remove a child with a disability from the classroom for a cumulative total of more than 10 days in a school year, the district generally must reconvene the IEP team. The district has somewhat more latitude in cases involving weapons or illegal drugs. 
If a student with a disability carries a weapon or illegal drugs to school, the district may move the student outside the school to an alternative educational setting (such as a special program for troubled youth) for up to 45 days. However, if the district invokes this rule, it must also reconvene the IEP team. In addition, if the student’s parents object to the district’s action, the parents have the right to take the district to a due process hearing. These procedural protections may apply not only to students who have an IEP in place but also to some students who have not yet been declared eligible for special education. A student who has not been determined eligible for special education may be entitled to IDEA’s procedural protections if the school district had knowledge of the child’s potential eligibility. If the child’s parents have expressed concern in writing, the behavior or performance of the child demonstrates the need for special education, or district personnel have expressed concern about the child’s performance or behavior, then the district is presumed to have had knowledge of the child’s potential eligibility. School district officials told us that they find discipline issues to be very challenging because of the need to carefully balance the rights and needs of the child with a disability against the rights and the needs of the other children for a safe and disciplined environment. Some superintendents and special education directors stated that they believe the federal rules are too rigid, reducing their ability to strike this delicate balance. The expense, time, and trouble of going through hearings could deter district officials from disciplining students with disabilities, we were told. 
One special education director told us that, in effect, “you cannot discipline these kids, period.” As a result, staff from several districts told us that the IDEA rules created a double standard because students with disabilities are treated differently from students who are not labeled as disabled. For these reasons, district officials expressed concern that the requirements are potentially unfair, could lead to morale problems among staff, and could send the wrong message to both students with disabilities and their peers. For example, one special education director told us that it is very hard for him to explain to parents of other students that students with disabilities have procedural rights that their children do not, even for identical offenses. In another district, the superintendent told us about an incident where a student attacked and injured another child and also threatened the classroom teachers, an aide, and the superintendent. Finally, the school called the police. The police and the district attorney recommended that the student be kept off campus. To implement that recommendation, the district had to have an administrative hearing, rewrite the student’s IEP, and provide homebound instruction for the student. Some food service directors told us that they were concerned about their ability to balance the goal of serving nutritious meals with the goal of serving meals that children will eat and enjoy. Staff told us that they feared the new nutrition requirements would result in less food being consumed by students—because either more students would bring lunch from home or (more commonly) larger amounts of food would be discarded (plate waste). One food service director stated that “we’re supposed to implement something here that they don’t get at home.” Although it is too soon in the implementation process to determine if these effects are occurring on a wide scale, the concern may be well founded. 
In our 1996 survey of school cafeteria managers, the foods with the highest percentage of plate waste were fruits and vegetables. More than half of the middle and high school cafeteria managers believed that increasing the amounts of fruits and vegetables in school meals—as many districts would do to meet the nutrition requirements—would increase plate waste. Similarly, cafeteria managers reported that increasing the number of servings for bread and grains would increase plate waste. Responding to calls for greater flexibility in education programs, the Congress and the Department of Education have implemented several initiatives to give school districts more freedom in designing programs and using federal funds. These efforts—waivers, schoolwide programs, financial flexibility provisions, and consolidated planning—have expanded districts' options within covered programs and requirements. However, the narrow scope of these initiatives precludes them from addressing the key information, financial, and operational issues identified by school district officials. Since 1994, the Congress and the Department of Education have implemented several efforts to provide additional flexibility to school districts. Waivers—temporary exemptions from certain specific federal requirements—can allow districts to suspend some program rules. Several provisions allow school districts additional flexibility in the use of federal funds. Under a consolidated planning process, school districts can submit one plan or funding application that covers several federal programs, rather than prepare separate documents for each program. Some of these flexibility initiatives have been used infrequently, and their use varies considerably across states. In an attempt to provide states and local school districts increased flexibility, the Congress authorized the Department of Education to grant waivers—temporary exceptions to a limited number of federal requirements.
States and school districts can ask the Department to waive certain specific federal requirements when necessary to support local efforts to raise student achievement. The Department can waive certain requirements of (1) ESEA, which contains several key education programs, including Title I; (2) the Perkins Act, which funds vocational education; and (3) the General Education Provisions Act (GEPA) and the Education Department General Administrative Regulations (EDGAR), which contain regulations (such as recordkeeping standards) that apply to education programs in general. In requesting a waiver, school districts are required to describe how a waiver would allow them to improve students' academic performance. Under the Education Flexibility Partnership Program (Ed-Flex), the Department of Education has delegated to 12 states a portion of its authority to waive certain federal requirements. In these states, school districts generally apply to their state education agency for waivers of federal requirements instead of to the U.S. Department of Education. States and school districts outside the Ed-Flex program may also request similar waivers, but those waivers are approved at the federal level by the Department of Education rather than at the state level. The authority to grant waivers is limited to specific education programs. Although these include several of the major education programs (including Title I), other important programs are omitted. For example, although the Department can waive some requirements of the Safe and Drug Free Schools program, it cannot waive any of the requirements of IDEA. Similarly, while the Department can grant waivers under the Eisenhower Professional Development program, programs such as Adult Education and Goals 2000 are excluded. In addition, the Department cannot waive any of the requirements that lie within the purviews of other federal agencies.
As a result, these waivers do not cover environmental requirements, employment requirements, or the requirements of the food service programs. Even within covered programs, many of the requirements that relate to key federal objectives cannot be waived. For example, waivers are not permitted for any federal education requirement relating to (1) health and safety, (2) civil rights, (3) maintenance of effort, (4) comparability of services, (5) the equitable participation of students in private schools, (6) parental participation and involvement, or (7) the distribution of funds to state and local education agencies. In addition, waivers are not permitted if granting a waiver would undermine the purposes of the federal legislation; and for many programs, certain restrictions might be considered an integral part of the program’s purpose. In its September 30, 1997, report to the Congress, the Department of Education reported that it had received relatively few waiver requests from school districts. According to the report, the Department received 375 waiver requests from school districts from school year 1994-95 until just before school year 1997-98. This represents less than 3 percent of school districts in the nation. Similarly, Ed-Flex states granted relatively few waivers during the first 2 years of the project. Of the waivers that have been granted, nearly two-thirds (64 percent) have concerned two Title I issues. The most frequent use of waivers (43 percent) has been to allow school districts to change the way they distribute Title I dollars to schools within the district. Waivers of these targeting restrictions have allowed some districts to provide extra funding for efforts to improve poor-performing schools; waivers of targeting provisions have also allowed districts to continuously fund schools in cases where poverty rates are relatively similar, rather than shifting funds from school to school from year to year. 
Districts have also used waivers to expand school eligibility for schoolwide programs (21 percent). Schoolwide programs allow individual schools to combine their Title I funds with other federal dollars (such as funds from IDEA and Perkins Act vocational education programs) to implement a plan to improve instruction in the whole school, rather than targeting Title I funds to specific children who are thought to be at risk. In recent years, several measures have been adopted to provide school districts with more flexibility in using federal funds: (1) increased use of schoolwide programs, (2) consolidation of administrative funds, (3) the “unneeded funds” provision, and (4) the Cooperative Audit Resolution and Oversight Initiative (CAROI). Each of these measures is designed to give school districts more freedom to apply federal funds according to their needs, within a limited set of federal programs. Some district and state officials told us that schoolwide programs can offer greater flexibility and have helped improve schools. However, not all schools are eligible to participate in schoolwide programs, and some eligible schools choose not to. According to Department of Education estimates, of the approximately 53,000 Title I schools in the United States, about 22,000 are eligible for schoolwide programs, and about 15,000 of these have chosen to participate. Under current law, a Title I school is eligible for schoolwide status only if at least 50 percent of the children enrolled in the school or residing in the school attendance area are from low-income families, or if it has received a waiver. Not all eligible local schools and districts endorse or use schoolwide programs; some prefer to target their Title I funds to those children they believe are at greatest risk. For example, one Title I director told us that in his district many schools do not use schoolwide programs because they believe their programs are working well as currently structured.
Other financial flexibility provisions are more narrowly designed and less frequently used. For example, school districts are allowed (with the approval of the state education agency) to consolidate the administrative funds available to the district under certain federal programs. School districts using this provision can combine the funds set aside for district administration under separate federal programs and apply them to the district’s cost of administering this group of programs, rather than applying each funding source only to the administrative costs for that one program. This provision applies to only six programs: Title I, Migrant Education, Eisenhower Professional Development grants, Technology for Education, Safe and Drug Free Schools, and Innovative Education Program Strategies (Title VI). Other programs, such as Even Start, IDEA, and Goals 2000, are not included. In practice, this provision is frequently unavailable and seldom used. In our survey of state education agencies, about one-third reported that they did not allow local school districts to consolidate administrative funds. Further, even when this alternative was allowed, many school districts elected not to use it. In about two-thirds of the states that offered the option, less than 10 percent of districts chose to use the provision. A similar provision, called the “unneeded funds” provision, allows school districts, with the approval of their state education agency, to shift up to 5 percent of funds across certain federal programs: Migrant Education, Eisenhower Professional Development grants, Technology for Education, Safe and Drug Free Schools, and Innovative Education Program Strategies (Title VI). Other programs, such as Perkins Act, IDEA, and Emergency Immigrant Education, are not included. As with consolidation of administrative funds, the “unneeded funds” option is often unavailable and seldom used.
In our survey of the 50 state education agencies, only about half the states reported that they allowed local school districts to take advantage of the “unneeded funds” provision. Further, even when this alternative was offered, it was rarely used. In about two-thirds of the states that offered the option, no school districts used it; and in only one of the states where it was allowed did more than 10 percent of districts use the provision. Finally, the Department of Education created CAROI in response to concerns from state and district administrators that the manner in which the Department conducted its audits and other monitoring activities might conflict with the recent focus on providing additional flexibility. The Department’s proposal brief states that CAROI should allow the Department to conduct its audit process in a more flexible, useful, and cooperative fashion, and to more efficiently resolve audit findings so as to promote better program performance. However, because this initiative has not yet been fully implemented, its impact on school districts is uncertain. To obtain funding for certain federal programs, school districts must submit plans or applications to either the state or the federal government. These plans or applications generally contain information on how the funds will be used, certifications that federally prescribed procedures will be followed, and assurances that federal funds will be spent in accordance with the purpose of the program. However, district officials and education experts expressed concern that the fragmented nature of the application process is not only unnecessarily resource-intensive but also might impede program coordination. In recent years, the Congress, the federal Department of Education, and the states have attempted to improve the planning and application process for federal programs.
States and school districts are now allowed (and in some cases required or encouraged) to submit consolidated plans—that is, to submit one plan that covers two or more federal programs. In our survey of state education agencies, 24 states said that they require school districts to submit consolidated plans when applying for federal education program funds. Consolidated plans were most often required to include four programs: (1) Title I, (2) the Eisenhower Professional Development program, (3) Safe and Drug Free Schools, and (4) Innovative Education Program Strategies (Title VI). In addition, some states also require school districts to include other programs, such as the Perkins Act vocational education programs, in the consolidated plan. Where consolidated plans are not required, states generally give the school districts the option of choosing consolidated or separate plans. Districts’ use of consolidated plans varied substantially across states. A few states told us that all districts submitted only separate plans, while others reported that all districts submitted consolidated plans. In our telephone survey, districts expressed varying preferences for consolidated or separate plans. Staff in 46 percent of districts said they preferred consolidated plans, 34 percent said they preferred separate plans, and 20 percent had no preference. The reaction was similar in the districts we visited. The program directors and superintendents who preferred consolidated plans often stated that consolidated plans helped promote program coordination. However, others who favored separate plans said they preferred to keep a more detailed focus on individual programs. Federal flexibility initiatives are generally not structured to address the information, funding, and management issues school districts identified as their primary concerns.
Waivers, schoolwide programs and financial flexibility provisions, and consolidated planning neither provide information on federal requirements nor reduce districts’ need for such information. These provisions do not increase federal assistance to school districts, nor do they relieve districts of any of their major financial obligations. Several of these efforts may help districts reduce their administrative costs, but not in those administrative areas that districts identified as key concerns. Similarly, the major flexibility initiatives do not extend to the requirements that posed logistical and management challenges for school districts. As a result, the federal efforts to provide additional flexibility to school districts have limited applicability to those areas that concern district officials the most. Although information-related issues are of key concern to school district officials, the recent flexibility initiatives increase the amount of information districts need, rather than simplify or streamline information on federal requirements. Federal flexibility initiatives do not provide school districts with additional information. Furthermore, because they are not applicable across the range of federal requirements, flexibility initiatives cannot streamline or simplify the information on federal programs. Instead, these efforts actually expand the amount of information school district officials need. To take advantage of the flexibility provisions, district officials must know that the provisions exist and learn how to use them. Gathering this information can be difficult, even if the provisions are relatively simple to use once this information has been obtained. Because these initiatives are program-specific, and each initiative applies to a different set of programs, superintendents and program directors must contend with a complicated set of legislative provisions. Moreover, information on federal flexibility initiatives may be hard to find. 
In 1997, the Department of Education’s Inspector General reported that many states had not provided guidance to school districts on financial flexibility provisions. Similarly, we found that information concerning federal requirements and flexibility initiatives is often missing from state education agencies’ Internet web sites. As shown in table 4.1, of the 50 web sites maintained by the state Departments of Education, only 7 provided information concerning federal waivers, only 2 provided information on the “unneeded funds” provision, and only 20 provided information on consolidated planning. Similarly, many states did not provide any guidance on the implementation of Title I and IDEA—the largest federal education programs and the focus of many school district concerns. Federal flexibility initiatives neither provide more money nor relieve districts’ major obligations. Although school districts cited the limited nature of federal financial assistance as a key issue, flexibility initiatives do not increase the flow of federal funds to school districts. Additional program funds would have to be appropriated from a Congress that, like school districts, must allocate scarce funds among competing worthy objectives. In addition, because these flexibility efforts do not make fundamental changes in the requirements of federal programs and mandates, school districts continue to be responsible for providing required services. None of the requirements that school districts cited as especially costly—special education, environmental requirements, accessibility, or nutrition standards—can be reduced or eliminated under any of the federal flexibility initiatives. For example, waivers cannot be used to suspend federal requirements relating to health and safety or civil rights.
Similarly, although IDEA and Title I funds can be combined and used to support a schoolwide program, the school district is still responsible for providing the appropriate services to disadvantaged and disabled children. As a result, flexibility efforts cannot address school districts’ concerns about substantial program costs in the face of limited resources. Similarly, flexibility efforts can have only limited impact on school districts’ administrative costs. District superintendents and program directors identified two areas—eligibility determination for certain targeted programs and accounting and reporting requirements—as major contributors to the administrative cost of implementing federal requirements. None of the recent initiatives was specifically designed or intended to address these key concerns. Because federal flexibility efforts are limited to a few programs, these initiatives are not able to address problems that arise outside these programs’ parameters. For example, because neither food service nor IDEA is covered, waivers can do nothing to assist school districts in streamlining the time-consuming and costly eligibility determination process for these programs. The narrowness of the flexibility provisions can also hamper districts’ efforts to address administrative issues that cut across many federal programs and requirements. For example, some district officials expressed frustration with the duplication and inconsistency in accounting and reporting requirements across federal programs. Waivers would be unable to address these concerns because no requirements can be waived for many of these programs (including IDEA, food service, and Goals 2000). Similarly, the consolidation of administrative funds can take place only under a few programs. While they do not address districts’ key concerns, waivers and consolidated planning may help some districts streamline processes in other administrative areas. 
For example, some district staff told us that the consolidated planning and application process takes less staff time than is required to file separate applications for each federal program. Similarly, Texas has granted several statewide waivers under the Ed-Flex program that are specifically designed to reduce paperwork at the district level. In addition, the Department’s plans to improve its auditing process may prove helpful in aligning the auditing process with the current focus on program flexibility. Although only a few districts expressed dissatisfaction with restrictions on spending and raising funds, several flexibility initiatives—schoolwide programs, the “unneeded funds” provision, and consolidated administrative funds—are designed (at least in part) to address these issues. However, these flexibility measures can have only limited impact because not all districts can participate, and even for those that do, sometimes only minor changes are allowed. For example, many states do not allow districts to use the consolidated administrative funds or unneeded funds provisions. Even when districts use these flexibility measures, their impact may be small. For example, the unneeded funds provision allows districts to shift only 5 percent of federal program funds across programs. With federal funds generally accounting for a small percentage (7 percent overall) of total education expenditures, the amount of funding covered under this provision is likely to be very small. For one large urban district we visited, the unneeded funds provision could allow district officials to shift $42,513 from one program to another—this out of about $54 million in federal funds and a total district budget of $491.5 million. For smaller districts, the provision may be even less significant. 
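The small scale of the unneeded funds provision can be illustrated with a quick calculation. This is a hedged sketch: the $42,513, $54 million, and $491.5 million figures are from the district example above, while the implied program allocation is back-calculated from the 5 percent cap and is therefore an assumption.

```python
# Sketch of the scale of the "unneeded funds" provision, which lets a
# district shift up to 5 percent of a federal program's funds across
# programs. The $850,260 program allocation below is back-calculated
# from the reported $42,513 shiftable amount (an assumption); the other
# figures are from the report's large-urban-district example.

CAP = 0.05  # maximum share of program funds that may be shifted

def shiftable_amount(program_allocation):
    """Maximum dollars a district may move across programs."""
    return CAP * program_allocation

program_allocation = 850_260   # assumed: 42,513 / 0.05
federal_funds = 54_000_000     # reported federal funds for the district
total_budget = 491_500_000     # reported total district budget

shiftable = shiftable_amount(program_allocation)
print(f"shiftable: ${shiftable:,.0f}")                      # $42,513
print(f"{shiftable / federal_funds:.3%} of federal funds")  # ~0.079%
print(f"{shiftable / total_budget:.4%} of total budget")    # ~0.0086%
```

As the output suggests, even for a district with $54 million in federal funds, the shiftable amount is well under a tenth of one percent of those funds.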
The restricted scope of flexibility initiatives also precludes them from addressing several of the logistical and management issues that school districts identified as key issues, such as procedural timelines for evaluating the needs of special education students and finding qualified personnel to implement key federal programs. Because the flexibility initiatives do not extend to IDEA requirements, districts cannot use these provisions to address their concerns with timelines. In addition, the federal government is not positioned to reduce the shortages of special education teachers, bilingual or ESL teachers, or qualified environmental contractors. Consequently, districts must rely on their own resourcefulness to overcome the management challenges they face in these areas. For several decades the federal government has provided guidance and financial support to state and local education systems. This assistance has frequently taken the form of targeted programs and mandates designed to advance a variety of federal goals (such as ensuring equal educational opportunity). Recently, however, teachers, parents, and the Congress have emphasized education reform initiatives with the broader and more challenging goal of improving education for all students. Although this emphasis on overall outcomes has enjoyed widespread support, controversy has arisen over the role existing federal programs and mandates will play in achieving this broad purpose. Some individuals—both educators and legislators—believe that loosening or eliminating some federal requirements will enable local school districts to direct more resources to the classroom and to adopt more innovative instructional approaches. However, others have expressed concern that the purposes underlying federal programs (such as ensuring equal educational opportunity) could be compromised if federal requirements are loosened or eliminated. 
As the education reform movement has accelerated, interest in providing additional flexibility has heightened in both the executive and legislative branches. Some of school districts’ key concerns—particularly the amount of financial assistance provided to school districts—lie beyond the scope of the flexibility initiatives that have been implemented to date. Alleviating these concerns may require more than providing additional flexibility within the existing federal program structure. Other key concerns, including informational and procedural issues, could be partially or fully addressed in the context of flexibility, although current initiatives are not targeted toward these issues. Our findings on school districts’ experiences with federal requirements and regulatory flexibility suggest four lessons to be considered in refining existing federal initiatives and designing new ones. 1. School districts’ concerns are wide ranging rather than centered on a single program or issue. To address these concerns successfully, federal initiatives must also be multifaceted. School districts expressed a wide range of concerns, covering numerous federal programs and reflecting a broad variety of implementation issues. These issues extend to all facets of providing educational and support services—from planning educational programs at the beginning of the year to auditing the accounts at year’s end. Although much of the public debate on education reform has focused on procedural and financial flexibility, such as easing restrictions that govern the use of funds, district superintendents and program directors identified obtaining better information, streamlining eligibility determination and other administrative processes, and obtaining more procedural flexibility as key issues. Because the implementation issues school districts face cut across program and agency lines, initiatives that are narrowly focused can at best provide only limited assistance. 
Although it may be difficult to design, a broader flexibility initiative (or set of initiatives) that is simple to understand and easy to use, extended across related programs, and widely applicable to many school districts would be better positioned to address districts’ concerns. 2. School districts need—and many lack—adequate information to successfully implement federal requirements and take advantage of flexibility options. Strengthening the knowledge base will be key to the success of both current and future flexibility efforts. As complex organizations, school districts face a large and complicated body of federal requirements that affect many operational areas. According to district officials, the volume and complexity of federal requirements make it difficult to keep track of what is needed to comply and what flexibility is available. With inadequate information, district staff may be more conservative in their interpretation and less innovative in their approach. In addition, district officials cannot take advantage of flexibility mechanisms if they don’t know what flexibility is available or how to apply it to their programs. Experience with various flexibility initiatives—from federal and state waivers to consolidated planning—suggests that increasing awareness among district officials is a crucial factor in how frequently these provisions are used and how helpful they are to school districts. 3. Because states play a key role in overseeing and administering federal programs, in order for flexibility initiatives to succeed, state education agencies must be able and willing to help school districts implement them. For local school districts, state education agencies are the main source not only for information and technical assistance but also for monitoring and oversight. In addition, states impose their own requirements on school districts. 
Some of these, such as teacher certification standards, may interact with federal requirements but are not associated with specific federal programs. Others, such as requirements concerning IEP forms, arise out of the state’s role in administering federal programs. Where state requirements are stricter than federal ones, initiatives to loosen federal requirements may not have the desired impact unless related state requirements are also modified. As a result, federal legislation alone may be insufficient to create regulatory flexibility that reaches down to the local level. 4. The Congress and the Department of Education face potential conflicts between local officials’ desire for flexibility and the important purposes underlying federal programs and mandates. District officials recognized the benefits of federal requirements to students, parents, and educators. Requirements that students with disabilities receive the additional help they need to achieve in school were widely supported, as were many health and safety requirements. Educators and advocates alike have expressed concern that the opportunity to achieve these goals could be lost if too many federal requirements are loosened or lifted. The Congress and the Department of Education may sometimes face a tension between providing flexibility and still ensuring equal educational opportunity, promoting high-quality education, guarding against health and safety hazards, or protecting the integrity of federal funds. In education, where such outcomes are often difficult to identify and measure, it may be especially difficult to ensure that these goals are realized without procedural requirements. Consequently, in some program areas federal authorities may choose to provide local officials with less discretion than they may desire.
Pursuant to a congressional request, GAO reviewed the: (1) major federal requirements that affect school districts; (2) issues that school districts face in implementing these requirements; and (3) impact of the Department of Education's flexibility initiatives on school districts' ability to address these implementation issues. GAO noted that: (1) the wide range of federal requirements that affect school districts reflects many different policy goals and program objectives; (2) many of these federal requirements--especially those that most directly affect teaching--come with federal dollars, but others do not; (3) federal laws and regulations affect school districts in all their varied activities; (4) federal requirements are augmented by state and local requirements and court decisions; (5) district officials generally expressed support for federal programs and mandates, recognizing the importance of goals such as ensuring school safety and promoting equal educational opportunity; (6) at the same time they noted their concerns with implementation issues that made achieving these goals more difficult; (7) rather than focusing on a single federal program or requirement, these implementation issues extend across several broad areas, including the: (a) difficulty in obtaining accurate, timely, and sufficiently detailed information about federal requirements and federal funding; (b) limited funds available to meet program and administrative costs; and (c) logistical and management challenges presented by certain requirements; (8) in the past 5 years, several initiatives have been designed and implemented to provide more flexibility to school districts; (9) however, some of these initiatives have not been widely used by the districts; (10) in addition, because they are narrowly structured, these flexibility initiatives generally do not address school districts' major concerns; (11) although information-related issues are very important to school district officials, the 
recent flexibility initiatives increase the amount of information districts need, rather than simplifying or streamlining information on federal requirements; (12) federal flexibility efforts neither reduce districts' financial obligations nor provide additional federal dollars; (13) because the flexibility initiatives are limited to specific programs, their ability to reduce administrative effort and streamline procedures is also limited; and (14) broadening the scope of federal flexibility efforts, however, raises concerns about whether the underlying goals of federal programs can be achieved without the guidance of specific regulatory provisions.
The Robert T. Stafford Disaster Relief and Emergency Assistance Act, as amended, established the process by which a state may request a presidential disaster declaration. According to the act, the President can declare a major disaster after a governor or chief executive of an affected tribal government finds that a disaster is of such severity and magnitude that effective response is beyond the capabilities of the state and local governments and that federal assistance is necessary. The act also generally defines the federal government’s role during disaster response and recovery and establishes the programs and processes through which the federal government provides disaster assistance to state, tribal, territorial, and local governments as well as certain nonprofit organizations, and individuals. Figure 1 shows the number of major disasters declared during fiscal years 2004 through 2013. See appendix II for the number of disaster declarations during fiscal years 2004 through 2013 by jurisdiction. Major disaster declarations can trigger a variety of federal assistance programs and activities for governmental and nongovernmental entities, households, and individuals. FEMA’s programs and activities include: PA, Individual Assistance, Hazard Mitigation, and Mission Assignment. FEMA tracks DRF obligations for major disasters in five categories, which consist of four of the agency’s programs and activities, and its administrative costs. Table 1 highlights that, as of April 2014, FEMA obligated $95.2 billion for the 650 major disasters declared during the period of our review. See appendix II for obligations for disaster declarations during fiscal years 2004 through 2013, by jurisdiction. FEMA defines administrative costs for major disasters as costs that support the delivery of disaster assistance. 
Examples of FEMA administrative costs include the salary and travel costs for the disaster workforce, rent and security expenses associated with field operation locations, and supplies and information technology for field operation staff. According to FEMA officials, the agency’s administrative costs for major disasters primarily support field operation activities; however, administrative costs can also be incurred at FEMA regional offices, headquarters, and other locations, such as FEMA’s National Processing Service Center—that is, a service center where FEMA officials register individuals and families for Individual Assistance. A detailed description of FEMA’s administrative cost categories is provided in an appendix. PA is the largest of FEMA’s major disaster programs, comprising nearly half the funds obligated from the DRF for major disasters. PA funds debris removal and the repair, replacement, or restoration of disaster-damaged facilities. PA also funds certain types of emergency work designed to, among other things, eliminate immediate threats to lives, public health and safety, and property. After a disaster, FEMA works with the affected state, tribal, or territorial government to set up a joint field office (JFO) at or near the disaster site to administer PA grants. FEMA staffing usually consists of (1) permanent full- or part-time employees, (2) nonpermanent reserve staff, and (3) technical assistance contractors. In addition, the JFO may be staffed by personnel from the affected government’s emergency management office. The majority of FEMA staff assigned to the JFO consists of nonpermanent reserve staff who are typically deployed for short-term assignments (i.e., 90 to 120 days). Technical assistance contractors may provide specialized assistance in areas such as structural, mechanical, and civil engineering. Federal, state, tribal, territorial, and local officials each play a significant role in carrying out the steps in the PA funding process. 
In this process, the state, tribal, or territorial government is the “grantee” that manages and disburses PA funds, and the affected local governments or equivalent entities are typically the “subgrantees” that receive the funds. A subgrantee can be any eligible state or local government entity—for example, a school district, county, or township—or certain nonprofits. After a disaster is declared, FEMA and state, tribal, or territorial representatives brief applicants on the program, and FEMA, among other things, assigns a PA coordinator, project officers, and technical specialists to assist the applicant through the PA funding process. After determining the subgrantees and projects eligible for funding, FEMA works with the grantee and subgrantees to develop project worksheets that describe the scope of work and estimated cost. FEMA also conducts historic preservation and environmental reviews as part of its approval process. See appendix IV for further details on the PA process. FEMA reimburses grantees and subgrantees for some expenses associated with administering PA grants. FEMA divides these reimbursements into two categories: section 324 management costs (management costs) and direct administrative costs. Management costs are any indirect costs, any administrative expense, and any other expense not directly chargeable to a specific project. FEMA defines direct administrative costs as costs incurred by the grantee or subgrantee that can be identified separately and assigned to a specific project.

Examples of Management Costs and Direct Administrative Costs

Management costs
Disaster: DR-4068
State: Florida
Declared: July 3, 2012
Description: Tropical Storm Debby
Amount: $3,910,571
Activity examples: (1) maintaining a website to house documentation and closeout information for Public Assistance (PA) activities, and (2) paying office rent costs for space occupied by PA staff. 
Examples of management costs: activities related to attending and participating in the applicant’s briefing for the overall PA grant, travel expenses related to general support and not tied directly to one specific project, and activities related to attending, coordinating, and responding to correspondence and meeting requests from FEMA and grantee officials for the overall program and not specific to one project.

Direct administrative costs
Disaster: DR-1971
State: Alabama
Declared: April 28, 2011
Description: severe storms, tornadoes, straight-line winds, and flooding
Amount: $1,987,670
Activity examples: (1) developing, writing, and reviewing project documentation; (2) assessing project eligibility; (3) gathering and reviewing payroll data and invoices; and (4) performing data entry.

Examples of direct administrative costs: activities related to developing a detailed site-specific damage description component of one specific project worksheet, activities related to visiting, surveying, and assessing sites for one specific project, and travel expenses related to one specific project.

In fiscal year 2008, FEMA changed the way it reimburses grantees and subgrantees for management costs. In November 2007, pursuant to the Disaster Mitigation Act of 2000, FEMA implemented an interim final rule that established rates for reimbursing grantee and subgrantee management costs associated with PA grants. For major disaster declarations, FEMA set management costs at a rate of no more than 3.34 percent of the federal share of projected eligible program costs for assistance. In addition, FEMA officials stated that the 2007 rule changed the process by which grantees and subgrantees received management costs reimbursement. Figure 2 outlines the funding process for grantees and subgrantees for PA administrative costs. 
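The 3.34 percent cap on management costs described above amounts to a simple ceiling calculation. The sketch below is illustrative only; the function name and the sample federal share are assumptions, not FEMA terminology or data.

```python
# Sketch of the section 324 management costs cap for major disasters:
# reimbursement of no more than 3.34 percent of the federal share of
# projected eligible program costs. The example federal share is
# hypothetical.

MANAGEMENT_COST_RATE = 0.0334  # cap under the 2007 interim final rule

def max_management_costs(federal_share):
    """Upper bound on reimbursable grantee/subgrantee management costs."""
    return MANAGEMENT_COST_RATE * federal_share

# A hypothetical disaster with a $100 million federal share:
print(f"${max_management_costs(100_000_000):,.0f}")  # roughly $3.34 million
```

The cap is a ceiling, not an entitlement: actual reimbursement depends on the costs the grantee and subgrantees incur and document.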
According to agency officials, FEMA set the management costs rate at 3.34 percent after determining that it was approximately the historical average of all management and administrative costs. See appendix V for further details on the reimbursement process for management costs. FEMA obligated $12.7 billion from the DRF to cover its administrative costs for the 650 major disasters declared during fiscal years 2004 through 2013. As shown in figure 3, the $12.7 billion represents 13 percent of the $95.2 billion obligated from the DRF for the 650 major disasters during this time period. Figure 3 provides a breakout of FEMA’s cost categories for major disasters and how much was obligated for each category from the DRF. While figure 3 shows the total amount FEMA obligated for its administrative costs from the DRF, it is significantly affected by large disasters such as Hurricane Katrina. For example, of the $12.7 billion FEMA obligated for its administrative costs during fiscal years 2004 through 2013, $5.4 billion, or 43 percent, related to Hurricane Katrina. Therefore, the 13 percent shown in figure 3 is significantly affected by one disaster. Figure 3 also does not provide information about FEMA’s administrative costs for individual disasters. One measure FEMA uses to understand administrative costs for a single major disaster is the disaster’s administrative cost percentage—that is, administrative cost obligations divided by total obligations. A benefit to analyzing administrative cost percentages is that FEMA can assess whether a reasonable amount of administrative costs was obligated for a single disaster. Our analysis shows that FEMA’s average annual administrative cost percentages for major disasters have doubled since fiscal year 1989. Specifically, we calculated the average administrative cost percentage for each fiscal year from 1989 to 2013 to identify long-term trends in FEMA’s administrative costs. 
For example, to determine the average for fiscal year 2013, we calculated the administrative cost percentage for each of the 65 major disasters declared in fiscal year 2013. Next, we calculated the average of the 65 administrative cost percentages. As shown in figure 4, FEMA’s average administrative cost percentage was 18 percent in fiscal year 2013, more than double the average of 7 percent in fiscal year 1989. FEMA categorizes major disasters using three event levels—small, medium, or large—based on the projected amount of federal funding to be obligated for disaster assistance. Small disasters have projected disaster assistance of less than $50 million, medium disasters have projected disaster assistance from $50 million to $500 million, and large disasters have projected disaster assistance of $500 million to $5 billion. FEMA’s administrative cost percentages are typically higher for small disasters than for large disasters. For example, FEMA may be able to achieve economies of scale for relatively large disasters, thereby reducing the related administrative cost percentage. Examples of administrative costs and other DRF obligations for a small and a large disaster are provided in appendixes VI and VII. Our analysis shows that FEMA’s average administrative cost percentage at least doubled for all sizes of disasters—small, medium, and large. For example, for small disasters, FEMA’s average administrative cost percentage was 20 percent during the fiscal year 2004-to-2013 period, double the average of 10 percent during the fiscal year 1989-to-1998 period. Table 2 shows FEMA’s average administrative cost percentage for small, medium, large, and all disasters combined. We also found a wide variance in administrative cost percentages for disasters of similar sizes in fiscal years 2004 to 2013, and, for small disasters, instances where FEMA charged the DRF more for administrative costs than for assistance. 
Specifically: For the 518 small disasters (total disaster assistance of less than $50 million), administrative cost percentages averaged 20 percent and ranged from less than 1 percent to 74 percent. Sixteen, or 3 percent, of the 518 small disasters had administrative costs that equaled or exceeded the disaster assistance. For the 112 medium disasters (total disaster assistance from $50 million to $500 million), administrative cost percentages averaged 12 percent and ranged from 1 percent to 29 percent. None of the medium disasters had administrative costs that equaled or exceeded disaster assistance. For the 20 large disasters (total disaster assistance from $500 million to $5 billion), administrative cost percentages averaged 13 percent and ranged from 3 percent to 25 percent. None of the large disasters had administrative costs that equaled or exceeded disaster assistance. As we reported in September 2012, FEMA created a management guide in November 2010 that included targets for administrative cost percentages; however, the agency does not consider the targets formal guidance and does not hold its officials accountable for meeting the targets. FEMA did not require that the targets be met because, according to FEMA officials, the agency’s intent was to provide general guidance rather than to stipulate a prescriptive policy or formula. Further, according to FEMA’s management guide, one limitation of the targets is that “there will be situations where the levels are inappropriate due to extraordinary circumstances.” As mentioned earlier, FEMA categorizes major disasters using three event levels—small, medium, or large—based on the amount of federal funding for the disaster, and the 2010 guidance established target ranges for administrative cost percentages for each category: Small disasters have an administrative cost percentage target range of 12 percent to 20 percent. 
Medium disasters have an administrative cost percentage target range of 9 percent to 15 percent. Large disasters have an administrative cost percentage target range of 8 percent to 12 percent. To help FEMA control its administrative costs, we recommended in 2012 that FEMA implement goals for administrative cost percentages and monitor performance to achieve these goals. FEMA has not yet implemented our recommendation and plans to provide us a response by the end of the second quarter of fiscal year 2015. Table 3 shows the potential reduction in administrative costs had FEMA met the high and low ends of its target range for the 650 major disasters declared during fiscal years 2004 through 2013. For example, 259, or 40 percent, of the 650 major disasters exceeded the high end of FEMA’s target range for administrative cost percentages. Had FEMA’s administrative cost percentages for the 259 major disasters equaled the high end of the target range, obligations for administrative costs would have been $2.3 billion lower. FEMA did not develop the target ranges until November 2010, and FEMA does not require its officials to stay within or below the target ranges. As a result, this analysis does not indicate the amount that FEMA should have saved during this period. Rather, the analysis can be used as an indication of potential cost savings in the future, if the target ranges are met. Table 4 shows the potential reduction in administrative costs had FEMA met the high and low ends of its target range for the 209 disasters declared during fiscal years 2011 through 2013—that is, the time period since FEMA created its administrative cost targets. As we reported in 2012, FEMA officials said that it was difficult to identify the principal factors causing increases in administrative costs since fiscal year 1989 because of the complexities associated with the underlying factors, particularly in light of the span of time involved. 
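The administrative cost percentage, the disaster size categories, and the comparison against the high end of FEMA’s 2010 target ranges can be sketched as follows. This is a hedged illustration: the target-range figures are from the guidance described above, but the function names are assumptions, the sample obligations are invented, and the sketch simplifies by using total obligations as the measure of disaster size.

```python
# Sketch of the calculations described above: a disaster's administrative
# cost percentage, its size category, and the potential reduction had the
# percentage been held to the high end of FEMA's 2010 target range.
# Sample figures are illustrative, not actual FEMA data; total obligations
# stand in for disaster assistance as the size measure (a simplification).

TARGET_HIGH = {"small": 20.0, "medium": 15.0, "large": 12.0}  # percent

def admin_cost_pct(admin_obligations, total_obligations):
    """Administrative cost obligations as a share of total obligations."""
    return admin_obligations / total_obligations * 100

def size_category(total_assistance):
    """FEMA event level based on disaster assistance, in dollars."""
    if total_assistance < 50e6:
        return "small"
    if total_assistance <= 500e6:
        return "medium"
    return "large"  # $500 million to $5 billion

def excess_over_target(admin_obligations, total_obligations):
    """Dollars above the high end of the target range (0 if within it)."""
    category = size_category(total_obligations)
    cap = TARGET_HIGH[category] / 100 * total_obligations
    return max(0.0, admin_obligations - cap)

# A hypothetical medium disaster: $40 million admin of $200 million total.
pct = admin_cost_pct(40e6, 200e6)       # 20 percent
over = excess_over_target(40e6, 200e6)  # $10 million above the 15% cap
print(pct, over)
```

Summing such per-disaster excesses across all disasters that exceeded the high end of their target range is the logic behind the potential-reduction figures in tables 3 and 4.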
We further reported that FEMA officials stated that the agency has evolved from one originally focused on grants management to an organization implementing increasingly complex programs with an increasingly sophisticated and specialized workforce. In addition, FEMA officials told us that there is a tension between efficiency and effectiveness in delivering disaster assistance—for example, reducing administrative costs could negatively affect FEMA’s ability to help individuals and families recover from a disaster. FEMA officials provided information on several ongoing efforts to better manage and control its administrative costs, as discussed below. FEMA’s 2010 guidance. In November 2010, FEMA issued guidelines intended to better control its administrative costs. The guidance states that FEMA had “witnessed spiraling administrative costs and staffing levels” since fiscal year 1989, and that all agency managers would be accountable for controlling administrative costs. It also noted that staffing levels had risen significantly faster than disaster activity, with a tripling in the average number of FEMA staff deployed per year. For example, the average number of FEMA staff deployed to a disaster increased significantly, from fewer than 500 in fiscal year 1989 to over 1,500 in fiscal year 2009. The memorandum noted that FEMA had provided limited guidance for controlling costs and staffing levels (i.e., the main driver of administrative costs), and that FEMA managers did not have uniform guidance to control administrative costs. The guidance also included best practices for staffing levels and staff deployment times, and considerations for whether to use less costly Virtual JFOs instead of physical offices. Moreover, the guidance set targets for administrative cost percentages. According to FEMA officials, the guide is used to train field leadership and reflects the importance of administrative cost management. 
However, FEMA did not require that this guidance be followed or its targets be met because the agency’s intent was to ensure flexibility and provide general guidance rather than to stipulate a prescriptive policy or formula. FEMAStat. In 2012 and 2013, the agency’s FEMAStat team collected and analyzed data on administrative costs associated with managing disasters. The FEMAStat team found that FEMA (1) did not leverage benchmarks and information about past deployments to inform staffing decisions; (2) could not readily access data on cost drivers, such as workload forecasts and local capacity, for analytical purposes; and (3) had not defined when the less costly Virtual JFOs should be used. It also found little predictability in the number of FEMA PA staff deployed to disasters—for instance, one disaster had roughly three times as many PA staff as another similar-sized disaster. The FEMAStat team concluded that while it had demonstrated that data from past disasters could be used to control costs, FEMA had to determine who has the authority and responsibility to monitor and question these staffing levels, as well as put into place a process to formalize this practice. As of September 2014, FEMA officials said that the agency was working to implement the FEMAStat recommendations. As a result, it is too early to assess whether this effort will reduce administrative costs. Strategic plan goal. In July 2014, FEMA issued its Strategic Plan for 2014-2018, which includes a goal to reduce its average annual percentage of administrative costs, as compared with total program costs, by 5 percentage points by the end of 2018. According to FEMA officials, the goal to reduce FEMA’s administrative costs reflects its importance to FEMA. FEMA officials stated that the agency determined that a 5 percentage point reduction was aggressive but reasonable based on its review of administrative costs for previous disasters. 
In conjunction with this goal, FEMA officials also told us that they revised their definition of administrative costs as of October 1, 2014. For example, according to FEMA officials, the previous definition of administrative costs included the cost associated with urban search and rescue operations; however, they said these costs would be better defined as operational. As of October 2014, FEMA had yet to determine the starting percentage from which it will reduce 5 percentage points. According to FEMA officials, the agency will identify this starting point during fiscal year 2015 using the new definition of administrative costs. Other ongoing steps to reduce administrative costs. FEMA has several other ongoing actions that have helped, or could eventually help, to reduce administrative costs, as follows: Placing greater scrutiny on the amount of overtime worked by disaster assistance personnel. For example, according to FEMA, since 2011, FEMA has decreased the average overtime per person from almost 16 hours per pay period to 7.5 hours per pay period. Eliminating unnecessary or duplicative telecommunication services. For example, according to FEMA, since April 2013, FEMA has reduced monthly telecommunication costs by more than $1 million per month. Implementing new flexibilities and pilot programs authorized by the Sandy Recovery Improvement Act of 2013 that could help reduce its administrative costs. For example, the act authorizes FEMA to implement alternate procedures for administering the PA program that allow FEMA to provide grants based on up-front estimated costs for PA projects. According to FEMA, this could alleviate some of the administrative burden for FEMA and grantees during the PA process. However, according to FEMA officials, these efforts are in the very early stages, and FEMA will not know for several years whether these pilot programs actually lead to reduced administrative costs. 
FEMA officials do not have an integrated plan for how they will better control and reduce administrative costs for major disasters, and have not identified the office or officials accountable for overseeing administrative costs. FEMA’s November 2010 management guide stated that “little emphasis has been placed on managing overall costs.” Since the guide was created, FEMA officials have taken a number of steps intended to better control and reduce the agency’s administrative costs for major disasters. However, despite FEMA’s efforts since November 2010, FEMA’s average administrative cost percentages have not significantly decreased. According to FEMA officials, they have not developed a plan that integrates the steps they are taking to better control and reduce costs, and that highlights clear roles and responsibilities, performance metrics, milestones, and a monitoring system to assess their progress. In addition, according to FEMA officials, the agency has not designated an office or senior officials accountable for controlling administrative costs. For example, as part of the FEMAStat initiative, FEMA officials highlighted that it was unclear who had authority and responsibility to monitor and question staffing levels, even though staffing is the largest driver of administrative costs. According to A Guide to the Project Management Body of Knowledge, which provides standards for project managers, specific goals and objectives should be conceptualized, defined, and documented in the planning process, along with the appropriate steps, time frames, and milestones needed to achieve those results. 
According to the Standards for Internal Control in the Federal Government, managers should compare actual performance to planned or expected results throughout the organization and analyze significant differences; it also states that an agency’s organizational structure should clearly define key areas of authority and responsibility and establish appropriate lines of reporting. Designating an office or senior official with sufficient time, responsibility, authority, and resources can help improve FEMA’s accountability and progress. Until a plan that integrates FEMA’s initiatives is created, FEMA will continue to lack assurance that it has an effective and efficient plan for reaching its goals to better control and reduce costs. During our interviews with FEMA officials, they agreed that identifying agency officials who will be accountable, and creating a plan, would be beneficial to reach the agency’s goals to better control and reduce administrative costs. In analyzing FEMA administrative costs, we found that the agency does not track or analyze its administrative costs for major disasters by individual DRF program—including PA, Individual Assistance, and Hazard Mitigation. For example, FEMA could tell us how much it obligated for its own administrative costs, in total, for the Hurricane Sandy disaster response, but not how much it has obligated for its administrative costs related to each DRF program. Without administrative cost data by program, neither we, nor FEMA, can determine whether increases in administrative cost percentages since fiscal year 1989 were greater for one program than for another, or greater for certain components, such as staffing for a particular DRF program. For example, in responding to disasters, costs may be higher for providing individual assistance than for other programs, which can drive up the administrative costs for the entire disaster.
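If administrative obligations were tagged by DRF program, the per-program comparison described above could be computed with a simple aggregation along the following lines. The records, program breakdown, and dollar amounts below are hypothetical illustrations, not FEMA data:

```python
# Sketch of computing administrative cost percentages by DRF program,
# assuming obligations were tagged by program. All figures are hypothetical.
from collections import defaultdict

# (disaster_id, program, cost_type, obligations_in_millions)
obligations = [
    ("DR-1", "Public Assistance",     "program", 800),
    ("DR-1", "Public Assistance",     "admin",   100),
    ("DR-1", "Individual Assistance", "program", 200),
    ("DR-1", "Individual Assistance", "admin",    60),
    ("DR-1", "Hazard Mitigation",     "program", 100),
    ("DR-1", "Hazard Mitigation",     "admin",    10),
]

def admin_pct_by_program(records):
    """Administrative obligations as a percentage of total (program plus
    administrative) obligations, broken out by DRF program."""
    totals = defaultdict(float)
    admin = defaultdict(float)
    for _, program, cost_type, amount in records:
        totals[program] += amount
        if cost_type == "admin":
            admin[program] += amount
    return {p: round(100 * admin[p] / totals[p], 1) for p in totals}

print(admin_pct_by_program(obligations))
```

With data in this shape, the kind of question raised above (for example, whether individual assistance carries a higher administrative percentage than the other programs for a given disaster) reduces to a single dictionary lookup.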
Collecting and analyzing these data could allow FEMA to more effectively identify the drivers of its administrative costs and help control its administrative expenses. According to FEMA officials, gathering administrative cost data by DRF program would require additional resources and technical changes; however, the agency has not assessed the costs versus the benefits of tracking the data. FEMA officials stated that assessing the costs and benefits would be helpful, and they agreed that administrative costs should be tracked by program. According to Standards for Internal Control in the Federal Government, program managers need financial data to determine whether they are meeting their goals for accountability for effective and efficient use of resources. Further, FEMA’s 2014-2018 Strategic Plan emphasizes the need for data-driven decision making. By assessing the costs and benefits of tracking administrative cost data by DRF program, FEMA could determine whether such data could be useful for identifying long-term trends, more effectively controlling its administrative costs, and better tailoring its administrative costs to program delivery. For the 650 major disasters declared during fiscal years 2004 through 2013, FEMA obligated $1.7 billion to reimburse grantees and subgrantees for all types of administrative costs associated with PA grants. The $1.7 billion is 2 percent of the $95.2 billion obligated from the DRF during this period. Figure 5 provides total DRF obligations, including grantee and subgrantee administrative costs, for the 650 major disasters declared during fiscal years 2004 through 2013 by FEMA cost category. Administrative costs reimbursed to grantees and subgrantees, as a percentage of total PA funding, ranged from 0.9 percent to 4.7 percent per year, as shown in table 5. In fiscal year 2008, FEMA implemented a rule that changed the administrative reimbursements available for grantees and subgrantees of PA grants.
Under the rule, grantees and subgrantees are eligible for two forms of administrative reimbursements: management costs and direct administrative costs. As shown in table 6, FEMA obligated $383 million in management costs and $132 million in direct administrative costs for major disasters declared during fiscal years 2008 through 2013. Many PA projects for major disasters declared during fiscal years 2008 through 2013 have not been completed; thus obligations for management costs and direct administrative costs in tables 5 and 6 will likely increase as these projects are completed. In November 2007, FEMA implemented a rule change that was intended to simplify and clarify the method it uses to reimburse grantees and subgrantees for certain costs incurred while administering PA grants. FEMA officials and grantees we interviewed were generally satisfied with the revised process for claiming and reimbursing management costs. However, according to FEMA officials, the 2007 rule change led to an unexpectedly high rate of claims for direct administrative costs. In addition, the lack of clarity and specificity in FEMA’s policies and guidance for direct administrative costs has led to increased complexity and workload for FEMA, grantees, and subgrantees. FEMA’s 2007 rule was intended to simplify and clarify the method FEMA uses to reimburse grantees and subgrantees for certain costs incurred while administering PA grants. Specifically, the rule replaced three funding categories with a management cost funding category based on a single percentage of the federal share of projected eligible program costs. FEMA officials and grantees we interviewed are generally satisfied with the process for claiming and reimbursing management costs. Specifically, FEMA PA officials we interviewed in 6 of 10 regional offices reported that the review process for management costs is very efficient or somewhat efficient.
Six of 10 grantees we interviewed said it was easy or very easy for them to meet FEMA’s requirements for documenting and claiming management costs. According to FEMA officials involved in creating the 2007 rule, the management costs rate was intended to cover some of the expenses incurred by both grantees and subgrantees of the PA program, but the rule was designed to allow grantees flexibility to determine the appropriate amount or percentage of management costs to provide, or “pass through,” to subgrantees. According to FEMA headquarters officials and grantees we interviewed, grantees generally do not pass through any management costs to subgrantees. For example, none of the 10 grantees we interviewed said that they had passed through management costs to subgrantees for any disaster. Seven of 10 grantees we interviewed said that the management costs rate is not enough to cover their costs, and 5 of 10 grantees we interviewed cited the lack of funds provided by management costs as a reason they do not pass through funds to subgrantees. In addition, four grantees said that passing through management costs to subgrantees would create additional administrative burdens. As the primary recipient of FEMA PA grants, grantees are responsible for ensuring that subgrantees properly expend and account for the management cost funds. Although grantees generally do not pass through management costs, they can provide other forms of assistance to subgrantees, such as funding a portion of the subgrantee’s nonfederal cost share for project costs or assisting subgrantees with preparing damage assessments. FEMA PA officials stated that the agency is developing proposals for modifying the 2007 rule but could not provide information on these proposals because they are under internal review.
FEMA officials stated that the management costs rate for PA is based on an analysis of historical obligations of administrative cost reimbursements for both grantees and subgrantees. However, a 2011 report by the Homeland Security Institute on behalf of FEMA stated that the rate “does not adequately address the administrative cost burden incurred by the grantee.” FEMA officials reported that the agency’s review of the 2007 rule is ongoing and is being used to, among other things, inform potential changes to the management costs rate. According to FEMA officials, the 2007 rule led to an unexpectedly high rate of claims for direct administrative costs. In contrast to the management costs process, FEMA officials, grantees, and subgrantees we interviewed said that the use of direct administrative costs reimbursement has increased administrative complexity and workload. FEMA officials stated that, in developing the 2007 rule, they did not anticipate direct administrative costs claims beyond limited, unique circumstances, such as paying an environmental specialist to conduct an extensive review of a single project. As a result, the 2007 rule did not define or include rules on reimbursements for direct administrative costs. According to FEMA officials, after the new rule was issued, grantees and subgrantees began requesting reimbursements for direct administrative costs much more frequently than FEMA officials expected. Based on our analysis of FEMA data, for major disasters declared during fiscal years 2008 through 2013, FEMA processed approximately 170,000 transactions for direct administrative costs. One potential reason for the increase, according to FEMA officials, is that without a pass through of management costs from the grantee, the only reimbursement for administrative costs that subgrantees may receive is through direct administrative costs. 
In contrast, prior to the 2007 rule, FEMA provided both grantees and subgrantees an administrative allowance calculated as a sliding scale percentage of net eligible costs of assistance. See appendix VIII for details on the administrative allowance used prior to the 2007 rule. In March 2008, FEMA released Disaster Assistance Policy 9525.9 to provide grantees and subgrantees additional guidance on management costs and direct administrative costs. The policy defines direct administrative costs as costs incurred by a grantee or subgrantee that can be identified separately and assigned to a specific project and states that, among other things, direct administrative costs would be limited to actual reasonable costs incurred for a specific project. The policy references Office of Management and Budget (OMB) Circular A-87, which states that a cost is reasonable if, in its nature and amount, it does not exceed that which would be incurred by a prudent person under the circumstances prevailing at the time the decision was made to incur the cost. The policy further states that FEMA will reimburse direct administrative costs that are properly documented. FEMA was alerted to concerns with the policies and guidance for direct administrative costs within a year of the release of Disaster Assistance Policy 9525.9.
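The mechanical difference between the two regimes can be illustrated as follows. The brackets and rates shown are hypothetical placeholders, not FEMA's actual figures; the real pre-2007 sliding scale appears in appendix VIII, and the actual management costs rate is set by FEMA regulation:

```python
# Contrast of the two reimbursement mechanisms, using HYPOTHETICAL rates:
# a pre-2007-style sliding-scale allowance versus a post-2007-style flat
# management costs rate applied to the federal share of eligible costs.

# Hypothetical sliding-scale brackets: (bracket_ceiling, rate)
SLIDING_SCALE = [(100_000, 0.03), (1_000_000, 0.02), (float("inf"), 0.01)]
FLAT_RATE = 0.0334  # hypothetical single percentage

def sliding_allowance(eligible_costs):
    """Apply each bracket's rate to the portion of costs inside it."""
    allowance, floor = 0.0, 0.0
    for ceiling, rate in SLIDING_SCALE:
        portion = max(0.0, min(eligible_costs, ceiling) - floor)
        allowance += portion * rate
        floor = ceiling
    return allowance

def flat_management_cost(federal_share):
    """Post-2007-style management costs: one rate times the federal share."""
    return federal_share * FLAT_RATE

costs = 2_000_000
print(f"sliding-scale allowance:   ${sliding_allowance(costs):,.0f}")
print(f"flat-rate management cost: ${flat_management_cost(costs):,.0f}")
```

The flat-rate approach is simpler to administer because the reimbursement is a single multiplication known up front, whereas the sliding scale varies the marginal rate with project size, which is part of what the 2007 rule sought to simplify.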
In March 2009, during the response to record flooding in Cedar Rapids, Iowa, the JFO for the disaster and the FEMA regional office produced a report that noted potential problems with the implementation of direct administrative costs and attempted to develop a set of tools and standard operating procedures to assist FEMA PA staff with evaluating direct administrative costs submitted for projects. The report stated that the policy “has created an environment where ambiguous project expenses related to administrative activities are being submitted for reimbursement to FEMA by subgrantees as direct administrative costs.” The report described the two main contributing factors to this situation as “the ambiguity regarding what constitutes a direct administrative cost and the lack of concrete, quantifiable guidelines for a ‘reasonable’ direct administrative cost.” Figure 6 provides an example of a PA project following the Iowa flooding. In September 2009, FEMA’s Assistant Administrator issued a memorandum to the FEMA regional offices with additional guidance on implementing the management costs and direct administrative costs policies. The memorandum states that FEMA staff must consider several factors when evaluating the reasonableness of contract costs: the method of contracting for the services, the skill level of persons performing the activities, the amount of time required to perform an activity, and the amount of time required to perform a particular task. The memorandum also clarifies that grantees and subgrantees may use contractors to perform grant management functions. To further assist FEMA staff, the memorandum includes a table of PA administrative activities classified as either management costs or direct administrative costs. However, the memorandum notes that the table is not an exhaustive list and there may be exceptions to the categorizations. 
FEMA officials, grantees, and subgrantees we interviewed said that FEMA’s guidance for direct administrative costs lacks clarity and specificity. For example, grantees and subgrantees told us that FEMA’s policies and guidance for direct administrative costs lack the information needed to determine whether a claim is eligible, reasonable, and properly documented. Furthermore, FEMA PA officials we interviewed said that without clear and specific guidance, they do not have sufficient information to evaluate direct administrative costs claims, leading to inconsistency in the approval process as well as disputes between FEMA and the grantees and subgrantees. Seven of the 10 FEMA PA officials we interviewed at regional offices, and 6 of the 10 grantees we interviewed, said that FEMA policies and guidance do not provide sufficient instruction for their staff to determine whether a claim is eligible. In addition, 9 of 10 FEMA PA officials, and 8 of 10 grantees, said that FEMA policies and guidance do not provide sufficient instruction for their staff to determine whether a claim is reasonable. See figure 7 for FEMA PA officials’ (Branch Chief) responses regarding FEMA policies and guidance related to direct administrative costs. Figure 8 shows grantee responses regarding FEMA policies and guidance related to direct administrative costs. FEMA PA officials from 3 regional offices stated that FEMA currently does not have sufficient guidance on what constitutes proper supporting documentation for approving or denying a claim for direct administrative costs. In addition, two grantees and two subgrantees we interviewed reported that it is difficult to know whether they have collected sufficient documentation to support their claim. For example, one subgrantee we interviewed reported having a direct administrative cost claim or estimate approved by FEMA field staff, only to have it reduced during a higher-level review.
The subgrantee told us that this lack of consistency makes it difficult for them to budget for their administrative expenses. FEMA PA officials in headquarters and in regional offices said that disputes over direct administrative cost claims or estimates have led to a more contentious environment among FEMA, grantees, and subgrantees. According to FEMA PA officials, reviewing and processing direct administrative costs reimbursements is labor-intensive and has increased the agency’s workload. FEMA PA officials from 4 regional offices said they spend considerable resources reviewing and determining whether direct administrative cost claims meet the criteria in FEMA’s policies and guidance. For example, for each project wherein direct administrative costs are claimed, FEMA officials may have to review time keeping, payroll, and travel records as well as salary information for all grantee and subgrantee personnel assigned to work on a particular project. FEMA PA officials from all 10 regional offices reported that reviewing a direct administrative cost claim can take from 10 minutes to several weeks or months. According to FEMA headquarters officials, the increased workload associated with reviewing these claims reduces the amount of FEMA staff available for other essential tasks. Two-thirds, or 14 of 21, of the grantees and subgrantees we interviewed stated that the time and resources necessary to document and claim direct administrative costs increase their administrative burden. In addition, three grantees and two subgrantees we interviewed reported that they do not always have the personnel or resources needed to track, document, and submit all potential claims. To receive reimbursement, grantees and subgrantees must track their time by project. Although doing so is not required by FEMA, three grantees we interviewed said that they track their administrative expenses in 15-minute intervals.
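The record keeping described above, tracking staff time per project in 15-minute increments, amounts to something like the following when a claim is assembled. The staff roles, rates, and entries are hypothetical illustrations, not figures from the report:

```python
# Sketch of assembling a direct administrative cost claim from per-project
# time entries tracked in 15-minute increments. All entries are hypothetical.
from collections import defaultdict

# (project_id, staff_role, quarter_hours_worked, hourly_rate)
time_entries = [
    ("PW-101", "site inspector", 10, 40.0),  # 2.5 hours
    ("PW-101", "grants manager",  6, 52.0),  # 1.5 hours
    ("PW-102", "grants manager", 22, 52.0),  # 5.5 hours
]

def claim_by_project(entries):
    """Total direct administrative cost claimed, per PA project."""
    claims = defaultdict(float)
    for project, _, quarter_hours, rate in entries:
        claims[project] += (quarter_hours / 4) * rate
    return dict(claims)

print(claim_by_project(time_entries))
```

Even this minimal sketch hints at the burden: every claim requires per-project, per-person time records that a reviewer must then reconcile against payroll and salary documentation.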
Eight out of 21 grantees and subgrantees reported to us that they have employed private contractors to perform grant management functions because of the administrative burden and complexity of FEMA’s reimbursement process. According to FEMA officials, the greater use of contractors among grantees and subgrantees has raised the cost to FEMA because contractors tend to charge higher hourly rates than state, tribal, territorial, and local government officials. However, FEMA officials also said that, in some cases, contractors perform grants management functions that assist grantees and subgrantees. In addition, one grantee we interviewed said that it hired a contractor for grants management related to Hurricane Sandy because the relatively small number of state personnel available for disaster recovery was insufficient given the severity of the damage, which FEMA officials estimate may result in as many as 5,100 PA projects. The grantee said that the contractor was more effective at seeking reimbursement from FEMA than their state or local officials would have been, given the contractor’s expertise and experience working with FEMA. According to the grantee, the contractor has increased the amount of funds FEMA obligated to the state. For example, the grantee said that the contractor discovered $8 million in eligible funding that FEMA PA staff had not included on the approved project worksheet. The contractor stated that, in less than 1 year, it had increased the amount of FEMA PA funding to the grantee by more than $60 million while charging about $10 million for its services. In response to these claims, FEMA officials told us that these under-obligated projects could have been noticed and corrected during project closeout. FEMA headquarters officials we interviewed said that the number of appeals related to administrative costs has increased.
Under the PA program, grantees and subgrantees may appeal any FEMA decision regarding eligibility for, or the amount of, assistance. From November 2007 to May 2014, FEMA reported receiving 182 first appeals and closing 21 second appeals related to direct administrative costs. FEMA officials stated that prior to the 2007 rule, there were no appeals related to administrative costs and that these new claims have increased the agency’s administrative burden by pulling resources from more pressing areas. Furthermore, the number of both first and second appeals is likely understated because many large, complex projects have not been completed. One subgrantee told us that they chose not to appeal FEMA’s decision to reduce their direct administrative costs claim, as the appeal process may take months or years to resolve and the amount in question was small relative to the total project funding. As discussed above, FEMA officials and grantees reported that FEMA policies and guidance do not provide sufficient instruction to determine whether a claim is eligible or reasonable and that clarifying such guidance would help to address these issues. In addition, without clear and specific guidance on how to evaluate and approve or deny direct administrative costs claims, FEMA regional officials and field staff must make difficult, subjective determinations when evaluating claims. This leads to inconsistent application of the direct administrative costs policy and guidance, creating confusion and frustration among grantees and subgrantees, and leading to additional appeals. OMB’s Final Bulletin for Agency Good Guidance Practices states that “well designed guidance documents serve many critical functions in a regulatory program. Guidance documents, used properly, can channel the discretion of agency employees, increase efficiency,” among other things.
In addition, according to FEMA’s 2014-2018 Strategic Plan, the agency intends to “focus on improving and streamlining community recovery services, including grant processing and related interactions” with the goal of ensuring that “disaster services are transparent, efficient, and effective in meeting the needs of survivors.” According to FEMA officials, the agency recognizes the unintended complexity and additional workload that the 2007 rule created and is working to address this issue. For example, FEMA officials told us that the agency is considering a pilot program for direct administrative costs for the state of New York and select subgrantees in the state on certain PA projects. According to FEMA officials, this pilot program will be designed specifically for Hurricane Sandy recovery operations and may use a sliding-scale or fixed percentage administrative allowance, which could reduce the administrative burden and complexity associated with the current direct administrative costs process because administrative costs will be agreed upon before the project begins. In addition, as described earlier in the report, FEMA PA officials stated that the agency is developing proposals for modifying the November 2007 rule. According to the PA officials, these modifications could also change the rules for direct administrative costs; however, changes would only affect major disasters declared after the release of the new rule. As of April 30, 2014, 516, or 79 percent, of the 650 disasters declared during fiscal years 2004 through 2013 have not been completed. Therefore, despite FEMA’s potential new rule, recovery operations for hundreds of disasters could benefit from FEMA officials clarifying the agency’s guidance and minimum documentation requirements for direct administrative costs claims, which would help FEMA and its grantees better determine whether administrative costs are reasonable and potentially help reduce complexity in the process.
Major disaster declarations have increased significantly in recent decades, and FEMA has obligated $95.2 billion from the DRF for the 650 major disasters declared during fiscal years 2004 through 2013. FEMA’s administrative cost percentages have risen for major disasters of all sizes, and FEMA has not implemented our 2012 recommendation to implement goals for administrative cost percentages and monitor performance to achieve these goals. Although FEMA has taken steps to better control and reduce its administrative costs since November 2010, administrative costs have not decreased. Establishing an integrated plan that designates an office or senior officials responsible for controlling and monitoring administrative costs and that includes interim time frames and milestones would help FEMA to better track progress in addressing this longstanding issue and achieving its goals to better manage and reduce administrative costs. Without an integrated plan, FEMA officials’ actions may not be implemented or coordinated to ensure they most effectively achieve the agency’s goals. FEMA would also be better positioned to identify long-term trends in its administrative costs by assessing the costs and benefits of tracking and analyzing these costs by individual programs for major disasters. Doing so could provide FEMA with information to better manage these costs. FEMA’s 2007 rule was intended to simplify the reimbursement process for grantee and subgrantee administrative costs. However, the lack of clarity and specificity in FEMA’s policies and guidance for direct administrative costs has led to increased complexity and workload for FEMA, grantees, and subgrantees. Although FEMA was alerted to the increased complexity and workload in March 2009, the agency has not taken steps to resolve these problems.
FEMA officials stated that they are considering modifications to the 2007 rule that may include changes to direct administrative costs rules and guidance; however, these steps will only affect major disasters declared after the issuance of the new rule. Seventy-nine percent of the 650 disasters declared during fiscal years 2004 through 2013 have not been completed. As a result, despite FEMA’s potential new rule, recovery operations for hundreds of disasters could benefit from FEMA officials clarifying the agency’s guidance and minimum documentation requirements. These changes would help FEMA and its grantees better determine whether administrative costs are reasonable and potentially help reduce complexity in the process. To increase the efficiency and effectiveness of processes related to administrative costs for major disasters, we recommend that the FEMA Administrator take the following three actions: 1. Develop an integrated plan to better control and reduce FEMA’s administrative costs for major disasters. The plan should include steps the agency will take to reduce administrative costs, milestones for accomplishing the reduction, and clear roles and responsibilities, including the assignment of senior officials/offices responsible for monitoring and measuring performance. 2. Assess the costs versus the benefits of tracking FEMA’s administrative cost data for major disasters by Public Assistance, Individual Assistance, Hazard Mitigation, and Mission Assignment, and if feasible, track this information. 3. Clarify the agency’s guidance and minimum documentation requirements for direct administrative costs claims by grantees and subgrantees of the Public Assistance program. We provided a draft of this report to DHS for their review and comment. DHS provided written comments on November 25, 2014, which are summarized below and reproduced in full in appendix IX. DHS concurred with all three of our recommendations and described planned actions to address them.
In addition, DHS provided written technical comments, which we incorporated into the report as appropriate. DHS concurred with our first recommendation that FEMA develop an integrated plan to better control and reduce FEMA’s administrative costs for major disasters. DHS stated that FEMA has prioritized improving administrative cost management and made reducing disaster administrative costs a performance goal in its Strategic Plan. Additionally, FEMA has developed a standardized definition of administrative costs and established a Disaster Administrative Cost Integrated Project Team (IPT). The IPT is charged with taking actions to institutionalize the new administrative cost definition and scope, and to posture the agency for improved administrative cost management. FEMA plans to complete this effort by September 30, 2015. These actions, if implemented effectively, could address our recommendation and help control and reduce FEMA’s administrative costs. However, the extent to which the planned actions will fully address the intent of this recommendation will not be known until the agency completes its review and implements an integrated plan. DHS also concurred with our second recommendation that FEMA assess the costs versus the benefits of tracking FEMA’s administrative costs data for major disasters by Public Assistance, Individual Assistance, Hazard Mitigation, and Mission Assignment, and if feasible track this information. DHS stated that FEMA is assessing the cost versus the benefit of tracking the information by September 30, 2015, to determine if this information can be captured and used to inform future decision making. DHS also concurred with our third recommendation that FEMA clarify its guidance and minimum documentation requirements for direct administrative costs claims by grantees and subgrantees of the Public Assistance program. For future disasters, FEMA is assessing its direct administrative cost pilot program for Hurricane Sandy recovery operations.
If the pilot is successful, results from this pilot could inform the development of additional guidance. For current and other past disasters, FEMA will provide clarifying guidance on direct administrative claims and documentation requirements by October 31, 2015. We will continue to monitor DHS’s efforts. We will send copies of this report to the Secretary of Homeland Security, the FEMA Administrator, and appropriate congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix X. This report addresses the following questions: (1) To what extent were disaster relief funds (DRF) obligated to cover the Federal Emergency Management Agency’s (FEMA) administrative costs for major disasters during fiscal years 2004 through 2013, and what steps, if any, has FEMA taken to control its administrative costs, and (2) To what extent were DRF funds obligated to cover grantee and subgrantee administrative costs for Public Assistance (PA) grants, and what has been the impact of FEMA’s November 2007 regulatory changes on administrative costs reimbursed to grantees and subgrantees for the PA program. To address our first objective, we obtained and analyzed data from FEMA’s Integrated Financial Management Information System (IFMIS) on the amount of DRF obligations for administrative costs to FEMA for each major disaster declared by the President during fiscal years 1989 through 2013. We focused on this time frame because it contains current data for major disasters. It also comprises the time period after FEMA was merged into the newly created DHS, on March 1, 2003, and predates Hurricane Katrina in 2005. Fiscal year 1989 is the earliest year for which FEMA maintains obligations data. 
We focused primarily on fiscal years 2004 through 2013; however, to provide historical context and to compare results across similar periods, we also reviewed obligations data during fiscal years 1989 through 2013. To determine FEMA’s administrative cost percentages for disaster declarations, we obtained actual and projected DRF obligations for all 1,332 major disasters declared during fiscal years 1989 through 2013. To assess FEMA’s current practices, we compared FEMA’s administrative cost percentages for disasters declared during fiscal years 2004 through 2013 with FEMA’s target ranges for administrative cost percentages. Specifically, we calculated the percentage of total federal assistance that was obligated for administrative costs for each disaster in our scope. Next we determined whether these percentages were above or below FEMA’s administrative cost targets and whether FEMA’s administrative costs would have changed during the period had FEMA met the targets. To identify potential trends over time, we compared FEMA’s administrative cost percentages during fiscal years 1989 through 1998 with FEMA’s administrative cost percentages during fiscal years 2004 through 2013. According to FEMA officials, administrative costs are typically higher in the early months of a declaration, typically decreasing as the declaration matures (that is, as labor-intensive response activities decline). In order to ensure the results of our analyses were not skewed by major disasters that had not yet matured and whose administrative costs were high, we analyzed actual administrative costs for disaster declarations that were closed as of April 30, 2014. For declarations that were still open as of April 30, 2014, we analyzed actual obligations as of April 30, 2014, plus the amount that FEMA projected to obligate in the future until the declarations are eventually closed. 
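The comparison against FEMA's targets reduces to simple arithmetic. The sketch below, which uses invented figures and an assumed target percentage rather than FEMA's actual targets, illustrates the two calculations described above: the administrative cost percentage for a disaster, and the obligations that could have been avoided had a target been met.

```python
def admin_cost_percentage(admin_obligations, total_obligations):
    """Administrative obligations as a share of a disaster's total DRF obligations."""
    return 100.0 * admin_obligations / total_obligations

def excess_over_target(admin_obligations, total_obligations, target_pct):
    """Dollars that would not have been obligated had the target percentage been met."""
    actual_pct = admin_cost_percentage(admin_obligations, total_obligations)
    if actual_pct <= target_pct:
        return 0.0
    return (actual_pct - target_pct) / 100.0 * total_obligations

# Illustrative figures only: $100 million in total obligations, $18 million of
# them administrative, measured against an assumed 12 percent target.
pct = admin_cost_percentage(18e6, 100e6)        # 18.0
excess = excess_over_target(18e6, 100e6, 12.0)  # roughly $6 million
```

Summed over the 650 declarations in the review period, this per-disaster excess is how a gap of "hundreds of millions of dollars" between actual and target administrative costs would be computed.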
To determine whether the IFMIS data were reliable, we reviewed the data that FEMA officials provided, discussed data quality control procedures with relevant FEMA officials, and reviewed documentation such as DHS audits that included IFMIS to ensure the integrity of the data. We determined that the IFMIS DRF data were sufficiently reliable for the purposes of this report. In addition, we obtained and analyzed FEMA policies, procedures, and guidance specific to FEMA's administrative costs. We obtained FEMA's Financial Information Tool for a small and a large disaster to better understand FEMA's categories of administrative costs. To determine what actions, if any, FEMA is taking to reduce the costs of delivering disaster assistance, we interviewed officials from FEMA's Office of the Chief Financial Officer and obtained and analyzed documentation to determine what, if any, internal standards FEMA utilizes to determine the reasonableness of its administrative costs and whether FEMA implemented administrative cost goals and tracks performance against the goals. We also interviewed three Federal Coordinating Officers to determine their ability to control costs and the respective outcomes. We also obtained and analyzed FEMA policies, procedures, and guidance specific to administrative costs, such as its guidance on costs to federal coordinating officers, FEMA's Strategic Plans, and its Financial Management Code guide. We evaluated them using Standards for Internal Control in the Federal Government and project management guidance. We compared FEMA's policies and practices for controlling administrative costs against the intent of the criteria and determined the extent to which FEMA met that intent. To address our second objective, we obtained and analyzed data on DRF obligations to state and local governments for PA-related administrative costs for all major disasters declared by the President during fiscal years 2004 through 2013. 
Further, we calculated the dollar amount and percentage for each fiscal year in our scope to determine whether administrative costs increased or decreased during the period. We reviewed and analyzed FEMA policies, procedures, and guidance specific to state and local administrative costs, such as its Disaster Assistance Policy and associated memorandum, and evaluated them using practices for good guidance. We analyzed administrative cost data from three of FEMA's information technology systems that track financial data for disasters. We selected example transactions and obtained supporting documentation to better understand the types of administrative costs associated with major disaster declarations. To assess the impact of FEMA's 2007 Management Costs interim final rule, we interviewed the following FEMA regional officials and officials from selected states and localities, either in person or by teleconference, within each region: the Public Assistance Branch Chief or a designee in each of FEMA's 10 regional offices. We visited three FEMA regions: Region II, located in New York, New York, which represents a large recent catastrophic disaster; Region IV, located in Atlanta, Georgia, which represents all sizes of disasters (including catastrophic), has frequent disasters, and has a range of disaster types; and Region VII, located in Kansas City, Missouri, which represents many sizes of disasters, a frequent disaster area—a total of 84 of the 650 disasters in the past 10 fiscal years (2004 through 2013)—and a range of disaster types. State emergency management officials from 10 select states (grantees), located in the three regions selected above, who work in their states' recovery offices and with FEMA on public assistance projects. Local officials (subgrantees) from 11 select localities that received or will receive FEMA PA assistance from the 10 select states. 
Senior FEMA officials from the Louisiana Recovery Office and the Sandy New Jersey and New York Recovery Offices. These offices are responsible for a large number of public assistance projects. From these interviews, we obtained information on administrative costs reimbursed to grantees and subgrantees through the PA program, including section 324 management costs (management costs) and direct administrative costs. The information obtained from these states and localities cannot be generalized across all states and tribal nations. However, the information obtained from these states and localities provides a broad understanding of the issues grantees and subgrantees encounter during the disaster recovery process. In addition, to assess the effects of FEMA's interim rule: We interviewed officials from three private sector companies in three of the select states that provide services to the states and localities in documenting and claiming administrative costs. We also reviewed first and second appeals related to direct administrative costs filed by grantees and subgrantees since FEMA implemented its 2007 Management Costs interim final rule. The second appeals information was centrally maintained by FEMA headquarters and posted on its webpage. We also inquired about appeals related to management costs and direct administrative costs during our regional interviews. We conducted this performance audit from November 2013 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Fifty-seven jurisdictions received major disaster declarations during fiscal years 2004 through 2013. 
Oklahoma had the most, with 29 declarations, while Guam had 1. Table 7 identifies the number of disaster declarations for all jurisdictions during fiscal years 2004 through 2013. The Federal Emergency Management Agency (FEMA) obligates funds from the Disaster Relief Fund to help jurisdictions respond to and recover from declared disasters. FEMA classifies these funds into five categories: Public Assistance, Individual Assistance, Hazard Mitigation, Mission Assignments, and Administrative Costs. Table 8 shows the obligations for each category by jurisdiction. This appendix provides a definition of each of FEMA's cost categories for its administrative costs. The cost categories described below comprised FEMA's administrative costs during the period of our review—fiscal years 2004 through 2013. However, according to FEMA officials, in October 2014 the agency changed the cost categories that comprise administrative costs—for example, the urban search and rescue cost category is no longer included in administrative costs. Includes gross compensation (before tax and other deductions) directly related to duties performed for the government by federal civilian employees, military personnel, and nonfederal personnel. This covers: additional compensation such as hazardous duty, night differential, holiday, standby, and overtime pay, cost-of-living allowance (COLA), and post differential; salaries for casual workers; payments to other agencies on reimbursable details; re-employed annuitants and rewards for information. Includes travel and transportation costs of government employees and other persons while in an authorized travel status that are to be paid by the government either directly or by reimbursing the traveler. 
Costs of both travel away from official stations, subject to regulations governing civilian travel, and local travel and transportation of persons in and around the official station of an employee, rental or lease of vehicles for transportation of government employees or others necessary to carry out a disaster operation or related activities, rental or lease of vehicles from interagency motorpools (disaster-related or not), subsistence for travelers and reimbursement of actual expenses, and incidental expenses related to official travel, such as baggage transfer, telephone and telegraph expenses, fees for purchasing passports, travel checks, and use of Automatic Teller Machines (ATMs). Includes contractual obligations incurred for the transportation of things (animals included), for the care of such things while in process of being transported, and for other services incident to the transportation of things (e.g., lifts) by freight and express carriers. Includes rental of transportation equipment, such as U-Haul or Ryder trucks. Excludes transportation paid by a vendor for commodities purchased by government. Includes payments for the use of property owned by others and charges for communication and utility services. Excludes payments for rental of transportation equipment. Includes printing and duplicating, quick copy services, photostats, blueprints, photography, microfilming, and advertising performed by contractors, the Government Printing Office, other government agencies or units, or commercial printers and photographers. Includes all common processes of duplicating obtained on a contractual or reimbursable basis. Includes publication of notices, advertising, radio and television time, when done by contract. Also includes standard forms when specially printed or assembled to order and printed envelopes and letterheads. 
Includes personnel, equipment, and supplies for a mission assignment to another federal agency to provide administrative and logistical support to begin and maintain disaster operations. Excludes rental or lease of vehicles and activities performed by contractors. Includes advisory and assistance services contractors, purchases of goods and services from government accounts, operation and maintenance of facilities and equipment, medical care, research and development contracts, subsistence and support of persons, and services not otherwise classified. Includes commodities that are (1) ordinarily consumed or expended within 1 year after use, (2) converted in the process of construction or manufacture, or (3) used to form a minor part of equipment or fixed property, up to a cost of $25,000. Includes the purchase of personal property that may normally be expected to have a period of service for a year or more after being put in use without material impairment of its physical condition or functional capacity. Includes purchase and improvement of land, buildings and other structures, non-structural improvements, and fixed equipment acquired under contract. Includes deployment of Urban Search and Rescue teams to stage for or respond to disasters. Includes payments to creditors for the use of moneys loaned, deposited, overpaid, or otherwise made available; the distribution of earnings to owners of trust or other funds; and interest payments under lease-purchase contracts for construction of buildings. Excludes the interest portion of the payment of claims when a contract has been delayed by the government. Includes payments of amounts previously collected by the government. Includes payments (1) to correct errors in computations, erroneous billing, and other factors and (2) to former employees or their beneficiaries for employee contributions to retirement and disability funds. 
FEMA generally defines a project as a logical grouping of work that will be funded as a unit. Under this definition, a project may cover work for one damage site (e.g., all of the damage to a single school) or for similar types of damage at various locations (e.g., all sewer pump stations in a city). To facilitate project review, approval, and funding, FEMA classifies PA projects as either small or large based on annually adjusted cost thresholds. FEMA funds small projects through a process known as Simplified Procedures that is intended to expedite the processing of grant funding by obligating funds for small projects based on FEMA’s approval of the project’s cost estimate. At the beginning of fiscal year 2014, the small project minimum and maximum thresholds were $1,000 and $68,500, respectively. If the estimated total project amount is between the minimum and maximum thresholds, the project is processed as a small project using Simplified Procedures. The Sandy Recovery Improvement Act of 2013 (SRIA) required FEMA to complete an analysis to determine whether or not an increase in the small project thresholds was appropriate. Subsequently, FEMA raised the minimum threshold to $3,000 and the maximum threshold to $120,000. These new thresholds, which will both be adjusted annually to reflect changes in the Consumer Price Index for all Urban Consumers published by the Department of Labor, apply to all projects for disasters declared on or after February 26, 2014. Prior to SRIA, only the maximum threshold was adjusted annually. Under Simplified Procedures, FEMA does not perform a final inspection of completed small projects, but will review, or validate, a sample of an applicant’s small projects to ensure that the project scope of work and damage assessment are complete and that all special considerations have been identified, among other things. 
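The post-SRIA classification rule described above can be sketched as follows. The thresholds come from the report; the function name, the "below minimum" outcome label, and the CPI-adjustment parameter are illustrative assumptions, since FEMA publishes the adjusted thresholds annually rather than applying a single factor.

```python
# Thresholds for disasters declared on or after February 26, 2014, per the
# report; cpi_factor is an assumed stand-in for FEMA's annual Consumer
# Price Index adjustment of both thresholds.
SMALL_PROJECT_MIN = 3_000
SMALL_PROJECT_MAX = 120_000

def classify_project(estimated_cost, cpi_factor=1.0):
    """Classify a Public Assistance project for funding purposes.

    'small' projects are funded under Simplified Procedures on FEMA's
    approved cost estimate; 'large' projects are funded on actual
    documented costs.
    """
    if estimated_cost < SMALL_PROJECT_MIN * cpi_factor:
        return "below minimum"   # does not meet the minimum project threshold
    if estimated_cost <= SMALL_PROJECT_MAX * cpi_factor:
        return "small"
    return "large"

print(classify_project(68_500))   # small
print(classify_project(500_000))  # large
```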
After FEMA approves a project, funds are obligated—that is, they are made available—to the grantee, which, in turn, passes the funds along to subgrantees. Funding for small projects is generally fixed; however, FEMA may approve a revised cost estimate or scope of work for a project after initial approval when new information comes to light. For example, if an applicant discovers that the actual costs for a project are higher than FEMA's estimate, the applicant may apply to FEMA for additional funds. Before the disaster is closed, the grantee must certify that all such projects were completed in accordance with FEMA approvals and that the state contribution to the nonfederal share has been paid to each subgrantee. FEMA funds large projects, those with an estimated cost greater than the small project maximum threshold, based on actual documented costs. As with small projects, FEMA initially approves a cost estimate for a large project and obligates the federal share of the funds to the grantee. Funds are generally made available to the subgrantee on a progress payment basis as work is completed and actual costs are documented. When all work associated with the project is complete, a subgrantee must submit documentation to the grantee to account for all incurred costs. The grantee then determines the final cost of the eligible work and submits a report to FEMA certifying that the subgrantee's costs were incurred in the completion of the eligible work. After reviewing the grantee's report, FEMA may adjust, through obligation or deobligation, the final amount of the grant to reflect the actual cost of the eligible work. FEMA reimburses grantees and subgrantees for some expenses associated with administering PA grants. FEMA divides these reimbursements into two categories: section 324 management costs (management costs) and direct administrative costs. 
Management costs are any indirect costs, any administrative expense, and any other expense not directly chargeable to a specific project. Figure 10 describes the reimbursement process for management costs. Appendix VI: Small Disaster Example – Major Disaster Declaration (DR-1885) Kansas Severe Winter Storms and Snowstorm (Declared March 9, 2010) This appendix shows Federal Emergency Management Agency’s (FEMA) obligations for Public Assistance, Individual Assistance, Hazard Mitigation, Mission Assignment, and administrative costs for DR-1885. With total obligations of about $21.7 million, DR-1885 is classified as a small disaster. This includes approximately $4.1 million in FEMA’s administrative costs, $0.8 million in management costs reimbursed to the grantee, and $0.2 million in direct administrative costs reimbursed to both the grantee and subgrantees. Appendix VII: Large Disaster Example – Major Disaster Declaration (DR-4085): New York Hurricane Sandy (Declared October 30, 2012) This appendix shows Federal Emergency Management Agency’s (FEMA) obligations for Public Assistance, Individual Assistance, Hazard Mitigation, Mission Assignment, and administrative costs for DR-4085. With total obligations of $4.8 billion, DR-4085 is classified as a large disaster. This includes $482 million in FEMA administrative costs, $0 in management costs reimbursed to the grantee, and $10.9 million in direct administrative costs reimbursed to the grantee and subgrantees of Public Assistance grants. Prior to the 2007 rule, the Federal Emergency Management Agency (FEMA) used several mechanisms to reimburse grantees for costs associated with administering Public Assistance (PA) grants. 
Additionally, both grantees and subgrantees were eligible to receive a sliding-scale administrative allowance to cover costs incurred in preparing project worksheets, validating small projects, preparing final inspection reports, quarterly reports, and final audits, and making related field inspections by state employees, including overtime pay and per diem and travel expenses, but not including regular time for such employees. For grantees, the amount of reimbursement was based on a percentage of the total amount of assistance provided (federal share) for all eligible subgrantees in the state. In addition, a subgrantee could be reimbursed to cover necessary costs of requesting, obtaining, and administering federal disaster assistance subgrants. For subgrantees, the amount of reimbursement was based on a percentage of net eligible costs. In addition to the contact named above, Edward George, Assistant Director; David Alexander; Aditi Archer; Andrew Berglund; Jeffrey Fiore; Eric Hauswirth; Tracey King; Anne Kruse; Jessica Orr; Jim Ungvarsky; and Samuel Woo made key contributions to this report.
FEMA leads federal efforts to respond to and recover from disasters, and provides grants to states and localities through the DRF. For each major disaster, funds can be obligated from the DRF to cover administrative costs—the costs of providing and managing disaster assistance—for FEMA, states, tribes, localities, and certain nonprofits, among others. GAO was asked to review these administrative costs along with FEMA policy changes. This report addresses the extent to which DRF funds were obligated to cover (1) FEMA's administrative costs for major disasters during fiscal years 2004 through 2013, and the steps FEMA has taken to control these costs, and (2) Grantee and subgrantee administrative costs for PA grants, and the impact FEMA's 2007 policy changes had on PA program administrative costs reimbursements. GAO analyzed FEMA's administrative costs data and policies and PA guidance for administrative cost reimbursements; and interviewed FEMA, state, and local officials. The Federal Emergency Management Agency (FEMA) obligated $12.7 billion from the Disaster Relief Fund (DRF) for its administrative costs from fiscal years 2004 through 2013 and has taken some steps to reduce and better control these costs. This $12.7 billion represents 13 percent of the $95.2 billion obligated from the DRF for the 650 major disasters declared during this time frame. FEMA's average administrative cost percentages for major disasters during the 10 fiscal years 2004 to 2013 doubled the average during the 10 fiscal years 1989 to 1998. FEMA recognized that administrative costs have increased and has taken steps intended to better control and reduce these costs, such as setting a goal in its recent strategic plan to lower these costs, and creating administrative cost targets. However, FEMA does not require these targets be met, and GAO found that had FEMA met its targets, administrative costs could have been reduced by hundreds of millions of dollars. 
GAO also found that FEMA lacks an integrated plan with time frames and milestones to hold senior officials accountable for achieving its goals to reduce and more effectively control costs. Such a plan could help FEMA to better oversee and control these costs. In addition, GAO found that FEMA does not track administrative costs by major disaster program, such as Individual or Public Assistance, and has not assessed the costs versus the benefits of tracking such information. Doing so could provide FEMA with better information to manage these costs. From fiscal years 2004 through 2013, FEMA obligated $1.7 billion to reimburse grantees (states) and subgrantees (localities) for administrative costs related to Public Assistance (PA) grants, and its 2007 policy change has led to additional complexity and workload for FEMA and its grantees. FEMA's 2007 rule was intended to simplify and clarify the method FEMA uses to reimburse grantees and subgrantees for certain costs incurred for administering PA grants. However, according to FEMA, the 2007 rule led to an unexpectedly high rate of claims for direct administrative costs. Grantee, subgrantee, and FEMA officials told GAO that FEMA policies and guidance do not adequately specify the requirements for determining reasonableness, eligibility, and supporting documentation to support reimbursement of direct administrative costs. Clarifying the agency's guidance and minimum documentation requirements would help grantees and subgrantees submit, and FEMA review, requests for administrative cost reimbursement. GAO recommends that FEMA (1) develop an integrated plan to better control and reduce its administrative costs for major disasters, (2) assess the costs versus the benefits of tracking FEMA administrative costs by DRF program, and (3) clarify the agency's guidance and minimum documentation requirements for direct administrative costs. FEMA agreed with the report and its recommendations.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. Of particular importance is the security of information and systems supporting critical infrastructures—physical or virtual systems and assets so vital to the nation that their incapacitation or destruction would have a debilitating impact on national and economic security and on public health and safety. Although the majority of our nation’s critical infrastructures are owned by the private sector, the federal government owns and operates key facilities that use control systems, including oil, gas, water, electricity, and nuclear facilities. In the electric power industry, control systems can be used to manage and control the generation, transmission, and distribution of electric power. For example, control systems can open and close circuit breakers and set thresholds for preventive shutdowns. Critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the potential impact of attacks as demonstrated by reported incidents. Control systems are more vulnerable to cyber threats and unintended incidents now than in the past for several reasons, including their increasing standardization and connectivity to other systems and the Internet. For example, in August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant operated by TVA failed, forcing the unit to be shut down manually. The failure of the pumps was traced to an unintended incident involving excessive traffic on the control system’s network. To address this increasing threat to control systems governing critical infrastructures, both federal and private organizations have begun efforts to develop requirements, guidance, and best practices for securing those systems. 
For example, FISMA outlines a comprehensive risk-based approach to securing federal information systems, which include control systems. Federal organizations, including the National Institute of Standards and Technology (NIST), the Federal Energy Regulatory Commission (FERC), and the Nuclear Regulatory Commission (NRC), have used a risk-based approach to develop guidance and standards to secure control systems. NIST guidance has been developed that currently applies to federal agencies; however, much of the guidance and standards developed by FERC and NRC has not yet been finalized. Once implemented, FERC and NRC standards will apply to both public and private organizations that operate covered critical infrastructures. The TVA is a federal corporation and the nation’s largest public power company. TVA’s power service area includes almost all of Tennessee and parts of Mississippi, Kentucky, Alabama, Georgia, North Carolina, and Virginia. It operates 11 coal-fired fossil plants, 8 combustion turbine plants, 3 nuclear plants, and a hydroelectric system that includes 29 hydroelectric dams and one pumped storage facility. TVA also owns and operates one of the largest transmission systems in North America. Control systems are essential to TVA’s operation because it uses them to both generate and deliver power. To generate power, control systems are used within power plants to open and close valves, control equipment, monitor sensors, and ensure the safe and efficient operation of a generating unit. Many control systems networks connect with other agency networks to transmit system status information. To deliver power, TVA monitors the status of its own and surrounding transmission facilities from two operations centers. TVA had not fully implemented appropriate security practices to secure the networks on which its control systems rely. 
Specifically, the interconnected corporate and control systems networks at certain facilities that we reviewed did not have sufficient information security safeguards in place to adequately protect control systems. In addition, TVA did not always implement controls adequate to restrict physical access to control system areas and to protect these systems—and their operators—from fire damage or other hazards. As a result, TVA control systems were at increased risk of unauthorized modification or disruption by both internal and external threats. Multiple weaknesses within the TVA corporate network left it vulnerable to potential compromise of the confidentiality, integrity, and availability of network devices and the information transmitted by the network. For example: Almost all of the workstations and servers that we examined on the corporate network lacked key security patches or had inadequate security settings. TVA had not effectively configured host firewall controls on laptop computers we reviewed, and one remote access system that we reviewed had not been securely configured. Network services had been configured across lower- and higher-security network segments, which could allow a malicious user to gain access to sensitive systems or modify or disrupt network traffic. TVA's ability to use its intrusion detection system to effectively monitor its network was limited. The access controls implemented by TVA did not adequately secure its control systems networks and devices, leaving the control systems vulnerable to disruption by unauthorized individuals. For example: TVA had implemented firewalls to segment control systems networks from the corporate network. However, the configuration of certain firewalls limited their effectiveness. The agency did not have effective passwords or other equivalent documented controls to restrict access to the control systems we reviewed. 
According to agency officials, passwords were not always technologically possible to implement, but in the cases we reviewed there were no documented compensating controls. TVA had not installed current versions of patches for key applications on computers on control systems networks. In addition, the agencywide policy for patch management did not apply to individual plant-level control systems. Although TVA had implemented antivirus software on its transmission control systems network, it had not consistently implemented antivirus software on other control systems we reviewed. TVA had not consistently implemented physical security controls at several facilities that we reviewed. For example: Live network jacks connected to TVA’s internal network at certain facilities we reviewed had not been adequately secured from unauthorized access. At one facility, sufficient emergency lighting was not available, a server room had no smoke detectors, and a control room contained a kitchen (a potential fire and water hazard). The agency had not always ensured that access to sensitive computing and industrial control systems resources had been granted to only those who needed it to perform their jobs. At one facility, about 75 percent of facility badgeholders had access to a plant computer room, although the vast majority of these individuals did not need access. Officials stated that all of those with access had been through the required background investigation and training process. Nevertheless, an underlying principle for secure computer systems and data is that users should be granted only those access rights and permissions needed to perform their official duties. 
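The least-privilege principle cited above can be made concrete with a small audit sketch: compare who holds access to a resource against who actually needs it. The resource and user names below are invented, and the counts are chosen only to mirror the roughly 75 percent figure from the report.

```python
# Illustrative access records (hypothetical names, not TVA data).
badge_access = {"computer_room": {"alice", "bob", "carol", "dave"}}
job_requires = {"computer_room": {"alice"}}

def excess_access(resource):
    """Badgeholders with access to a resource that their duties do not require."""
    return badge_access[resource] - job_requires[resource]

extra = excess_access("computer_room")
share = len(extra) / len(badge_access["computer_room"])  # 0.75, i.e., 75 percent
```

Running such a comparison periodically, and revoking the excess, is the standard way to enforce that users hold only the access rights their official duties require.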
An underlying reason for TVA's information security control weaknesses was that it had not consistently implemented significant elements of its information security program, such as: documenting a complete inventory of systems; assessing risk of all systems identified; developing, documenting, and implementing information security policies and procedures; and documenting plans for security of control systems as well as for remedial actions to mitigate known vulnerabilities. As a result of not fully developing and implementing these elements of its information security program, TVA had limited assurance that its control systems were adequately protected from disruption or compromise from intentional attack or unintentional incident. TVA's inventory of systems did not include all of its control systems as required by agency policy. In its fiscal year 2007 FISMA submission, TVA included the transmission and the hydro automation control systems in its inventory. However, the plant control systems at its nuclear and fossil facilities had not been included in the inventory. At the conclusion of our review, agency officials stated they planned to develop a more complete and accurate system inventory by September 2008. TVA had not completed categorizing risk levels or assessing the risks to its control systems. FISMA mandates that agencies assess the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of their information and information systems. However, while the agency had categorized the transmission and hydro automation control systems as high-impact systems, its nuclear division and fossil business unit, which includes its coal and combustion turbine facilities, had not assigned risk levels to its control systems. TVA had also not completed risk assessments for the control systems at its hydroelectric, nuclear, coal, and combustion turbine facilities. 
According to TVA officials, the agency plans to complete the hydroelectric and nuclear control systems risk assessments by June 2008 and to complete the security categorization of remaining control systems throughout TVA by September 2008, except for fossil systems, for which no date has been set. Several shortfalls in the development, documentation, and implementation of TVA’s information security policies contributed to many of the inadequacies in TVA’s security practices. For example: TVA had not consistently applied agencywide information security policies to its control systems, and TVA business unit security policies were not always consistent with agencywide information security policies. Cyber security responsibilities for interfaces between TVA’s transmission control system and its hydroelectric and fossil generation units had not been documented. Physical security standards for control system sites had not been finalized or were in draft form. Weaknesses in TVA’s patch management process hampered the efforts of TVA personnel to identify, prioritize, and install critical software security patches to TVA systems in a timely manner. For a 15-month period, TVA documented its analysis of 351 reported vulnerabilities, while NIST’s National Vulnerability Database reported about 2,000 vulnerabilities rated as high or medium risk for the types of systems in operation at TVA for the same time period. In addition, upon release of a patch by the software vendor, the agency had difficulty in determining the patch’s applicability to the software applications in use at the agency because it did not have a mechanism in place to provide timely access to software version and configuration information for the applications. Furthermore, TVA’s written guidance on patch management provided only limited guidance on how to prioritize vulnerabilities. 
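Prioritization guidance of the kind TVA’s policy lacked typically weighs the vendor-assigned severity of a vulnerability against the criticality of the affected resource. A minimal sketch of such a scheme; the scoring scales, asset classes, and installation deadlines are illustrative assumptions, not TVA policy:

```python
# Illustrative vulnerability prioritization that factors in both vendor
# severity and the criticality of the affected IT resource.
# The scales and the 30/90/180-day deadlines are assumed for illustration.

SEVERITY = {"critical": 3, "important": 2, "moderate": 1}
CRITICALITY = {"control-system": 3, "corporate-server": 2, "workstation": 1}

def priority(vendor_severity, asset_class):
    """Combined score; a higher score means the patch should go in sooner."""
    return SEVERITY[vendor_severity] * CRITICALITY[asset_class]

def deadline_days(score):
    """Map a priority score to an installation deadline, in days."""
    if score >= 6:
        return 30
    if score >= 3:
        return 90
    return 180

# The same critical patch gets a tighter deadline on a control system
# than on an ordinary workstation.
print(deadline_days(priority("critical", "control-system")))  # 30
print(deadline_days(priority("critical", "workstation")))     # 90
```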
The guidance did not refer to the criticality of IT resources or specify situations in which it was acceptable to upgrade or downgrade a vulnerability’s priority from that given by its vendors or third-party patch tracking services. For example, agency staff had reduced the priority of three vulnerabilities identified as critical or important by the vendor or a patch tracking service and did not provide sufficient documentation of the basis for this decision. As a result, patches that were identified as critical were not applied in a timely manner; in some cases, a patch was applied more than 6 months past TVA deadlines for installation. TVA had not developed system security or remedial action plans for all control systems as required under federal law and guidance. Security plans document the system environment and the security controls selected by the agency to adequately protect the system. Remedial action plans document and track activities to implement missing controls such as missing system security plans and other corrective actions necessary to mitigate vulnerabilities in the system. Although TVA had developed system security and remedial action plans for its transmission control system, it had not done so for control systems at the hydroelectric, nuclear, or fossil facilities. According to agency officials, TVA plans to develop a system security plan for its hydroelectric automation and nuclear control systems by June 2008, but no time frame has been set to complete development of a security plan for control systems at fossil facilities. Until the agency documents security plans and implements a remediation process for all control systems, it will not have assurance that the proper controls will be applied to secure control systems or that known vulnerabilities will be properly mitigated. Numerous opportunities exist for TVA to improve the security of its control systems. 
Specifically, strengthening logical access controls over agency networks can better protect the confidentiality, integrity, and availability of control systems from compromise by unauthorized individuals. In addition, fortifying physical access controls at its facilities can limit entry to TVA restricted areas to only authorized personnel, and enhancing environmental safeguards can mitigate losses due to fire or other hazards. Further, establishing an effective information security program can provide TVA with a solid foundation for ensuring the adequate protection of its control systems. Because of the interconnectivity between TVA’s corporate network and certain control systems networks, we recommend that TVA implement effective patch management practices, securely configure its remote access system, and appropriately segregate specific network services. We also recommend that the agency take steps to improve the security of its control systems networks, such as implementing strong passwords or equivalent authentication mechanisms, implementing antivirus software, restricting firewall configuration settings, and implementing equivalent compensating controls when such steps cannot be taken. To prevent unauthorized physical access to restricted areas surrounding TVA’s control systems, we recommend that the agency take steps to toughen barriers at points of entry to these facilities. In addition, to protect TVA’s control systems operators and equipment from fire damage or other hazards, we also recommend that the agency improve environmental controls by enhancing fire suppression capabilities and physically separating cooking areas from system equipment areas. Finally, to improve the ability of TVA’s information security program to effectively secure its control systems, we are recommending that the agency improve its configuration management process and enhance its patch management policy. 
We also recommend that TVA complete a comprehensive system inventory that identifies all control systems, perform risk assessments and security risk categorization of these systems, and document system security and remedial action plans for these systems. Further, we recommend improvements to agency information security policies. In commenting on drafts of our reports, TVA concurred with all of our recommendations regarding its information security program and the majority of our recommendations regarding specific information security weaknesses. The agency agreed on the importance of protecting critical infrastructures and stated that it has taken several actions to strengthen information security for control systems, such as centralizing responsibility for cyber security within the agency. It also provided information on steps the agency was taking to implement certain GAO recommendations. In summary, TVA’s power generation and transmission critical infrastructures are important to the economy of the southeastern United States and the safety, security, and welfare of millions of people. Control systems are essential to the operation of these infrastructures; however, multiple information security weaknesses exist in both the agency’s corporate network and individual control systems networks and devices. An underlying cause for these weaknesses is that the agency had not consistently implemented its information security program throughout the agency. If TVA does not take sufficient steps to secure its control systems and implement an information security program, it risks not being able to respond properly to a major disruption that is the result of an intended or unintended cyber incident. Mr. Chairman, this concludes our statement. We would be happy to answer questions at this time. If you have any questions regarding this testimony, please contact Gregory C. 
Wilshusen, Director, Information Security Issues, at (202) 512-6244 or wilshuseng@gao.gov, or Nabajyoti Barkakati, Acting Chief Technologist, at (202) 512-4499 or barkakatin@gao.gov. Other key contributors to this testimony include Nancy DeFrancesco and Lon Chin (Assistant Directors); Angela Bell; Bruce Cain; Mark Canter; Heather Collins; West Coile; Kirk Daubenspeck; Neil Doherty; Vijay D’Souza; Nancy Glover; Sairah Ijaz; Myong Kim; Stephanie Lee; Lee McCracken; Duc Ngo; Sylvia Shanks; John Spence; and Chris Warweg. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The control systems that regulate the nation's critical infrastructures face risks of cyber threats, system vulnerabilities, and potential attacks. Securing these systems is therefore vital to ensuring national security, economic well-being, and public health and safety. While most critical infrastructures are privately owned, the Tennessee Valley Authority (TVA), a federal corporation and the nation's largest public power company, provides power and other services to a large swath of the American Southeast. GAO was asked to testify on its public report being released today on the security controls in place over TVA's critical infrastructure control systems. In doing this work, GAO examined the security practices in place at TVA facilities; analyzed the agency's information security policies, plans, and procedures in light of federal law and guidance; and interviewed agency officials responsible for overseeing TVA's control systems and their security. TVA had not fully implemented appropriate security practices to secure the control systems used to operate its critical infrastructures at facilities GAO reviewed. Multiple weaknesses within the TVA corporate network left it vulnerable to potential compromise of the confidentiality, integrity, and availability of network devices and the information transmitted by the network. For example, almost all of the workstations and servers that GAO examined on the corporate network lacked key security patches or had inadequate security settings. Furthermore, TVA did not adequately secure its control system networks and devices on these networks, leaving the control systems vulnerable to disruption by unauthorized individuals. Network interconnections provided opportunities for weaknesses on one network to potentially affect systems on other networks. 
For example, weaknesses in the separation of network segments could allow an individual who gained access to a computing device connected to a less secure portion of the network to compromise systems in a more secure portion of the network, such as the control systems. In addition, physical security at multiple locations that GAO reviewed did not sufficiently protect the control systems. For example, live network jacks connected to TVA's internal network at certain facilities GAO reviewed had not been adequately secured from unauthorized access. As a result, TVA's control systems were at increased risk of unauthorized modification or disruption by both internal and external threats. An underlying reason for these weaknesses was that TVA had not consistently implemented significant elements of its information security program. For example, the agency lacked a complete and accurate inventory of its control systems and had not categorized all of its control systems according to risk, limiting assurance that these systems are adequately protected. In addition, TVA's patch management process lacked a mechanism to effectively prioritize vulnerabilities. As a result, patches that were identified as critical, meaning they should be applied immediately to vulnerable systems, were not applied in a timely manner. Numerous opportunities exist for TVA to improve the security of its control systems. For example, TVA can strengthen logical access controls, improve physical security, and fully implement its information security program. If TVA does not take sufficient steps to secure its control systems and fully implement an information security program, it risks not being able to respond properly to a major disruption that is the result of an intended or unintended cyber incident.
Operation Desert Storm demonstrated that the U.S. military and other allied forces have limited capability against theater ballistic missiles. In fact, U.S. defensive capability is limited to weapons that defend against missiles nearing the end of their flight, such as the Patriot. No capability currently exists to destroy missiles in the boost phase. Consequently, DOD is expending considerable resources to develop the ABL’s capability to intercept missiles in their boost phase. In simple terms, the ABL program will involve placing various components, including a powerful multimegawatt laser, a beam control system, and related equipment, in a Boeing 747-400 aircraft and ensuring that all the components work together to detect and destroy enemy missiles in their boost phase. In November 1996, the Air Force awarded a 77-month program definition and risk reduction contract to the team of Boeing, TRW, and Lockheed Martin. Under the contract, Boeing is to produce and modify the 747-400 aircraft and integrate the laser and the beam control system with the aircraft, TRW will develop the multimegawatt Chemical Oxygen Iodine Laser (COIL) and ground support systems, and Lockheed Martin will develop the beam control system. The various program components are in the early phases of design and testing. One prototype ABL will be produced and used in 2002 to shoot down a missile in its boost phase. If this demonstration is successful, the program will move into the engineering and manufacturing development phase in 2003. Production is scheduled to begin about 2005. Initial operational capability of three ABLs is scheduled for 2006; full operational capability of seven ABLs is scheduled for 2008. 
The ABL is a complex laser weapon system that is expected to detect an enemy missile shortly after its launch, track the missile’s path, and destroy the missile by holding a concentrated laser beam on it until the beam’s heat causes the pressurized missile casing to crack, in turn causing the missile to explode and the warhead to fall to earth well short of its intended target. The ABL’s opportunity to shoot down a missile lasts only from the time the missile has cleared the cloud tops until its booster burns out. That interval can range from 30 to 140 seconds, depending on missile type. During that interval, the ABL is expected to detect, track, and destroy the missile, as shown in figure 1. The first step—detection—is to begin when the ABL’s infrared search sensor detects a burst of heat that could be fire from a missile’s booster. Because clouds block the view of the infrared search sensor, the sensor cannot detect this burst of heat until the missile has broken through the cloud tops—assumed to be at about 38,500 feet. The sensor detects the heat burst about 2 seconds after the missile has cleared the cloud tops. (In the absence of clouds, detection can occur earlier.) The ABL would then use information from the sensor to verify that the heat burst is the plume of a missile in its boost phase and would then move the telescope located in the nose of the aircraft toward the coordinates identified by the infrared sensor. The second step—tracking—is to be performed sequentially and with increasing precision by several ABL devices. The first of these tracking devices, the acquisition sensor, is to take control of the telescope, center the plume in the telescope’s field of view, and hand off that information to the next device, the plume tracker. 
The plume tracker, having taken control of the telescope, is to track and determine the shape of the missile plume and use this information to estimate the location of the missile’s body and project a beam from the track illuminator laser to light up the nose cone of the missile. The plume tracker is then to hand its information, and control of the telescope, to the final tracking device, the fine tracker. The fine tracker is to measure the effects of turbulence and determine the aimpoint for the beacon laser and, ultimately, for the COIL laser. The reflected light from the illuminator laser provides information that is to be used to operate a sophisticated mirror system (known as a fast-steering mirror) that helps to compensate for optical turbulence by stabilizing the COIL beam on the target. The reflected light from the beacon laser provides information that is to be used to operate deformable mirrors that will further compensate for turbulence by shaping the COIL beam. With the illuminator and beacon lasers still operating, the fine tracker is to determine the aimpoint for the COIL laser. The COIL laser is to be brought to full power and focused on the aimpoint. At this point, the final step in the sequence—missile destruction—is to begin. During this final step, a lethal laser beam is held on the missile. The length of time that the beam must dwell on the missile will depend on turbulence levels and the missile type, hardness, range, and altitude. Throughout the lethal dwell, the illuminator and beacon lasers are to continue to operate, providing the information to operate the fast-steering and deformable mirrors. Under the intense heat of the laser beam, which is focused on an area about the size of a basketball, the missile’s pressurized casing fractures, and then explodes, destroying the missile. The ABL is expected to operate from a central base in the United States and be available to be deployed worldwide. 
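The detect-track-destroy sequence described above must fit within the boost-phase window of roughly 30 to 140 seconds. A rough time-budget check; aside from the approximately 2-second detection delay after cloud break, which the report states, the stage durations below are hypothetical illustrations:

```python
# Rough, illustrative time-budget check for the detect-track-destroy sequence.
# Only the ~2 s detection delay and the 30-140 s window come from the report;
# the slew, track, and lethal-dwell durations are assumed for illustration.

def engagement_fits(window_s, detect_s=2.0, slew_s=3.0, track_s=10.0, dwell_s=5.0):
    """True if the full engagement sequence fits inside the boost-phase window."""
    return detect_s + slew_s + track_s + dwell_s <= window_s

# Under these assumed stage times the sequence totals 20 s, so even the
# shortest 30 s window suffices; a 140 s window leaves large margin.
print(engagement_fits(30.0))   # True
print(engagement_fits(140.0))  # True
```

Longer lethal dwell times, as would be required against harder or more distant missiles in stronger turbulence, shrink this margin quickly.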
The program calls for a seven-aircraft fleet, with five aircraft to be available for operational duty at any given time. The other two aircraft are to be undergoing modifications or down for maintenance or repair. When the ABLs are deployed, two aircraft are to fly, in figure-eight patterns, above the clouds at about 40,000 feet. Through in-flight refueling, which is to occur between 25,000 and 35,000 feet, and rotation of aircraft, two ABLs will always be on patrol, thus ensuring 24-hour coverage of potential missile launch sites within the theater of operations. The ABLs are intended to operate about 90 kilometers behind the front line of friendly troops but could move forward once air superiority has been established in the theater of operations. When on patrol, the ABLs are to be provided the same sort of fighter and/or surface-to-air missile protection provided to other high-value air assets, such as the Airborne Warning and Control System and the Joint Surveillance Target Attack Radar System. A key factor in determining whether the ABL will be able to successfully destroy a missile in its boost phase is the Air Force’s ability to predict the levels of turbulence that the ABL is expected to encounter. Those levels are needed to define the ABL’s technical requirements for turbulence. To date, the Air Force has not shown that it can accurately predict the levels of turbulence the ABL is expected to encounter or that its technical requirements regarding turbulence are appropriate. The type of turbulence that the ABL will encounter is referred to as optical turbulence. It is caused by temperature variations in the atmosphere. These variations distort and reduce the intensity of the laser beam. Optical turbulence can be measured either optically or non-optically. Optical measurements are taken by transmitting laser beams from one aircraft to instruments on board another aircraft at various altitudes and distances. 
Non-optical measurements of turbulence are taken by radar or by temperature probes mounted on balloons or on an aircraft’s exterior. The Air Force’s ABL program office has not determined whether non-optical measurements of turbulence can be mathematically correlated with optical measurements. Without demonstrating that such a correlation exists, the program office cannot ensure that the non-optical measurements of turbulence that it is collecting are useful in predicting the turbulence likely to be encountered by the ABL’s laser beam. Concern about turbulence measurements was expressed by a DOD oversight office nearly 1 year ago. In November 1996, during its milestone 1 review of the ABL program, the Defense Acquisition Board directed the program office to develop a plan for gathering additional data on optical turbulence and present that plan to a senior-level ABL oversight team for approval. The Board also asked the program office to “demonstrate a quantifiable understanding of the range and range variability due to optical turbulence and assess operational implications.” This requirement was one of several that the Air Force has been asked to meet before being granted the authority to proceed with development of the ABL. That authority-to-proceed decision is scheduled for June 1998. In February 1997, the program office presented to the oversight team a plan for gathering only non-optical data. The oversight team accepted the plan but noted concern that the plan was based on a “fundamental assumption” of a correlation between non-optical and optical measurements. If that assumption does not prove to be accurate, according to the oversight team, the program office will have to develop a new plan to gather more relevant (i.e., optical rather than non-optical) measurements. 
Accordingly, the oversight team required that the program office include in its data-gathering plan a statement agreeing to demonstrate the correlation between the non-optical and optical measurements. Program officials said they plan to demonstrate that correlation in the summer of 1997. To establish that a correlation exists, the program office plans to use optical and non-optical turbulence measurements taken during a 1995 Air Force project known as Airborne Laser Extended Atmospheric Characterization Experiment (ABLE ACE). Optical measurements were made by transmitting two laser beams from one aircraft to instruments aboard another aircraft at distances from 13 to 198 kilometers and at altitudes from 39,000 to 46,000 feet. These measurements provided the data used to calculate the average turbulence strengths encountered by the beams over these distances. The ABLE ACE project also took non-optical measurements of turbulence using temperature probes mounted on the exterior of one of the aircraft. Rather than taking measurements over the path of a laser beam between two aircraft, as with the optical measurements, the probes measured temperature variations of the air as the aircraft flew its route. Opinions vary within DOD about whether a correlation between optical and non-optical turbulence measurements can be established. Some atmospheric experts, who are members of the program office’s Working Group on Atmospheric Characterization, criticized the program office’s plan for collecting additional atmospheric data because it did not include additional optical measurements. Minutes from a Working Group meeting indicated that some of these experts believed that “current scientific understanding is far too immature” to predict optical effects from non-optical point measurements. 
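Whether non-optical point measurements can stand in for optical path measurements is, at bottom, an empirical question of statistical correlation between the two data sets. A minimal sketch of how paired measurements could be tested for a linear relationship; the values below are synthetic illustrations, not ABLE ACE data:

```python
# Illustrative test of whether non-optical turbulence measurements (e.g.,
# temperature-probe data) track optical path measurements, using the Pearson
# correlation coefficient. All paired values below are synthetic.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

optical     = [1.0, 1.4, 2.1, 2.9, 4.2]  # synthetic path-averaged turbulence
non_optical = [0.9, 1.5, 2.0, 3.1, 4.0]  # synthetic probe-derived values

r = pearson(optical, non_optical)
print(round(r, 3))  # close to 1.0 for these well-correlated synthetic samples
```

A coefficient near 1.0 would support the program office’s assumption; a weak coefficient on real paired data would indicate, as the skeptical experts warned, that non-optical measurements cannot substitute for optical ones.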
In contrast, the chief scientist for the ABL program said it would be surprising if the two measurements were not directly related; he added that evaluations at specific points in the ABLE ACE tests have already indicated a relationship. According to the chief scientist, it would be prudent for the program office to continue to collect non-optical data while it completes its in-depth analysis of the ABLE ACE data. According to a DOD headquarters official, because the ABL is an optical weapon, gathering non-optical data without first establishing their correlation to optical data is risky. The official concluded that, if the program office cannot establish this correlation, turbulence data will have to be gathered through optical means. The ABL program office also has not shown that the turbulence levels in which the ABL is being designed to operate are realistic. Available optical data on optical turbulence indicate that the turbulence the ABL may encounter could be four times greater than the design specifications. These higher levels of optical turbulence would decrease the effective range of the ABL system. The ABL program office set the ABL’s design specifications for optical turbulence at twice the level that, according to a model, the ABL would likely encounter at its operational altitude. This model was based on research carried out in 1984 for the ground-based laser/free electron laser program, in which non-optical measurements were taken by 12 balloon flights at the White Sands Missile Range in New Mexico. Each of the 12 flights took temperature measurements at various altitudes. These measurements were then used to develop a turbulence model that the program office refers to as “clear 1 night.” The clear 1 night model shows the average turbulence levels found at various altitudes. The ABL is being designed to operate at about 40,000 feet, so the turbulence expected at that level became the starting point for setting the design specifications. 
To ensure that the ABL would operate effectively at the intended ranges, for design purposes, the program office doubled the turbulence levels indicated by its clear 1 night model. The program office estimated that the ABL could be expected to encounter turbulence at or below that level 85 percent of the time. This estimate was based on the turbulence measured by 63 balloon flights made at various locations in the United States during the 1980s. When the ABL design specifications were established, the program office had very little data on turbulence. However, more recent data, accumulated during the ABLE ACE program, indicated that turbulence levels in many areas were much greater than those the ABL is being designed to handle. According to DOD officials, if such higher levels of turbulence are encountered, the effective range of the ABL system would decrease, and the risk that the ABL system would be underdesigned for its intended mission would increase. DOD officials also indicated that a more realistic design may not be achievable using current state-of-the-art technology. ABLE ACE took optical measurements in various parts of the world, including airspace over the United States, Japan, and Korea. According to the program office and Office of the Secretary of Defense (OSD) analyses of optical measurements taken during seven ABLE ACE missions, overall turbulence levels exceeded the design specifications 50 percent of the time. For the two ABLE ACE missions flown over Korea, the measurements indicated turbulence of up to four times the design specifications. Additionally, according to officials in OSD, ABLE ACE data were biased toward benign, low-turbulent, nighttime conditions. According to these officials, turbulence levels may be greater in the daytime. Developing and integrating a weapon-level laser, a beam control system, and the many associated components and software systems into an aircraft are unprecedented challenges for DOD. 
Although DOD has integrated a weapon-level laser and beam control system on the ground at White Sands Missile Range, it has not done so in an aircraft environment. Therefore, it has not had to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment. The COIL is in the early development stage. The Air Force must build the laser to be able to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment, yet be powerful enough to sustain a killing force over a range of at least 500 kilometers. It is to be constructed in a configuration that links modules together to produce a single high-energy beam. The laser being developed for the program definition and risk reduction phase will have six modules. The laser to be developed for the engineering and manufacturing development phase of the program will have 14 modules. To date, one developmental module has been constructed and tested. Although this developmental module exceeded its energy output requirements, it is too heavy and too large to meet integration requirements. The module currently weighs about 5,535 pounds and must be reduced to about 2,777 pounds. The module’s width must also be reduced by about one-third. To accomplish these reductions, many components of the module may have to be built of advanced materials, such as composites. The ABL aircraft, a Boeing 747-400 Freighter, will require many modifications to allow integration of the laser, beam control system, and other components. A significant modification is the installation of the beam control turret in the nose of the aircraft. The beam control turret is to be used for acquisition, tracking, and pointing actions used in destroying a missile. Consequently, the location of the turret is critical to the success of the ABL. 
Issues associated with the turret include the decreased aircraft performance resulting from the additional drag on the aircraft; the interaction of the laser beam with the atmosphere next to the turret, which can cause the laser beam to lose intensity; and vibrations from the operation of the aircraft that affect the accuracy of pointing the beam control turret. The contractor has conducted wind tunnel tests of these expected effects for three different turret locations and found that installing the turret in the nose of the aircraft would cause the fewest negative effects. However, the operational effectiveness of the beam control turret will not be known until it undergoes additional testing in 2002 in an operationally realistic environment. The laser exhaust system is another critical modification. The system must prevent the hot corrosive laser exhaust from damaging the bottom of the aircraft and other structural components made of conventional aluminum. The exhaust created by the laser will reach about 500 degrees Fahrenheit when it is ejected through the laser exhaust system on the bottom of the aircraft. This exhaust system must also undergo additional testing on the aircraft in 2002 to determine its operational effectiveness. Integrating the beam control system with the aircraft also poses a challenge for the Air Force. The Air Force must create a beam control system, consisting of complex software programs, moving telescopes, and sophisticated mirrors, that will compensate for the optical turbulence in which the system is operating and control the direction and size of the laser beam. In addition, the beam control system must be able to tolerate the various kinds of motions and vibrations that will be encountered in an aircraft environment. In deciding the on-board location of the beam control system’s components, the Air Force used data gathered by an extensive study of aircraft vibrations on the 747-400 Freighter. 
The beam control components are expected to be located in those areas of the aircraft that experience less intense vibrations and, to the extent possible, be shielded from vibrations and other aircraft motion. To date, the Air Force has not demonstrated how well a beam control system of such complexity can operate on an aircraft. The contractor has modeled the ABL’s beam control system on a brassboard but has not tested it on board an aircraft. The ABL program is a revolutionary weapon system concept. Although DOD has a long history with laser technologies, the ABL is its first attempt to design, develop, and install a multimegawatt laser on an aircraft. As such, the concept faces a number of technological challenges. A fundamental challenge is for the Air Force to accurately and reliably predict the level of optical turbulence that the ABL will encounter and then design the system to operate effectively in that turbulence. The Air Force will not have resolved that challenge until it has demonstrated whether there is a reliable correlation between its non-optical and optical turbulence measurements, or, should such a correlation not exist, gathered additional optical data, which may delay the ABL program. Whether relevant and reliable data are confirmed through correlation or by additional optical measurements, the data are critical in assessing the appropriateness of the design specifications for turbulence. If the specifications need to be set higher, that should be done as soon as possible. Therefore, we recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following actions: Demonstrate as quickly as possible, but no later than the time when DOD decides whether to grant the ABL program the authority to proceed (currently scheduled for June 1998), the existence of a correlation between the optical and non-optical turbulence data. 
If a correlation between optical and non-optical data cannot be established, the Air Force should be required to gather additional optical data to accurately predict the turbulence levels the ABL may encounter, before being given the authority to proceed with the program as planned. Validate the appropriateness of the design specification for turbulence based on reliable data that are either derived from a correlation between optical and non-optical data or obtained through the collection of additional optical data. DOD concurred with both of our recommendations. DOD’s comments are reprinted in appendix I. DOD also provided technical comments that we incorporated in this report where appropriate. We reviewed and analyzed DOD, Air Force, ABL program office, and contractor documents and studies regarding various aspects of the ABL program. We discussed the ABL program with officials of the Office of the Under Secretary of Defense (Comptroller); the Office of the Under Secretary of Defense (Acquisition and Technology); the Air Combat Command; the ABL program office; the Air Force’s Phillips Laboratory; and the ABL Contractor team of Boeing, TRW, and Lockheed Martin. We also discussed selected aspects of the ABL program with a consultant to the ABL program office. We conducted our review from September 1996 to August 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the congressional committees that have jurisdiction over the matters discussed and to the Secretary of Defense; the Secretary of the Air Force; and the Director, Office of Management and Budget. We will make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have questions concerning this report. Major contributors to this report were Steven Kuhta, Ted Baird, Suzanne MacFarlane, and Rich Horiuchi.
Pursuant to a congressional request, GAO reviewed the status of the Airborne Laser (ABL) program, focusing on: (1) the way in which the ABL is expected to change theater missile defense; (2) assurances that the ABL will be able to operate effectively in the levels of optical turbulence that may be encountered in the geographical areas in which the system might be used; and (3) the technical challenges in developing an ABL system that will be compatible with the unique environment of an aircraft. GAO noted that: (1) the ABL program is the Department of Defense's (DOD) first attempt to design, develop, and install a multimegawatt laser on an aircraft and is expected to be DOD's first system to intercept missiles during the boost phase; (2) a key factor in determining whether the ABL will be able to successfully destroy a missile in its boost phase is the Air Force's ability to predict the levels of turbulence that the ABL is expected to encounter; (3) the Air Force has not shown that it can accurately predict the levels of turbulence the ABL is expected to encounter or that its technical requirements regarding turbulence are appropriate; (4) because ABL is an optical weapons system, only optical measurements can measure the turbulence that will actually be encountered by the ABL laser beam; (5) the Air Force has no plans to take additional optical measurements and instead plans to take additional non-optical measurements to predict the severity of optical turbulence the ABL will encounter; (6) to ensure that the non-optical measurements can be validly applied to the ABL program, the Air Force must determine whether the non-optical measurements can be correlated to optical measurements; (7) until the Air Force can verify that its predicted levels of optical turbulence are valid, it will not be able to validate the ABL's design specifications for overcoming turbulence; (8) the Air Force has established a design specification for the ABL that is based on modelling 
techniques; (9) data collected by the program office indicate that the levels of turbulence that ABL may encounter could be four times greater than the levels in which the system is being designed to operate; (10) DOD officials indicated that a more realistic design may not be achievable using current state-of-the-art technology; (11) in addition to the challenges posed by turbulence, developing and integrating a laser weapon system into an aircraft pose many technical challenges for the Air Force; (12) the Air Force must build a new laser that is able to contend with size and weight restrictions, motion and vibrations, and other factors unique to an aircraft environment and yet be powerful enough to sustain a killing force over a range of at least 500 kilometers; (13) the Air Force must create a beam control system that compensates for the optical turbulence in which the system is operating and controls the direction and size of the laser beam; and (14) because these challenges will not be resolved for several years, it is too early to accurately predict whether the ABL program will evolve into a viable missile defense system.
DOD identifies 32 accounts for fiscal year 2017 under the appropriation category of O&M, including both base and OCO funding. Among the military services, each of their components receives its own O&M appropriations and has corresponding accounts—active (Army, Navy, Marine Corps, and Air Force), reserve (Army Reserve, Navy Reserve, Marine Corps Reserve, and Air Force Reserve), and National Guard (Army National Guard and Air National Guard). Additionally, there are O&M accounts for defense-wide and other DOD programs, such as the defense health program. In support of DOD’s budget request included in the annual President’s budget, DOD financial management officials prepare separate congressional budget justification materials by account for O&M base and O&M OCO. Each set of O&M budget justification materials is divided first into budget activities, such as operating forces and mobilization for the military services’ O&M accounts. For some O&M accounts, the budget activities are then divided into activity groups. For example, the defense-wide budget justification materials for O&M are divided by activity group, each of which represents a defense agency. For other O&M accounts, the budget activities are further divided into subactivity groups. For example, the military service justification materials for O&M are divided first into various activity groups, such as installation support and weapons support, and then into subactivity groups, such as depot maintenance and operating support for installations. DOD submits to Congress annual budget justification materials that provide details at the budget activity, activity group, or subactivity group level. Congress separately appropriates amounts for O&M base and O&M OCO activities into existing O&M base accounts. Congress directs how O&M funds are to be spent by designating specific amounts at the activity level in conference reports or explanatory statements accompanying annual appropriations acts.
DOD financial management officials execute both O&M base and O&M OCO funds from the base O&M account. For example, Army financial management officials execute both O&M base funding and O&M OCO funding from the Army base O&M account. DOD conducts an annual process for determining its budget request and allocating resources. This includes developing a 5-year funding plan by appropriation that identifies the immediate budget priorities and future projections for the next 4 fiscal years, and is called the Future Years Defense Program. The Future Years Defense Program reflects decisions made in DOD’s annual budget process and represents estimated funding that the President requests from Congress for the current budget year and at least the 4 fiscal years following it. In 1987, Congress directed the Secretary of Defense to submit the 5-year funding plan, in part, to establish a mechanism to help inform DOD and Congress on current and planned funding needs as decisions are made. The 5-year funding plans are specific to DOD’s total base funding and do not include OCO funding. Provisions of annual defense appropriations and authorization acts provide DOD with authority to transfer funds. DOD can realign funds (1) between appropriations accounts through transfers and (2) within an account’s budget activity from the same appropriations account through reprogrammings. While transfers require statutory authority, DOD officials may also realign, or reprogram, O&M base funds within an appropriations account’s budget activity as part of their duty to manage their funds and do not require statutory authority to do so. For both transfers and reprogrammings, Congress requires notification of DOD’s fiscal year baseline for application of reprogramming and transfer authorities prior to funds becoming available for reprogramming or transfer.
Further, if a transfer or reprogramming exceeds threshold amounts established by Congress, prior approval of a congressional committee is required. For example, effective for fiscal year 2015, the basic reprogramming threshold for O&M that requires DOD to notify Congress in writing was a cumulative increase or decrease of $15 million. However, the military services can transfer or reprogram funds that are below threshold amounts between budget activities within O&M base accounts without requiring written congressional approval. DOD’s enacted funding for O&M base has generally increased each year since fiscal year 2009, with the exception of fiscal year 2013. Enacted funding, set by Congress, establishes how much the department can obligate in a given fiscal year, unless the amounts are subsequently adjusted through additional congressional action or DOD’s use of its authorities to transfer funds between appropriations accounts. Based on DOD’s data, enacted funding for O&M base in nominal dollars increased by 7 percent from about $185.0 billion in fiscal year 2009 to about $198.5 billion in fiscal year 2016 (see figure 1). Our analysis of the budget year of DOD’s 5-year funding plans for O&M base from fiscal years 2009 through 2016 found that since fiscal year 2011, DOD consistently planned for more O&M base funding than Congress enacted. The 5-year funding plans—also known as the Future Years Defense Program—consist of a budget year (first fiscal year) and out-years (4 subsequent fiscal years beyond the budget year), and are intended to help inform Congress on current and future planned funding needs. Congress enacted more funding than DOD planned in the fiscal year 2009 budget year, and the amount enacted was the same as planned in the fiscal year 2010 plan. Since fiscal year 2011, the enacted amount has been less than the planned amount by between 1.8 and 7.2 percent. Figure 2 provides details on DOD’s funding plans and enacted amounts for O&M base.
Further, between the funding plans for fiscal year 2009 and fiscal year 2015, planned O&M base funding in the out-years was adjusted downward relative to the previous year until fiscal year 2016, when it slightly increased. For example, in the fiscal year 2011 plan, DOD estimated that its planned funding in the out-years would increase by 12.9 percent ($27.4 billion) between the first and the last out-year of the plan (fiscal years 2012-2015), but in the fiscal year 2012 plan, DOD decreased its estimate of planned funding in the out-years (fiscal years 2013-2016) as compared to the fiscal year 2011 plan by 8.4 percent ($18.1 billion). The decrease relative to the previous year continued until the fiscal year 2016 plan when DOD adjusted its plans upward for fiscal years 2018 and 2019 from the amounts in the fiscal year 2015 plan, as shown above in figure 2. We found that various factors influenced the changes in the out-year amounts since the fiscal year 2009 plan. For example, according to DOD documents, the 5-year plan for fiscal year 2011 reflected the defense objectives outlined in the 2010 Quadrennial Defense Review and the corresponding increase in requirements to carry out those objectives. The department then began to reduce its planned growth in fiscal year 2012, as compared to fiscal year 2011, according to DOD budget documents and DOD officials, based on a variety of initiatives intended to improve the efficiency of DOD’s business operations by reducing excess overhead costs. In fiscal years 2013 through 2015, DOD further reduced its base O&M funding plans as it realigned its entire discretionary budget closer to expected appropriations. According to DOD documents and officials, this was achieved through a combination of continued efficiency initiatives and economic adjustments, among other reductions. However, in fiscal year 2016, DOD’s funding plans did not include further reductions. 
According to DOD’s fiscal year 2016 budget request and DOD officials, the department concluded that it could not execute its updated defense strategy at the expected appropriation level. Congress made additional funding available to DOD’s O&M base programs and activities in fiscal years 2009 through 2016. Specifically, Congress made additional funding available to DOD’s O&M base in two areas: OCO funding for programs and activities requested in the base budget and OCO funding for readiness-related efforts. OCO Funding for Programs and Activities Requested in the Base Budget: In fiscal years 2009 through 2016, according to DOD’s data, Congress made additional funding available by designating O&M supplemental or OCO funding to be used for certain O&M base programs and activities for which DOD had requested O&M base funding. For example, Congress directed additional funding from fiscal years 2009 through 2016, ranging from $405 million in fiscal year 2013 to $9.2 billion in fiscal year 2014. According to DOD officials and budget documents, Congress gave the department the approval to transfer this OCO funding for base programs and activities. OCO Funding for Readiness-Related Efforts: In fiscal year 2015, Congress provided $1 billion in OCO funding to be used for supporting DOD’s readiness efforts. According to DOD officials, this OCO funding could be used to support O&M base programs and activities that relate to readiness-related efforts, such as increased training, depot maintenance, and operations support for installations. Conversely, the sequestration in fiscal year 2013 reduced DOD’s O&M base funding when across-the-board spending reductions were applied to all nonexempt appropriations accounts across the government. The reductions resulted in a decrease of $11.9 billion to DOD’s O&M base, to $182.8 billion, the lowest level since fiscal year 2009. 
Figure 3 shows DOD’s enacted funding for O&M base with changes directed by Congress and as a result of sequestration. Our analysis of DOD data for the military services’ and defense-wide agencies’ O&M accounts—from fiscal years 2009 through 2015—found that DOD realigned $146.9 billion by transfers between O&M base, O&M OCO, and other appropriations accounts and reprogrammings within O&M accounts. These realigned funds represented 11 percent of the $1,336.5 billion enacted for these accounts. According to DOD and military service officials, they used existing statutory authorities to transfer (realign funds between appropriations accounts) or reprogram (realign funds from the same appropriation within an account’s budget activity) O&M funding to adjust to differences in their budget year funding plans and respond to emerging requirements, such as disaster response and new contingency operations. The officials stated that this flexibility helps the department to manage risk associated with priority missions by ensuring that resources are aligned appropriately. DOD relied on legal authority with congressional approval where necessary to realign about $71.3 billion (48.5 percent) of these funds from transfers between or reprogrammings within appropriations. DOD also reprogrammed about $75.6 billion (51.5 percent) of these funds between budget activities within accounts in amounts that did not require prior congressional approval (see figure 4). We estimated that after the department used its authorities to transfer funds, DOD’s base obligations subsequent to fiscal year 2009 were greater than amounts enacted by Congress for O&M base funding by an annual average of 5.6 percent. During the period of our review, DOD did not report O&M base obligation amounts separately from O&M OCO amounts in its budget justification materials or execution reports; therefore, we estimated base obligations for O&M. 
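As a consistency check, the realignment totals and shares cited above follow from simple arithmetic; a minimal sketch in Python (amounts in billions of dollars, taken from this section, illustrating the relationship rather than DOD's methodology):

```python
# Realignment of O&M funds, fiscal years 2009-2015, in billions of dollars
# (amounts as cited above; a checking sketch, not DOD's actual data process).
enacted_total = 1336.5      # total enacted for these O&M accounts
approval_required = 71.3    # transfers/reprogrammings needing statutory authority or approval
below_threshold = 75.6      # reprogrammings not requiring prior congressional approval

realigned = approval_required + below_threshold
print(f"Total realigned: ${realigned:.1f} billion")           # $146.9 billion
print(f"Share of enacted: {realigned / enacted_total:.1%}")   # 11.0%
print(f"Approval-required share: {approval_required / realigned:.1%}")  # 48.5%
```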
We found that in 5 of the 7 fiscal years we estimated, O&M base obligations—consisting of enacted and realigned amounts—were between 5.6 percent and 8.7 percent greater than congressionally enacted amounts ($10.9 billion and $16.7 billion, respectively). The exceptions were fiscal years 2009 and 2013, when DOD obligated 1 percent and 2.6 percent more, respectively, than the enacted amount, even with reductions in fiscal year 2013 resulting from sequestration (see figure 5). While overall obligations exceeded enacted amounts in each year due to transfers and reprogrammings, we also found that consistent patterns of difference existed within certain categories of spending. Among the military services’ accounts (including the active, reserves, and National Guard), we found that after the military services had reprogrammed funds from the amounts designated by Congress, their obligations for O&M base subsequent to 2009 were consistently different from designated amounts for at least 3 consecutive years in 3 out of 11 specific categories of similar O&M subactivity groups. Specifically, we found, as shown in figure 6: Base Operating Support: In each fiscal year since 2009, the military services obligated more than Congress designated for the 7-year period collectively by a total of $17.9 billion. Administrative and Management Functions: In each fiscal year since 2009, the military services obligated more than Congress designated for the 7-year period collectively by a total of about $5.4 billion. Mobilization: Since fiscal year 2012, the military services obligated more than Congress designated for the 4-year period collectively by a total of about $1.2 billion. In interviews with OUSD Comptroller officials, we discussed these consistent patterns of differences in obligations as compared with what was designated by Congress. Officials stated that it is often difficult to predict some requirements 2 years before they occur.
However, in the area of base operating support, where costs are often more fixed and predictable, officials told us that they reviewed the military services’ obligations and became aware in 2015 of the Army’s consistent pattern of obligating amounts greater than Congress designated. Officials told us that they have since taken steps to better align the request with the requirement by issuing guidance to the Army to incorporate information on prior spending levels in this area within the budget request for fiscal year 2017. They also noted that in fiscal years 2011 and 2012 the difference from the designated amount for the Air Force resulted from the use of O&M base funding to support OCO requirements. In addition, our analysis found that there was no consistent pattern of differences between spending and what was designated by Congress in the two categories of subactivity groups that are most directly related to readiness—maintenance and weapon systems support and operational tempo and training. For these two categories, spending varied most years between under- and over-obligations. Moreover, since fiscal year 2014, the largest magnitude of over-obligation has not been in these categories, but in base operating support, as previously discussed (see figure 7). DOD has reported its O&M OCO obligations to Congress, but it has not reported its O&M base obligations. Instead, DOD has reported a combination of O&M base and OCO obligations in its O&M base budget justification materials and execution reports. Congressional budget justification materials and O&M execution reports are key documents that help Congress make appropriations decisions, conduct oversight, and provide control over funds. DOD information on O&M base obligations is important in enabling Congress to have a more complete understanding of what costs paid for by DOD’s OCO appropriations are intended for base activities.
The FASAB Handbook of Federal Accounting Standards and Other Pronouncements, as Amended suggests that agencies should provide reliable and timely information on the full costs of their federal programs, with the aim of assisting congressional and executive decision makers in allocating federal resources and making decisions to improve operating economy and efficiency. In addition, Standards for Internal Control in the Federal Government emphasizes using quality and complete information to make decisions and communicate such information externally. The Senate Appropriations Committee’s report accompanying a bill for DOD’s fiscal year 2015 appropriations stated that the committee does not have a clear understanding of enduring activities funded by the OCO budget. The committee noted the potential for risk in continuing to fund non-contingency-related activities through the OCO budget. The committee directed the Secretary of Defense to submit a report showing the transfers of OCO funding to the base budget for fiscal year 2016 at the time of the President’s budget submission for fiscal year 2017. This request to show transfers of OCO funding to the base demonstrates that having information on base obligations at a detailed level is useful to Congress as it aims to better understand the magnitude of spending for base activities to date and in the future. OUSD Comptroller officials told us that the department has not provided the report because the evolution of threats in U.S. Central Command’s area of responsibility creates uncertainty over its enduring missions. We made a similar recommendation in 2014 that DOD develop guidance for transitioning enduring programs and activities funded through OCO appropriations to the base budget request. DOD partially concurred with that recommendation, and in the fiscal year 2016 base budget request the department proposed to outline a plan to complete this transition, beginning in 2017, by 2020.
However, according to DOD officials, the department has suspended the timeline to complete this transition due to the mission uncertainty discussed above. DOD has reported O&M OCO obligations to Congress at the levels of information—that is, budget activity, activity group, or subactivity group level—presented in its O&M OCO budget justification materials by each O&M OCO account. It reported obligations for each of the four military services’ active, reserve, and National Guard accounts; the defense-wide accounts; and the defense health account. However, in its O&M base budget justification materials and O&M execution reports, DOD reported combined obligations for the base and OCO appropriations in each appropriations account that receives both types of O&M appropriations. For example, in fiscal year 2009, we calculated from each account’s OCO budget justification materials that DOD’s O&M OCO obligations were $83.9 billion, and DOD reported that in the same period its total O&M obligations were $270.6 billion. Subtracting O&M OCO obligations from O&M total obligations reveals that O&M base obligations would be approximately $186.7 billion for fiscal year 2009, as we showed previously in figure 5. While the total level of O&M base obligations can be readily estimated, the effects of the realignment of funds on categories of similar subactivity groups are not so readily apparent. As discussed in the previous section, in some cases there are consistent patterns of over-obligation across certain categories of subactivity groups. According to OUSD Comptroller officials, DOD components track obligations by base and OCO appropriation, but DOD’s Financial Management Regulation—issued by the OUSD Comptroller—does not require the department to report to Congress on O&M base obligations at the levels of information presented by account in its base budget justification materials and execution reports.
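The subtraction described above can be sketched in a few lines (fiscal year 2009 amounts from this section; a simplified illustration of the estimation approach, not DOD's accounting records):

```python
# Estimating O&M base obligations as total O&M minus O&M OCO obligations
# (fiscal year 2009 amounts from this report, in billions of dollars).
total_om = 270.6  # total O&M obligations reported by DOD
oco_om = 83.9     # O&M OCO obligations calculated from OCO justification materials

base_om = total_om - oco_om
print(f"Estimated O&M base obligations: ${base_om:.1f} billion")  # $186.7 billion
```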
Additionally, the officials stated that Congress has not asked the department to report O&M base obligations separately from OCO obligations. Military service officials confirmed that they track both base and OCO O&M obligations in their financial accounting systems and provide the OCO obligation information to OUSD Comptroller for reporting purposes. However, when we asked the OUSD Comptroller officials if they could include O&M base obligations in their budget justification materials and execution reports, the officials stated that it would be resource-intensive to report O&M base obligations separately from O&M OCO at the level of information presented for each account because the information is not integrated into one common financial accounting system. Instead, the military services currently use different accounting codes in their individual financial accounting systems to track base and OCO obligations. When we discussed with OUSD Comptroller officials the manual approach we had used to estimate O&M base obligations, they acknowledged that they have used a similar approach for internal estimates of O&M base obligations for total O&M, total account, and lower-level account information. Since 1995, our work on risk to federal government operations has included DOD’s financial management as an area of high risk because, among other things, it lacks the accurate, timely, and useful information needed to ensure basic financial accountability, to prepare auditable financial statements, and to make sound decisions affecting the department’s operations. Evaluating how DOD currently collects cost information on base activities in connection with ongoing efforts to improve financial systems to address these limitations could help the department identify ways to more consistently and efficiently capture and report this information in the future.
Until DOD revises its guidance to require reporting of O&M base obligations at the level of information presented for each account in its budget justification materials and execution reports, Congress will not have complete information to better understand DOD’s full funding needs for its O&M base programs and activities and to oversee the O&M budget. Although operation and maintenance accounts are the largest category of DOD’s appropriations, DOD does not report O&M base obligations to Congress separately from O&M OCO obligations in its budget justification materials and O&M execution reports. It currently has the means to collect this information for internal purposes. Further, as the department works to improve its financial systems to achieve financial auditability, DOD has the opportunity to begin collecting consistent information more efficiently in this area of its budget across the department’s various organizations. In light of federal accounting and internal control standards, agencies should inform Congress on the full costs of their programs to assist with allocating federal resources and conducting oversight. DOD could do this by requiring the inclusion of O&M base obligations at the level of information presented in each account’s reports to Congress. To ensure that Congress will have more complete information on DOD’s full funding needs for its O&M base budget and to conduct oversight of DOD’s use of OCO funds to support base programs and activities, we recommend that the Secretary of Defense direct the OUSD Comptroller to revise its guidance on preparing budget justification materials and execution reports for Congress to require the addition of O&M obligations used for base programs and activities at the level of information presented for each account. We provided a draft of this report to DOD for review and comment. In its written comments, which are summarized below and reprinted in appendix II, DOD did not concur with the recommendation. 
In its written comments, DOD noted that many of its financial accounting systems currently in use cannot distinguish between O&M base and OCO obligations easily, and that due to limited resources as a result of headquarters reductions, the requirement to manually identify these obligations in O&M budget justification materials and quarterly O&M execution reports will be extremely labor intensive. DOD further noted that once all DOD components convert from these financial accounting systems, the department should be able to report O&M base and OCO obligations consistently and effectively. For over two decades, we have recognized and brought attention to DOD’s reliance on financial accounting systems with significant weaknesses. In addition, we have consistently acknowledged that the reliability of DOD’s financial information will be increasingly important to the federal government’s ability to make sound resource allocation decisions. While DOD is in the process of implementing various enterprise resource planning systems to improve its financial accounting departmentwide and implement an audit ready systems environment, as required by Congress, we also recognize that DOD’s continued efforts will take time. However, in the interim, the revision we recommended to DOD’s guidance would help to establish the consistent reporting of O&M base and OCO obligations once those enterprise resource planning systems are in place. Specifically, revised guidance for preparing congressional budget justification materials and execution reports to require the addition of O&M base obligations for each O&M account would position DOD components to report O&M base obligations uniformly using their new systems. 
Given this, we continue to believe that implementing the recommendation would make more complete information available to Congress on the amount of funds DOD is obligating for its day-to-day programs and activities, and reflect the department’s full funding needs for its O&M base budget. DOD provided additional information in its comments as to the specific reasons its current financial accounting systems cannot easily distinguish between O&M base and OCO obligations, and how it reports O&M total and OCO obligations to Congress rather than O&M base obligations at the level of information presented for each account. As discussed in this report, implementing this recommendation would provide Congress with more detailed information about DOD’s O&M budget. DOD also provided technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, and to the Secretary of Defense, Secretary of the Army, Secretary of the Air Force, and Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (213) 830-1011 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Senate Report 114-49 accompanying a proposed version of the National Defense Authorization Act for Fiscal Year 2016 includes a provision for GAO to evaluate the effects of budgetary constraints on DOD’s available base funding within the O&M appropriations accounts.
This report: (1) identifies the trends in enacted and planned funding for DOD’s O&M base appropriations since fiscal year 2009; (2) describes the amount of O&M funding DOD has transferred or reprogrammed, and the effect of this realignment on O&M base obligations; and (3) evaluates the extent to which DOD reported to Congress its O&M obligations for its base and OCO budgets. To identify the trends in enacted base funding for DOD’s O&M accounts since fiscal year 2009, we analyzed data on enacted funding from DOD’s budget justification materials from fiscal years 2009 through 2016. We began with fiscal year 2009 because it was the first year after the surge in both Iraq and Afghanistan and would provide us with a 7-year period of data. All enacted funding amounts are in nominal dollars as presented in the budget materials. In addition, we interviewed officials from the Office of the Under Secretary of Defense (OUSD) Comptroller and the military services’ financial management offices about O&M base funding. In some years the budget justification materials were produced while DOD was operating under a continuing resolution and contained an estimate of funding under the continuing resolution. We verified the enacted data with OUSD Comptroller officials to ensure that the data reflected enacted amounts that were also reported to Congress after a defense appropriation act in the annual Base for Reprogramming Actions report. OUSD Comptroller provided revised amounts on enacted funding as appropriate. We determined that the enacted funding amounts were sufficiently reliable for the purposes of this audit with attribution to DOD. To identify how DOD’s base funding plans for O&M compared with its enacted funding levels since fiscal year 2009, we analyzed data specific to O&M from DOD’s Future Years Defense Program summary tables. We compared information from each 5-year plan—the budget year and total for the 5 years—with the enacted funding information previously discussed. 
We also compared each 5-year plan with the previous plan to describe the changes between plans since fiscal year 2009. We reviewed documentary information and interviewed officials from Cost Assessment and Program Evaluation and OUSD Comptroller on the Future Years Defense Program about the changes in plans since fiscal year 2009. We verified the data on planned funding amounts with OUSD Comptroller officials. All enacted funding amounts are in nominal dollars. We determined that the data were sufficiently reliable for the purposes of this audit with attribution to DOD and the source document. To identify trends in other base funding available, we corroborated data provided by the OUSD Comptroller officials with information in the defense appropriations acts and defense appropriations joint explanatory statements. All funding amounts are in nominal dollars. We determined that the data on other funding available were sufficiently reliable for the purposes of this audit with attribution to DOD. To describe how DOD has realigned O&M funds between O&M and other appropriations through transfers, and within accounts through reprogrammings, since fiscal year 2009, we analyzed data from DOD’s 4th Quarter O&M execution reports from fiscal years 2009 through 2015 to determine the extent to which DOD transferred and reprogrammed O&M base and OCO funding. Our analysis goes through fiscal year 2015, as this was the last full year of data during our review. We calculated the value of funds realigned between and within the military services’ and defense-wide O&M accounts by two categories reported in the execution reports—prior approval transfers and reprogrammings, and below threshold reprogrammings. Funds realigned out of accounts are presented as negative numbers in DOD’s execution reports. We used the absolute value of the negative numbers to account for the total amount of the realignment. 
We shared the amounts obtained from the execution reports with OUSD Comptroller officials to verify the accuracy of the information. Next, we calculated the total enacted O&M funding reported in the execution reports for the military services’ and defense-wide accounts and determined the percentage of the enacted O&M funding that DOD reported moving. We also interviewed OUSD Comptroller and the military services’ financial management officials to understand DOD’s process for transferring and reprogramming funds, including any notifications to Congress about them. All fund realignment amounts are in nominal dollars, and we determined that the data were sufficiently reliable for the purposes of this audit with attribution to DOD and the source document. To understand the differences between DOD’s base obligations and the amounts that Congress designated for base O&M programs and activities, we estimated O&M base obligations because DOD is not required to report this information in its budget justification materials or execution reports separately from O&M OCO obligations, and had not reported the amounts. To estimate O&M base obligations for the O&M title, we compiled and summed O&M OCO obligations reported in the budget justification materials for each O&M base account. We then subtracted the O&M OCO obligations from the total O&M obligations reported in the O&M O-1 budget exhibit, which provides aggregate details on O&M obligations. We verified the OCO and total O&M obligation data with OUSD Comptroller officials. We compared estimated O&M base obligations with the enacted funding information previously discussed. 
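The estimation arithmetic described above, subtracting reported OCO obligations from total O&M obligations for each account, can be sketched in a few lines of Python. The function name, account names, and dollar figures below are illustrative inventions, not data from the report:

```python
# Sketch of the base-obligation estimate: O&M base obligations are not
# reported directly, so they are derived per account as
#   base = total O&M obligations - reported OCO obligations.
# All account names and dollar figures here are hypothetical.

def estimate_base_obligations(total_by_account, oco_by_account):
    """Return estimated base obligations per account (total minus OCO)."""
    return {
        account: total - oco_by_account.get(account, 0.0)
        for account, total in total_by_account.items()
    }

# Hypothetical obligations, in billions of dollars
total_om = {"Army": 50.0, "Navy": 45.0}
oco_om = {"Army": 12.5, "Navy": 8.0}

base_om = estimate_base_obligations(total_om, oco_om)
print(base_om)  # {'Army': 37.5, 'Navy': 37.0}
```

The estimated base amounts can then be compared with enacted funding account by account, as the methodology describes.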
Further, to understand the differences between O&M base obligations and congressional designations for aggregate categories of similar subactivity groups within the military services’ O&M base accounts, we compiled congressional designations, O&M total, and O&M OCO obligations by each of the subactivity groups from the military services’ O&M budget justification materials. To normalize obligations for the items that were not enacted in the military services’ O&M base accounts but were appropriated to other accounts and authorized to be transferred to the military services’ O&M base accounts for execution, we obtained data from the military services by subactivity group. Specifically, we obtained data on transfers from the Environmental Restoration and Drug Interdiction and Counterdrug Activities appropriation that were transferred into the military services’ accounts. Next, to estimate O&M base obligations, we subtracted O&M OCO obligations and obligations associated with Environmental Restoration and Drug Interdiction and Counterdrug Activities transfer amounts from total O&M obligations. We grouped the unclassified subactivity groups from the military services’ O&M budget justification materials into 11 broad categories of similar activities used in our prior sequestration work based on the activities and functions of each subactivity group (see table 1). To ensure that the budget categories and the placement of subactivity groups therein were valid, we shared our updated approach with officials from OUSD Comptroller, who did not make any suggested revisions. We then compared the congressional designations with estimated O&M base obligations for the 11 aggregate categories of similar subactivity groups within the military services’ accounts to determine the amount and percentage of any difference. We identified consistent patterns of difference, defined as at least three consecutive years with a difference in the same direction. 
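The test for a consistent pattern of difference, at least three consecutive years differing in the same direction, could be expressed as a small check like the sketch below. The function name and the sample year-over-year differences are hypothetical, not figures from the analysis:

```python
# Sketch of the consecutive-difference check: a "consistent pattern" is
# at least min_run consecutive, nonzero differences with the same sign
# (e.g., obligations exceeding designations three fiscal years in a row).
# Function name and sample values are hypothetical.

def has_consistent_pattern(diffs, min_run=3):
    """True if diffs contains >= min_run consecutive same-sign, nonzero values."""
    run = 0
    prev_sign = 0
    for d in diffs:
        sign = (d > 0) - (d < 0)  # +1, -1, or 0
        if sign != 0 and sign == prev_sign:
            run += 1
        else:
            run = 1 if sign != 0 else 0
        prev_sign = sign
        if run >= min_run:
            return True
    return False

# Hypothetical differences (billions): four straight positive years, then one negative
print(has_consistent_pattern([2.1, 0.8, 1.4, 3.0, -0.5]))  # True
print(has_consistent_pattern([1.0, -1.0, 1.0, -1.0]))      # False
```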
All amounts are in nominal dollars, and we determined that the data were sufficiently reliable for the purposes of this audit with attribution to DOD and the source. Lastly, to evaluate the extent to which DOD reported O&M base and OCO obligations to Congress, we reviewed the data presented in DOD’s budget justification materials and execution reports to identify the type of information available. We also reviewed DOD’s Financial Management Regulations and congressional committee report language and interviewed OUSD Comptroller officials to obtain information on the organization of the budget justification materials. We reviewed this information in light of federal internal control and accounting standards that outline how information should be recorded and communicated to management and others. We conducted this performance audit from August 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tina Won Sherman (Assistant Director), Tim Carr, Susan C. Langley, Amie Lesser, Felicia M. Lopez, Kristiana D. Moore, Steve Pruitt, Richard Powelson, and Michael D. Silver made key contributions to this report.
O&M is DOD's largest category of appropriations and constitutes about 43 percent of the President's total request for DOD of $582.7 billion in fiscal year 2017. The President requested $251 billion for DOD's total O&M funding, which included approximately $206 billion for O&M base and $45 billion for O&M OCO. Senate Report 114-49 included a provision for GAO to review the effects of budgetary constraints on DOD's base funding within its O&M appropriations accounts. This report (1) identifies the trends in enacted funding for DOD's O&M base appropriations accounts since fiscal year 2009; (2) describes how much O&M funding DOD has transferred or reprogrammed, and the effect of this realignment on base obligations; and (3) evaluates the extent to which DOD reported to Congress its O&M obligations for its base and OCO budgets. GAO analyzed DOD's O&M budget justification materials and execution reports since 2009 and interviewed DOD officials. Congress enacted funding for the Department of Defense's (DOD) Operation and Maintenance (O&M) into multiple base appropriations accounts, which are used to pay for day-to-day programs and activities. This enacted funding generally has increased each year since fiscal year 2009, with the exception of fiscal year 2013, when sequestration reduced funding for O&M base. GAO found that DOD used its authorities to realign about $146.9 billion of its funding from fiscal years 2009 through 2015 (that is, moving funds through transfers from one account to another, and reprogrammings within an account). During GAO's review, the effects of such realignments on base obligations were not readily apparent because DOD did not report its O&M base obligations to Congress separately from its O&M overseas contingency operations (OCO) obligations used to support war-related programs and activities. 
GAO estimated O&M base obligations since fiscal year 2009 and found that DOD's realignment of funds led to its O&M base obligations exceeding O&M base enacted amounts in each fiscal year and by an annual average of 5.6 percent (see figure). DOD reported to Congress a combination of O&M base and O&M OCO obligations in its budget justification materials and execution reports, but it did not separately report its O&M base obligations by account for each of its multiple O&M base appropriations. These materials and reports are key documents that help Congress appropriate, conduct oversight of, and provide control over funds. In its report accompanying a bill for DOD's fiscal year 2015 appropriations, the Senate Appropriations Committee expressed concern that it does not have a clear understanding of OCO funding used to support DOD's day-to-day programs and activities. The services track O&M obligations by base and OCO appropriations for OCO reporting purposes, but DOD's financial management regulations do not require it to report O&M base obligations to Congress separately for each account in its budget justification materials and execution reports. By revising its guidance to require congressional reporting on O&M base obligations for each account in these materials and reports, DOD could provide complete information to assist Congress in better understanding and overseeing DOD's full funding needs for O&M base. To assist Congress in its oversight of the O&M budget, GAO recommends that DOD revise its guidance on preparing budget materials and execution reports to require the addition of O&M base obligations for each account. DOD did not concur, citing the inability of its current financial systems to easily distinguish base obligations. GAO believes the recommendation is valid as discussed in the report.
The number of people age 65 and older will nearly double in the U.S. by the year 2030 to 71 million. Over time, some elderly adults become physically or mentally incapable of making or communicating important decisions, such as those required to handle finances or secure their possessions. While some incapacitated adults may have family members who can informally assume responsibility for their decision-making, many elderly incapacitated people do not. In situations such as these, additional measures may be necessary to ensure that incapacitated people are protected from abuse and neglect. Several arrangements can be made to protect the elderly or others who may become incapacitated. A person may prepare a living will, write advance health care directives, appoint someone to assume durable power of attorney, or establish a trust. However, such arrangements may not provide sufficient protection. For example, some federal agencies do not recognize durable powers of attorney for managing federal benefits. SSA will assign a representative payee for an incapacitated person if it concludes that the interest of the incapacitated beneficiary would be served, whether or not the person has granted someone else power of attorney. In addition, many states have surrogacy healthcare decision-making laws, but these alternatives do not cover all cases. Additional measures may be needed to designate legal authority for someone to make decisions on the incapacitated person’s behalf. To provide further protection for both elderly and non-elderly incapacitated adults, state and local courts appoint guardians to oversee their personal welfare, their financial well-being, or both. The appointment of a guardian typically means that the person loses basic rights, such as the right to vote, sign contracts, buy or sell real estate, marry or divorce, or make decisions about medical procedures. 
If an incapacitated person becomes capable again, by recovering from a stroke, for example, he or she cannot dismiss the guardian but, rather, must go back to court and petition to have the guardianship terminated. The federal government does not regulate or provide any direct support for guardianships, but courts may decide that the appointment of a guardian is not necessary if a federal agency has already assigned a representative payee—a person or organization designated to handle federal benefits payments on behalf of an incapacitated person. Representative payees are entirely independent of court supervision unless they also serve their beneficiary as a court-appointed guardian. Guardians are supervised by state and local courts and may be removed for failing to fulfill their responsibilities. Representative payees are supervised by federal agencies, although each federal agency with representative payees has different forms and procedures for monitoring them. Each state provides its own process for initiating and evaluating petitions for guardianship appointment. Generally, state laws require filing a petition with the court and providing notice to the alleged incapacitated person and other people with a connection to that person. In many cases, both courts and federal agencies have responsibilities for protecting incapacitated elderly people. For federal agencies, a state court determination that someone is incapacitated or reports from physicians often provide evidence of a beneficiary’s incapacity, but agency procedures also allow statements from lay people to serve as a sufficient basis for determining that a beneficiary needs someone to handle benefit payments on their behalf—a representative payee. SSA, OPM, and VA ask whether the alleged incapacitated person has been appointed a guardian and often appoint that person or organization as the representative payee. 
In some cases, however, the agencies choose to select someone other than the court-appointed guardian. In many cases, guardians are appointed with a full range of responsibilities for making decisions about the incapacitated person’s health and well-being as well as their finances, but several states’ laws require the court to limit the powers granted to the guardian, if possible. The court may appoint a “guardian of the estate” to make decisions regarding the incapacitated person’s finances or a “guardian of the person” to make nonfinancial decisions. An incapacitated person with little income other than benefits from SSA, for example, might not need a “guardian of the estate” if he or she already has a representative payee designated by SSA to act on their behalf in managing benefit payments. Sometimes the guardian is paid for their services from the assets or income of the incapacitated person, or from public sources if the incapacitated person is unable to pay. In some cases, the representative payee is paid from the incapacitated person’s benefit payments. Guardians and representative payees do not always act in the best interest of the people they are appointed to protect. Some have conflicts of interest that pose risks to incapacitated people. While many people appointed as guardians or representative payees serve compassionately, often without any compensation, some will act in their own interest rather than in the interest of the incapacitated person. Oversight of both guardians and representative payees is intended to prevent abuse by the people designated to protect the incapacitated people. While the incidence of elder abuse involving persons assigned a guardian or representative payee is unknown, certain cases have received widespread attention. Our 2004 report noted that some state laws and some courts provide more protection for incapacitated elderly people than others. 
State laws have varied requirements for monitoring guardianships, and court practices in the states we visited also varied widely. Coordination among federal agencies and courts was quite limited and occurred on a case-by-case basis. Since our report was issued, some states have strengthened their guardianship programs and some efforts have been made to lay the groundwork for better collaboration. However, there continues to be little coordination between state courts and federal agencies in the area of guardianships. In our 2004 review we determined that all 50 states and the District of Columbia have laws requiring courts to oversee guardianships. At a minimum, most states’ laws require guardians to submit a periodic report to the court, usually at least once annually, regarding the well-being of the incapacitated person. Many states’ statutes also authorize measures that courts can use to enforce guardianship responsibilities. However, court procedures for implementing guardianship laws appear to vary considerably. For example, most courts in each of the three states responding to our survey require guardians to submit time and expense records to support petitions for compensation, but each state also has courts that do not require these reports. We also found that some states are reluctant to recognize guardianships originating in other states. Few have adopted procedures for accepting transfer of guardianship from another state or recognizing some or all of the powers of a guardian appointed in another state. This complicates life for an incapacitated elderly person who needs to move from one state to another or when a guardian needs to transact business on his or her behalf in another state. In addition, guardianship data are scarce. Most courts we surveyed did not track the number of active guardianships, let alone maintain data on abuse by guardians. 
Although this basic information is needed for effective oversight, no more than one-third of the responding courts tracked the number of active guardianships, and only a few could provide the number that were for elderly people specifically. Since issuance of our report, several states have passed new legislation amending their guardianship laws. During 2004, for example, 14 states amended their laws related to guardianships, and in 2005 at least 15 states did so, according to the American Bar Association’s annual compilations. Alaska, for example, established requirements for the licensing of private professional guardians and, in January of this year, New Jersey began requiring the registration of professional guardians. Acting on legislation in 2004, the California court system established an education requirement for guardians and a 15-hour-per-year continuing education requirement for private professional guardians. In 2004 Hawaii adopted legislation requiring that guardians provide the court annual accountings. Wisconsin also adopted a major revision of its guardianship code this year; it establishes a new requirement that the guardian regularly visit the incapacitated person to assess their condition and the treatment they are receiving. The new law also leaves in effect powers of attorney previously granted by the incapacitated person unless the court finds good cause to revoke them, and establishes procedures for recognition of guardianships originating in other states. Several states’ guardianship law amendments established or strengthened public guardian programs, including those in Texas, Georgia, Idaho, Iowa, Virginia, Nevada, and New Jersey. In Georgia and New Jersey, for example, public guardians must now be registered. 
Public guardians are public officials or publicly funded organizations that serve as guardians for incapacitated people who do not have family members or friends to be their guardian and cannot afford to pay for the services of a private guardian. In our 2004 report, several courts were identified as having “exemplary” programs. As we conducted our review, we sought particular courts that those in the guardianship community considered to have exemplary practices. Each of the four courts so identified distinguished itself by going well beyond minimum state requirements for guardianship training and oversight. For example, the court we visited in Florida provides comprehensive reference materials for guardians to supplement training. With regard to active oversight, the court in New Hampshire recruits volunteers, primarily retired senior citizens, to visit incapacitated people, their guardians, and care providers at least annually, and submit a report of their findings to court officials. Exemplary courts in Florida and California also have permanent staff to investigate allegations of fraud, abuse, or exploitation. The policies and practices associated with these courts may serve as models for those seeking to assure that guardianship programs serve the elderly well. We recently contacted officials in each of these courts and received responses from two of them. We learned that officials in these two courts have worked to help strengthen statewide guardianship programs. For example, court officials in Fort Worth, Texas, have helped encourage adoption of Texas’ recent reform legislation. However, we could not determine whether other courts had adopted these courts’ practices. There is also a role for the federal government in the protection of incapacitated people. Federal agencies administering benefit programs appoint representative payees for individuals who become incapable of handling their own benefits. 
The federal government does not regulate or provide any direct support for guardianships, but state courts may decide that the appointment of a guardian is not necessary if a representative payee has already been assigned. In our study, we found that although courts and federal agencies are responsible for protecting many of the same incapacitated elderly people, they generally work together only on a case-by-case basis. With few exceptions, courts and federal agencies don’t systematically notify other courts or agencies when they identify someone who is incapacitated, nor do they notify them if they discover that a guardian or a representative payee is abusing the person. This lack of coordination may leave incapacitated people without the protection of responsible guardians and representative payees or, worse, with an identified abuser in charge of their benefit payments. Since issuance of our report, we have not found any indication that coordination among the federal agencies or between federal agencies and the state courts has changed. SSA did, however, contract with the National Academies for a study of its representative payee program. The study committee issued a letter report including preliminary observations in 2005, and a final report is scheduled for release in May 2007. The committee plans to use a nationally representative survey of representative payees and the beneficiaries they serve in order to (1) assess the extent to which the representative payees are performing their duties in accordance with standards; (2) learn whether representative payment policies are practical and appropriate; (3) identify types of representative payees that have the highest risk of misuse of benefits; and (4) suggest ways to reduce the risk of misuse of benefits and ways to better protect beneficiaries. Only limited progress has been made on our recommendations. 
In one recommendation we suggested that SSA convene an interagency study group to increase the ability of representative payee programs to protect federal benefit payments from misuse. Although VA, HHS, and OPM indicated their willingness to participate in such a study group, SSA disagreed with this recommendation. SSA stated that its responsibility focuses on protecting SSA benefits, cited concern about the difficulty of interagency data sharing and Privacy Act restrictions, and indicated that leadership of the study group would not be within its purview. We checked with SSA recently and learned that its position has not changed. Coordination among federal agencies and between federal agencies and state courts remains essentially unchanged, according to agency and court officials we spoke with. SSA continues to provide limited information to the VA in cases where issues such as evidence of incapability or misuse of benefits arise. However, to ensure that no overpayment of VA benefits occurs, SSA will provide appropriate VA officials with requested information as to the amount of Social Security benefit savings reported by the representative payee. In 2004, we also recommended that HHS work with national organizations involved in guardianship programs to provide support and leadership to the states for cost-effective pilot and demonstration projects to facilitate state efforts to improve oversight of guardianships and to aid guardians in the fulfillment of their responsibilities. Specifically, we recommended that HHS support the development of cost-effective approaches for compiling consistent national data concerning guardianships. HHS made a step in this direction by supporting a study by the American Bar Association Commission on Law and Aging of the guardianship data practices in each state, which could prove helpful in efforts to move toward more consistent and comprehensive data on guardianships. 
The study found that although several states collect at least some basic data on guardianships, most still do not. Only about a third of states receive trial court reports on the number of guardianship filings. A total of 33 states responded to a question about whether they were interested in compiling data. Of these, 21 expressed interest and 12 indicated that they are not interested, as the barriers are too high. Thus, it is still not possible to determine how many people in the U.S. of any age are assigned guardians each year, let alone the number of elderly people who are currently under such protection. Third, we recommended that HHS support the study of options for compiling data from federal and state agencies concerning the incidence of elder abuse in cases in which the victim had granted someone the durable power of attorney or had been assigned a fiduciary, such as a guardian or representative payee, as well as cases in which the victim did not have a fiduciary. HHS has taken a step in this direction by supporting the inclusion of questions about guardians in the National Center on Elder Abuse’s annual survey of state adult protective services agencies. Specifically, the survey asked each state about cases in which a guardian was the source of a report of abuse or was the alleged perpetrator in state fiscal year 2003. Only 11 states provided information about the source of reports of abuse. Similarly, 11 states indicated the relationship between the victims and the alleged perpetrators. Guardians were not often cited in either case. Indeed, a recent study found that existing data cannot provide a clear picture of the incidence and prevalence of elder abuse. Finally, we also recommended that HHS facilitate a review of state policies and procedures concerning interstate transfer and recognition of guardianship appointments to facilitate efficient and cost-effective solutions for interstate jurisdictional issues. 
The National Conference of Commissioners on Uniform State Laws (NCCUSL) met in July 2006 and issued a discussion draft for a Uniform Adult Guardianship and Protective Proceedings Jurisdiction Act. This draft contains provisions that would allow guardianships to be formally recognized by another state or transferred to another state. The draft is being refined, and a NCCUSL committee plans to discuss it at another meeting this November. Passage of this draft by the NCCUSL does not, however, guarantee that states will follow its provisions because they must decide on their own whether to amend their own laws. While little progress has been made on several of our specific recommendations, other steps taken since the release of our report are more promising. In November of 2004, a joint conference of the National Academy of Elder Law Attorneys, the National Guardianship Association and the National College of Probate Judges convened a special session to develop an action plan on guardianships. This implementation session developed a series of 45 action steps that could be taken at the national, state, and local levels in order to accomplish a select subset of the recommendations made at the 2001 Second National Guardianship Conference--the “Wingspan Conference.” These action steps fall into five main categories: the development of interdisciplinary guardianship committees at the national, state, and local levels; the development of uniform jurisdiction procedures, uniform data collection systems, and innovative funding mechanisms for guardianships; the enhancement of training and certification for guardians and the encouragement of judicial specialization in guardianship matters; the encouragement of the most appropriate and least restrictive types of guardianships; and the establishment of effective monitoring of guardianships. 
The identification of these action steps and the work that has begun on them reflects a high level of commitment by the professionals working in the field. In some cases work has begun on these action steps. Both the House and the Senate versions of bills calling for an Elder Justice Act would establish an Advisory Board on Elder Abuse, Neglect, and Exploitation charged with making several recommendations including some concerning the development of state interdisciplinary guardianship committees. As noted earlier, the Commission on Uniform State Law has issued a discussion draft of a Uniform Adult Guardianship and Protective Proceedings Jurisdiction Act. Wisconsin’s adoption of a reformed guardianship law this year emphasizes the use of the least restrictive type of guardianship that is appropriate. Regarding the monitoring of guardianships, recently Texas and New Jersey joined several states that now have programs in place to license, certify, or register professional guardians. In 2005, Colorado began requiring prospective guardians (with some exceptions such as parents who are seeking to be guardians for their children) to undergo criminal background checks. In conclusion, as the number of elderly Americans grows dramatically, the need for guardianship arrangements seems likely to rise in response, and ensuring that such arrangements are safe and effective will become increasingly important. Progress on fulfilling some of our recommendations has been slow where it has occurred, and for some, no steps have been taken at all. The lack of leadership from a federal agency, and states’ differing approaches to guardianship matters, make it difficult to realize quick improvements. Nonetheless, many people actively involved in guardianship issues continue to look for ways to make improvements. 
Emulating exemplary programs such as the four we examined would surely help, but we believe more can also be done to better coordinate across states, federal agencies, and courts. In our 2004 report we concluded that the prospect of increasing numbers of incapacitated elderly people in the years ahead signals the need to reassess the way in which state and local courts and federal agencies work together in efforts to protect incapacitated elderly people. Your Committee has played an important role in bringing these problems to light and continuing to seek improvements. In the absence of more federal leadership, however, progress is likely to continue to be slow, particularly in the coordination among federal agencies and between federal agencies and state courts. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I’d be happy to answer any questions you may have. Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues at (202) 512-7215. Alicia Puente Cackley, Assistant Director; Benjamin P. Pfeiffer; Scott R. Heacock; Mary E. Robison; and Daniel A. Schwimer also contributed to this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Senate Special Committee on Aging asked GAO to follow up on its 2004 report, Guardianships: Collaboration Needed to Protect Incapacitated Elderly People, GAO-04-655. This report covered what state courts do to ensure that guardians fulfill their responsibilities, what exemplary guardianship programs look like, and how state courts and federal agencies work together to protect incapacitated elderly people. For this testimony, GAO agreed to (1) provide an overview and update of the findings of this prior work; (2) discuss the status of a series of recommendations GAO made in that report; and (3) discuss the prospects for progress in efforts to strengthen protections for incapacitated elderly people through guardianships. To complete this work, GAO interviewed lawyers and agency officials who have been actively involved in guardianship and representative payee programs, and spoke with officials at some of the courts identified as exemplary in the report. GAO's 2004 report had three principal findings. First, all states have laws requiring courts to oversee guardianships, but court implementation of these laws varies. Second, those courts recognized as exemplary in the area of guardianships focused on training and monitoring. Third, there is little coordination between state courts and federal agencies or among federal agencies regarding guardianships. At present, these findings remain largely the same, but there are some new developments to report. Since GAO's report was issued, some states have strengthened their guardianship programs. For example, Alaska established requirements for licensing private guardians, and New Jersey and Texas established requirements for registering professional guardians. However, there continues to be little coordination between state courts and federal agencies or among federal agencies in the protection of incapacitated people. 
GAO's report made recommendations to federal agencies, but to date little progress has been made. First, GAO recommended that SSA convene an interagency study group to increase the ability of representative payee programs to protect federal benefit payments from misuse. Although VA, HHS, and OPM indicated their willingness to participate in such a study group, SSA disagreed with this recommendation, and its position has not changed. Second, GAO recommended that HHS work with national organizations involved in guardianship programs to provide support and leadership to the states for cost-effective pilot and demonstration projects to facilitate state efforts to improve oversight of guardianships and to aid guardians in the fulfillment of their responsibilities. HHS did support a study that surveyed the status of states' guardianship data collection practices. HHS also supported a National Center on Elder Abuse survey of adult protective services agencies to collect information, including the extent to which guardians are the alleged perpetrators or the sources of reports about elder abuse. Third, GAO recommended a review of state policies and procedures concerning interstate transfer and recognition of guardianship appointments. At its meeting in July of this year, the National Conference of Commissioners on Uniform State Laws issued a discussion draft for a uniform state law addressing these issues. Following issuance of GAO's 2004 report, a joint conference of professional guardianship organizations agreed on a set of action steps to implement previously released recommendations from a group of experts on adult guardianship, known as the Wingspan recommendations. Among other things, these action steps call for licensing, certifying, or registering professional guardians.
A majority of the nation’s wastewater is treated by publicly owned treatment works that serve a variety of customers, including private homes, businesses, hospitals, and industry. These publicly owned treatment works are regulated by the Clean Water Act. Wastewater treatment includes a collection system (the underground network of sewers) and a treatment facility. Wastewater enters the treatment facility through the collection system, where it undergoes an initial stage called primary treatment, during which screens remove coarse solids, and grit chambers and sedimentation tanks allow solids to gradually sink. Next, wastewater enters secondary treatment, where bacteria consume most of the organic matter in the wastewater. After these processes, wastewater is disinfected to eliminate remaining pathogens and other harmful microorganisms. Wastewater facilities typically use chemical or physical disinfection methods, including the following: Chlorine gas. Injecting chlorine gas into a waste stream has been the traditional method of disinfecting wastewater. Chlorine gas is a powerful oxidizing agent, is relatively inexpensive, and can be stored for an extended period of time as a liquefied gas under high pressure. Also, the residual chlorine that remains in the wastewater effluent can prolong disinfection after initial treatment. However, chlorine gas is extremely volatile and hazardous, and it requires specific precautions for its safe transport, storage, and use. Because it is stored and transported as a liquefied gas under pressure, chlorine, if accidentally released, can quickly vaporize into a potentially lethal gas. EPA requires, among other things, that any facility storing at least 2,500 pounds of chlorine gas prepare a risk management plan that lays out accident prevention and emergency response activities. 
At certain concentrations, the residual chlorine that remains in wastewater effluent is toxic to aquatic life, so wastewater facilities that use chlorine compounds may also need to dechlorinate the treatment stream before discharging it to receiving waters. Chlorine can also oxidize certain types of organic matter in wastewater, creating hazardous chemical byproducts, such as trihalomethanes. Our March 2006 report found that many large wastewater facilities have discontinued, or are planning to discontinue, using chlorine gas as a disinfectant in favor of alternative disinfection methods such as sodium hypochlorite delivered in bulk to the facility. Of the 206 large wastewater facilities responding to our survey, only 85 facilities indicated they currently use chlorine gas, and 20 of these facilities plan to switch from the gas to another disinfectant. Sodium hypochlorite. Injecting sodium hypochlorite—essentially a concentrated form of household bleach—into a waste stream is another chlorination method of disinfecting wastewater. Sodium hypochlorite is safer than chlorine gas because, if spilled, it remains liquid and can be contained and recovered. For this reason, it is not subject to EPA’s risk management planning requirements. However, sodium hypochlorite is more expensive than chlorine gas, and it degrades quickly if it is exposed to sunlight or is not kept at proper temperatures. Because of this instability, properly storing delivered sodium hypochlorite in the concentration necessary to disinfect wastewater may require an on-site building with environmental controls. Sodium hypochlorite can also be generated on-site at a wastewater facility using an “electrochlorination system” that produces sodium hypochlorite through an electrical reaction with high-purity salt and softened water. Facilities choosing this method of disinfection reduce chemical costs, but face increased electrical costs from the generation equipment. 
Because sodium hypochlorite is a chlorine compound, wastewater facilities using it must also be concerned with residual chlorine and hazardous chemical byproducts, such as trihalomethanes. Ultraviolet light. This disinfection method uses ultraviolet lamps to break down disease-causing microorganisms in wastewater. Wastewater passes through an open channel with lamps submerged below the water level. The lamps transfer electromagnetic energy to an organism’s genetic material, destroying the ability of its cells to reproduce. Because ultraviolet disinfection is a physical process rather than a chemical one, it eliminates the need to generate, handle, transport, or store hazardous and corrosive chemicals. In addition, there are no harmful residual effects to humans or aquatic life. However, ultraviolet light disinfection may not be effective given the turbidity of some wastewater streams. Wastewater facilities using ultraviolet light instead of chlorine gas or delivered sodium hypochlorite for disinfection will face additional costs to maintain lamps, as well as increased electrical costs. Ozone. This disinfection method feeds ozone, generated on-site by exposing oxygen to a high-voltage current, into a contact chamber containing wastewater. According to EPA, ozone is very effective at destroying viruses and bacteria, but it is the least used disinfection method in the United States, largely because of its high capital and maintenance costs compared to available alternatives. According to EPA, vulnerability assessments help water systems evaluate susceptibility to potential threats such as vandalism or terrorism and identify corrective actions that can reduce or mitigate the risk of serious consequences. The Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (the Bioterrorism Act) required drinking water utilities serving populations greater than 3,300 to complete vulnerability assessments by June 2004. 
Wastewater facilities are not required by law to complete vulnerability assessments. Congress has considered bills that would have encouraged or required wastewater treatment plants to assess vulnerabilities, but no such requirement has become law. In our March 2006 report on wastewater facility security efforts, we found that many large wastewater facilities have either completed a vulnerability assessment or had one underway. Of the 206 large wastewater facilities that responded to our survey, 106 facilities—or 51 percent—reported that they had completed a vulnerability assessment or were currently conducting one. Several other facilities indicated they had conducted or planned to conduct other types of security assessments. Facilities cited several reasons for completing a vulnerability assessment or some other type of security assessment, but most—roughly 77 percent—reported doing so on their own initiative. Many facilities indicated they were combined systems, that is, facilities that manage both drinking water and wastewater treatment; 37 percent of facilities reported that they did some type of security assessment in conjunction with the required assessment for their drinking water facility. The Clean Air Act requires wastewater facilities that use or store more than 2,500 pounds of chlorine gas to submit to EPA a risk management plan that lays out accident prevention and emergency response activities. Under this act, EPA requires that about 15,000 facilities—including chemical, water, energy, and other sector facilities—that produce, use, or store more than threshold amounts of chemicals posing the greatest risk to human health and the environment take a number of steps to prevent and prepare for an accidental chemical release. 
EPA regulations implementing the Clean Air Act require that the owners and operators of chemical facilities include a facility hazard assessment, an accident prevention program, and an emergency response program as part of their risk management plans. The regulations required that a summary of each facility’s risk management plan be submitted to EPA by June 21, 1999. The plans are to be revised and resubmitted to EPA at least every 5 years, and EPA is to review them and require revisions, if necessary. Although accurate information on the costs of vulnerability assessments and risk management plans is limited, available estimates suggest that their costs vary considerably. A factor contributing to the cost differential was whether these documents were contracted to third parties (such as engineering consulting firms) or prepared in-house with existing staff. Despite higher costs, some facilities preferred using contractors because their expertise and independence lent credibility to their assessments, which may be useful in obtaining support for security-related upgrades. Costs generally did not relate to facility size, as measured by millions of gallons of wastewater treated per day. The reported cost of preparing vulnerability assessments at the 20 large wastewater facilities where we interviewed officials ranged from $1,000 to $175,000. Whether the assessment was done in-house with existing staff or contracted to a third party was a factor contributing to the cost differences. Officials from several facilities told us they used contractors to complete vulnerability assessments in 2002. For example, staff at the Denver Metro Wastewater Reclamation District reported that a contractor completed a vulnerability assessment in November 2002 for its Central Treatment Plant, which treats 130 million gallons of wastewater per day, at an estimated cost of $175,000. Of this cost, $100,000 was for the contractor, and $75,000 was estimated for in-house staff time. 
Other large wastewater facilities that reported completing vulnerability assessments in 2002 were part of combined systems that provide both drinking water and wastewater services. These systemwide vulnerability assessments were done before the 2002 Bioterrorism Act required drinking water utilities serving populations greater than 3,300 to complete vulnerability assessments by June 2004. The combined systems that conducted systemwide vulnerability assessments include the following: San Antonio Water System (San Antonio, Texas). According to system staff, a contractor completed a systemwide vulnerability assessment for all its drinking water, wastewater, and related infrastructure in August 2002 for $112,000. Staff did not provide an estimate of in-house costs related to the assessment, but prorated the wastewater treatment plants’ costs related to this contract at $37,000: $25,000 for its Dos Rios plant, which treats 70 million gallons per day; $5,000 each for its Leon Creek and Salado Creek plants, which treat 33 million gallons per day; and $2,000 for its Medio Creek plant, which treats 5 million gallons per day. The Phoenix Water Services Department (Phoenix, Arizona). According to department staff, a contractor completed a systemwide vulnerability assessment for its five drinking water plants, three wastewater plants, and related infrastructure in November 2002 for $479,725. Staff did not provide an estimate of in-house costs related to the assessment, but estimated the contract costs related to its largest wastewater treatment plant, the 91st Avenue Sewage Treatment Plant, which treats 140 million gallons per day, to be $100,000. Fort Worth Water Department (Fort Worth, Texas). According to department staff, a contractor completed a systemwide vulnerability assessment for its four drinking water plants and one wastewater treatment plant in December 2002 at a cost of $292,300. 
Staff did not provide an estimate of in-house cost related to the assessment, but estimated the contract costs related to its Village Creek Wastewater Treatment Plant, which treats 96 million gallons per day, at $73,075. Wastewater facility managers cited several reasons for using contractors to complete vulnerability assessments. Staff with the Phoenix Water Services Department told us they used contractors for their vulnerability assessment because a citywide policy required that contract services be used whenever possible. Staff at other wastewater facilities told us that, despite the higher costs, they preferred to use contractors because of their expertise. According to a wastewater security official, contractor expertise and independence can give contractor findings and recommendations greater credibility with utility governing boards that determine spending priorities. One manager told us that he used a contractor for a 2002 vulnerability assessment because risk management software and tools were not yet available. After the events of September 11, 2001, EPA provided funding to the Association of Metropolitan Sewerage Agencies to develop software, called the Vulnerability Self Assessment Tool (VSAT), for water utilities to use to develop vulnerability assessments. According to a Water Environment Federation (WEF) official, VSAT became available in June 2002. This official also said that EPA provided funding to WEF to provide training workshops to wastewater utilities on how to use VSAT to conduct vulnerability assessments beginning October 2002. According to interviews with wastewater facility managers, large wastewater facilities that prepared vulnerability assessments in-house with existing staff reported lower costs for preparing the document. These include the following: City of Ventura Public Works Department (Ventura, California). 
According to facility staff, in-house staff completed a vulnerability assessment in March 2003 for the Ventura Water Reclamation Facility, which treats 9 million gallons per day, at a cost of roughly $1,000 in staff time. Facility staff participated in VSAT training sponsored by EPA and completed the assessment using this tool. City of Fort Wayne Utilities Division (Fort Wayne, Indiana). According to facility staff, in-house staff completed a vulnerability assessment in November 2005 for the Fort Wayne Water Pollution Control Plant, which treats 43 million gallons per day, at an undetermined cost in staff time. Facility staff participated in VSAT training and updated a previous risk assessment prepared for the facility by a contractor in 2000 at a contracted cost of $10,000. City of Eugene Wastewater Division (Eugene, Oregon). According to facility staff, in-house staff completed a vulnerability assessment in October 2005 for the Eugene/Springfield Regional Water Pollution Control Facility, which treats 38 million gallons per day, for about $2,000 in staff time. City of Cedar Rapids Department of Water Pollution Control (Cedar Rapids, Iowa). According to facility staff, in-house staff completed a vulnerability assessment in January 2007 for the Cedar Rapids Wastewater Treatment Plant, which treats 35 million gallons per day, for about $5,000 in staff time. Detroit Water and Sewerage Department (Detroit, Michigan). According to department staff, in-house staff completed a vulnerability assessment in January 2005 for the Detroit Wastewater Treatment Plant, which treats 700 million gallons per day, for about $20,000 in staff time. Costs to prepare risk management plans ranged from less than $1,000 for facilities that completed the plan in-house to over $31,000 for facilities that used contractors. Costs to update risk management plans were generally less, ranging from less than $1,000 to $20,000, depending upon whether facilities used in-house staff or contractors. 
Costs were generally higher at facilities that used contractors. These include the following: The Phoenix Water Services Department (Phoenix, Arizona). According to department staff, a contractor completed risk management plans for all the system’s drinking water and wastewater facilities in 1999 for $230,086. Costs for the 91st Avenue Sewage Treatment Plant were prorated at $28,761. Department staff said a contractor updated the 91st Avenue plant’s risk management plan in 2004 for $20,000. Fort Worth Water Department (Fort Worth, Texas). According to department staff, a contractor completed risk management plans for all of the department’s drinking water and wastewater facilities in 1999 for $124,718. Costs related to the Village Creek Wastewater Treatment Plant’s risk management plan were prorated at $31,100. Department staff reported that the contractor later updated these risk management plans for $18,040 in 2004, $4,510 of which was for the Village Creek plant. City of Fort Wayne Utilities Division (Fort Wayne, Indiana). According to facility staff, a contractor completed a risk management plan in 2001 for the Fort Wayne Water Pollution Control Plant for $16,000. Facility staff reported a contractor updated the plan in 2005 for $6,000. South Central Regional Wastewater Treatment and Disposal Board (Delray Beach, Florida). According to facility staff, a contractor completed a risk management plan in 1999 for the South Central Regional Wastewater Treatment and Disposal Plant, which treats 18 million gallons per day, for $10,000. Facility staff reported a contractor updated it in 2006 for $2,000. City of Portland Bureau of Environmental Services (Portland, Oregon). According to bureau staff, a contractor completed a risk management plan in 1999 for its Columbia Boulevard Wastewater Treatment Plant, which treats 143 million gallons per day, for $30,000. Bureau staff reported they updated the plan using in-house staff in 2004 for $10,000 in staff time. 
Other large wastewater facilities that prepared risk management plans in-house with existing staff reported lower costs for preparing the documents. These include the following: San Antonio Water System (San Antonio, Texas). According to system staff, in-house staff completed a risk management plan in 1999 for the Dos Rios Wastewater Treatment Plant for between $5,000 and $10,000 in staff time. In-house staff updated the plan in 2004 for less than $1,000 in staff time. City of Cedar Rapids Department of Water Pollution Control (Cedar Rapids, Iowa). According to facility staff, in-house staff completed a risk management plan in January 2000 for the Cedar Rapids Wastewater Treatment Plant for $5,000 in staff time. In-house staff updated the plan in 2004 for about $250 in staff time. According to district staff, in-house staff completed a risk management plan in 1999 for $10,000 in staff time. In-house staff updated the plan in 2006 for about $1,000 in staff time. City of Savannah Water and Sewer Bureau (Savannah, Georgia). According to facility staff, in-house staff completed a risk management plan in 1999 for the President Street Water Pollution Control Plant, which treats 17 million gallons per day, at a cost of only $150 in staff time. In-house staff updated the plan in 2006 for about $130 in staff time. Large wastewater facilities that convert from chlorine gas disinfection to alternative disinfection processes incur widely varying capital costs, which generally depend on the alternative treatment chosen and facility size. Other factors that affect capital costs include the characteristics of individual facilities, such as whether existing structures can be used, and local factors, such as building costs. Alternative disinfection processes may also pose higher annual operating costs than chlorine gas. However, these costs may be offset, at least somewhat, by savings in training and labor costs and by reduced regulatory burdens associated with the handling of chlorine gas. 
Some facilities even reported or projected net annual cost savings related to wastewater disinfection. The 23 large wastewater facilities that we interviewed reported capital costs for chlorine conversion ranging from $646,922 to just over $13 million. Table 1 identifies the 23 large wastewater facilities that recently converted or plan to convert from chlorine gas to another disinfection method and their reported and planned capital conversion costs. As shown in the table, 17 of the 23 facilities converted or plan to convert to sodium hypochlorite delivered in bulk to the facility. Officials with several of these facilities told us they considered ultraviolet disinfection, but chose delivered sodium hypochlorite because of its lower capital conversion costs. The remainder converted or plan to convert to sodium hypochlorite generated on-site or to ultraviolet light. None of the facilities we contacted adopted ozone. Interview responses indicate that several factors affect the cost of conversion; among these are the disinfection method chosen, facility size, key facility characteristics such as available buildings, and whether the conversion was permanent or temporary, as follows. Generally, conversion to delivered sodium hypochlorite has the lowest capital costs, followed by sodium hypochlorite generated on-site and then by ultraviolet light. This observation is supported by cost estimates in the Chlorine Gas Decision Tool, a software program released by DHS in March 2006. The decision tool was designed to provide water and wastewater utilities with the means to conduct assessments of alternatives to chlorine gas disinfection. DHS cautions that the final costs of the disinfection systems will depend on project design details, actual labor and material costs, competitive market conditions, actual site conditions, final project scope, implementation schedule, continuity of personnel and engineering, and other variable factors. 
With these caveats, the decision tool estimates that for a wastewater facility with an average disinfection flow of 10 million gallons per day and a peak disinfection flow of 20 million gallons per day, capital costs for conversion to delivered sodium hypochlorite would amount to $533,000, on-site generation of sodium hypochlorite would total $1,238,000, and ultraviolet disinfection would reach $1,526,000. Our interviews with wastewater facilities provide specific examples of conversion costs. For example, managers of the Chesapeake-Elizabeth Treatment Plant, which treats 21 million gallons per day and serves customers in Virginia Beach, Virginia, reported spending an estimated $1,225,000 in 2004 converting to bulk sodium hypochlorite disinfection. Managers of the comparably sized Western Branch Wastewater Treatment Plant, which treats 20 million gallons per day and serves customers in Laurel, Maryland, estimated that they will spend $4 million converting to ultraviolet light disinfection by January 1, 2008. Managers of the Western Branch plant indicated that one reason they chose the more expensive ultraviolet treatment option over bulk deliveries of sodium hypochlorite was to avoid the risk to local traffic that could result from additional deliveries to the plant. Plant managers indicated that because sodium hypochlorite degrades more quickly than chlorine gas, truck deliveries would increase under a disinfection system using sodium hypochlorite. They also noted that ultraviolet light disinfection would eliminate the need for the facility to handle and store significant amounts of hazardous and corrosive chemicals. In addition to disinfection method chosen, facility size can also influence capital conversion costs. In general, larger facilities spend more converting to alternative disinfection methods. 
For example, because larger facilities process a greater flow of wastewater, converting to delivered sodium hypochlorite would require a larger sodium hypochlorite storage building or buildings relative to a smaller facility. It may also require additional pumps, instrumentation, and piping to deliver treatment chemicals to a greater number of contact tanks. Importantly, the largest facilities also tend to serve high-cost urban areas, and their conversion costs reflect the higher costs for construction materials and contract labor in these markets. For example, the Blue Plains Wastewater Treatment Plant, which treats 307 million gallons per day and serves over 2 million customers in the Washington, D.C., metropolitan area, converted from chlorine gas to delivered sodium hypochlorite in 2003 at a cost of almost $13 million. According to facility managers, the facility temporarily converted from chlorine gas to delivered sodium hypochlorite in April 2002 at a cost of $500,000, primarily for storage tanks, pumps, piping, and related instrumentation. It completed the permanent conversion in October 2003 at an added cost of about $12.5 million, which included the purchase of additional storage tanks, related pumps, piping, and instrumentation, and the construction of storage facilities for sodium hypochlorite and sodium bisulfite (used for dechlorination). In addition to facility size, other physical characteristics related to individual facilities also play a large role in conversion costs. For instance, the availability of usable buildings on facility grounds will determine whether a facility needs to construct, expand, or update a building to properly house sodium hypochlorite and its associated metering equipment. In addition, the distance between the storage building and treatment tanks will determine the amount of piping needed to deliver stored sodium hypochlorite to the treatment tanks. 
An example comes from the Hampton Roads Sanitation District, which provides wastewater treatment to approximately 1.6 million people in 17 cities and counties in southeast Virginia, including the cities of Newport News, Norfolk, Suffolk, Virginia Beach, and Williamsburg. In 2004, the sanitation district converted from chlorine gas to bulk sodium hypochlorite disinfection at two of its plants—the Nansemond Treatment Plant, which treats 17 million gallons per day for the city of Suffolk, and the previously mentioned Chesapeake-Elizabeth plant, which treats 21 million gallons per day. The Nansemond plant conversion cost an estimated $1.65 million, while the slightly larger Chesapeake-Elizabeth plant conversion cost about $1.2 million. Costs were higher at the Nansemond plant because a building needed to be constructed for sodium hypochlorite storage, while the Chesapeake-Elizabeth plant had an existing building that only needed to be upgraded to properly store the chemical. Federal discharge permit requirements related to individual treatment facilities can also influence conversion costs. Certain wastewater facilities may be allowed higher chlorine residuals in treated effluent because they discharge into less sensitive waters. Often, these facilities do not have to dechlorinate wastewater, saving the facility the cost of dechlorination chemicals, equipment, and storage. For example, the Philadelphia-area Southeast and Northeast Wastewater Treatment Plants, which treat 90 and 190 million gallons per day, respectively, need only chlorinate water prior to discharging into the Delaware River. Both plants were converted to delivered sodium hypochlorite—the Southeast plant in 2006 at an estimated cost of $1.9 million and the Northeast plant in 2003 at an estimated cost of $2.6 million. 
In contrast, the Baltimore-area Back River Wastewater Treatment Plant, which treats 150 million gallons per day and discharges into the ecologically sensitive Chesapeake Bay, must chlorinate and dechlorinate its wastewater before discharge. This facility converted to delivered sodium hypochlorite in 2004 at a reported cost of $3.3 million. Finally, some facilities have reduced conversion costs in the short term through temporary conversions. For example, the Metropolitan Sewer District of Greater Cincinnati decided to convert its Mill Creek Wastewater Treatment Plant, which treats 120 million gallons per day, from chlorine gas to sodium hypochlorite disinfection soon after September 11, 2001. According to the plant manager, by mid-October 2001, the facility had begun disinfecting with sodium hypochlorite by hooking up a rented sodium hypochlorite trailer to its disinfection system at a cost of $25,000. By May 2002, the facility had completed an interim conversion to sodium hypochlorite by purchasing and installing two 8,000 gallon outdoor storage tanks for sodium hypochlorite at a cost of $60,000. According to the plant manager, this interim disinfection system is still in use today, though the plant intends to permanently convert to delivered sodium hypochlorite in 2008 or 2009 at an estimated cost of $3 million. The plant manager said the permanent conversion would include an unloading station for sodium hypochlorite deliveries and a new storage building for the chemical and related instrumentation. The plant manager said the new storage building was needed to reduce the decay of stored sodium hypochlorite. The plant manager added that the storage building and additional piping would improve plant safety because it would allow for central storage and delivery of sodium hypochlorite. 
Currently, sodium hypochlorite deliveries are made at several plant locations for odor control, which, according to the plant manager, increases the odds the chemical may be mishandled and accidentally mixed with other reactant chemicals used at the plant, such as ammonia. Similarly, the Eastern Water Reclamation Facility, which treats 16 million gallons per day and provides service to Orange County, Florida, converted from chlorine gas to sodium hypochlorite disinfection at a cost of $60,000 in November 2001 through the addition of outdoor storage tanks and related pumps. According to the plant manager, the facility may consider additional changes in the future, such as permanent sodium hypochlorite storage or on-site generation. Changes in annual costs related to disinfection treatment conversions were hard to measure due to a lack of data. Many facilities we interviewed were unable to provide complete information on annual costs related to disinfection before and after converting from chlorine gas. Available data show that annual chemical costs related to disinfection increased for facilities that converted to delivered sodium hypochlorite, because sodium hypochlorite costs more than chlorine gas. Available data also show that electrical costs related to disinfection increased for facilities that converted to on-site generation of sodium hypochlorite or ultraviolet light treatment; however, these facilities also saw large reductions in chemical costs. Available data also show that increases in annual costs related to disinfection were offset somewhat by savings in training and regulatory requirements, as several facilities that converted reported a reduced need for staff time devoted to complying with the EPA risk management planning that was required when the plant used chlorine gas. A few facilities were even able to report or project annual savings due to the disinfection conversion. 
For example, the wastewater treatment manager of the Columbia Boulevard Treatment Plant, which treats 143 million gallons per day and provides wastewater service to Portland, Oregon, estimated that annual costs related to disinfection fell by over $100,000 after the plant completed a 2005 conversion from chlorine gas to delivered sodium hypochlorite disinfection. According to the wastewater treatment manager, increases in disinfection chemical costs for the plant were more than offset by reductions in electrical, labor, and training costs. Electrical power costs fell because the plant no longer had to power chlorine gas evaporators, which heat and help convert the pressurized liquid into gas before it is injected into the waste stream. In contrast, sodium hypochlorite is fed into the waste stream via less energy-intensive pumps. Labor and training costs also fell because the plant no longer had to meet the Occupational Safety and Health Administration’s (OSHA) Process Safety Management of Highly Hazardous Chemicals standard, and risk management and emergency response planning costs associated with the use of chlorine gas were eliminated. In another example, the South Central Regional Wastewater Treatment and Disposal Plant, which treats 18 million gallons per day for customers in the cities of Delray Beach and Boynton Beach, Florida, predicts that it too will achieve annual savings once it converts from chlorine gas to sodium hypochlorite generated on-site, which it anticipates completing in September 2007. According to the Executive Director of the South Central Regional Wastewater Treatment and Disposal Board, potential disruptions of sodium hypochlorite delivery during hurricane seasons motivated them to begin generating their disinfection chemicals on-site. 
The plant’s most recent fiscal year operating and maintenance budget for disinfection is estimated to be roughly $307,000 for chlorine gas and associated costs including equipment and maintenance, labor, and risk management planning. Postconversion annual operating and maintenance costs for disinfection are estimated to fall to $205,000 in the 2008 calendar year, primarily due to the suspension of chlorine gas purchases. We provided a draft of this report to EPA for review and comment. In its letter, reproduced in appendix II, EPA concurred with the results of the report. EPA’s Water Security Division in the Office of Ground Water and Drinking Water provided technical comments and clarifications that were incorporated, as appropriate. As agreed with your office, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; interested Members of Congress; the Administrator, EPA; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff need further information, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To identify the costs of preparing vulnerability assessments and risk management plans, we conducted structured telephone interviews with a select sample of large wastewater facilities identified as having completed these documents in our March 2006 report. 
Our March report identified 106 large facilities that reported they had prepared vulnerability assessments or had one underway, and 85 facilities that were required to prepare risk management plans because they currently used chlorine gas as a disinfectant. From these two groups, we identified 47 facilities that reported that they had prepared vulnerability assessments and currently use chlorine. Of this universe, we chose a nonprobability sample of 25 facilities to ensure geographic dispersion and adequate variation in size, since these factors were likely to influence their costs. We completed structured interviews with 20 of the 25 facilities. We sent an interview schedule in advance of each of the interviews. We completed the structured interviews between November 2006 and February 2007. Reported costs included both actual and estimated costs. For estimated costs, we asked facility managers to explain how they arrived at these estimates. Reported costs were not adjusted for inflation. To identify the costs incurred by wastewater facilities in converting from gaseous chlorine to an alternative disinfection process, we conducted structured telephone interviews with a nonprobability sample of 26 of the 38 large facilities identified in the March report as having recently converted or planning to convert from chlorine gas to an alternative disinfection process. We sent an interview schedule in advance of each of the interviews. We completed the structured interviews between October 2006 and February 2007. Reported costs included both actual and estimated costs. For estimated costs, we asked facility managers to explain how they arrived at these estimates. Reported costs were not adjusted for inflation. We also conducted site visits with some of the facilities. Where available, we gathered documentation, such as capital plans, from these facilities in order to document conversion costs. 
We supplemented the cost information we gathered at individual wastewater facilities with information obtained at the Environmental Protection Agency, the Department of Homeland Security, nongovernmental organizations, and industry representatives. We determined that reported cost data were sufficiently reliable to provide useful information about the costs for preparing vulnerability assessments, risk management plans, and conversions from gaseous chlorine and the factors that affect these costs. We conducted our work between August 2006 and March 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Jenny Chanley, Steve Elstein, Nicole Harris, Greg Marchand, Tim Minelli, Alison O’Neill, Daniel Semick, and Monica Wolford made key contributions to this report.
In 2006, GAO reported that many large wastewater facilities have responded to this risk by voluntarily conducting vulnerability assessments and converting from chlorine gas to other disinfection methods. The Clean Air Act requires all wastewater facilities that use threshold quantities of chlorine gas to prepare and implement risk management plans to prevent accidental releases and reduce the severity of any releases. In this study, GAO was asked to provide information on (1) the range of costs large wastewater treatment facilities incurred in preparing vulnerability assessments and risk management plans, and (2) the costs large wastewater treatment facilities incurred in converting from chlorine gas to alternative disinfection processes. To answer these questions, GAO conducted structured telephone interviews with a number of facilities surveyed for the 2006 report. The Environmental Protection Agency (EPA) agreed with the report and provided several technical changes and clarifications. Among the large wastewater facilities GAO examined, the costs reported to prepare vulnerability assessments ranged from $1,000 to $175,000, while costs to prepare risk management plans ranged from less than $1,000 to over $31,000. Whether the documents were prepared in-house or contracted to third parties such as engineering firms was a factor in cost differences. Despite higher costs, some facilities preferred to use contractors due to their expertise and independence. According to one wastewater security official, these attributes can give contractor findings and recommendations greater credibility with utility governing boards that determine spending priorities. One facility that used a contractor to complete a vulnerability assessment in 2002 did so because, at the time, vulnerability assessment software and training were not widely available. Since that time, EPA has increased funding for the development and dissemination of risk assessment software and related training. 
Overall, cost estimates for vulnerability assessments and risk management plans did not relate to facility size, as measured by millions of gallons of wastewater treated per day. For the large wastewater facilities GAO examined, reports of actual and projected capital costs to convert from chlorine gas to alternative disinfection methods range from about $650,000 to just over $13 million. Most facilities converted, or planned to convert, to delivered sodium hypochlorite (essentially a concentrated form of household bleach shipped in bulk to the facility). Managers of these facilities told GAO they considered other options, but chose delivered sodium hypochlorite because its capital conversion costs were lower than those associated with other alternatives, such as generating sodium hypochlorite on-site or using ultraviolet light. Overall, the primary factors associated with facilities' conversion costs included the type of alternative disinfection method chosen and the size of the facility. Other cost factors facility managers cited included (1) whether existing buildings and related infrastructure could be used in the conversion, (2) labor and building supply costs, which varied considerably among locations, (3) the cost of sodium hypochlorite relative to chlorine gas, and (4) the extent to which training, labor, and regulatory compliance costs were reduced for utilities that no longer had to rely on chlorine gas.
BIE’s mission is to provide Indian students quality education opportunities starting in early childhood in accordance with a tribe’s needs for cultural and economic well-being. Students attending BIE schools must be members of federally recognized Indian tribes, or descendants of members of such tribes, and reside on or near federal Indian reservations. BIE’s Indian education programs derive from the federal government’s trust relationship with Indian tribes, a responsibility established in federal statutes, treaties, court decisions, and executive actions. BIE, formerly known as the Office of Indian Education Programs when it was part of the Bureau of Indian Affairs (BIA), was renamed and established as a separate bureau within Interior in 2006. Organizationally, BIE is under the Office of the Assistant Secretary-Indian Affairs, and its director reports to the Principal Deputy Assistant Secretary-Indian Affairs. The BIE director is responsible for the direction and management of education functions, including the formation of policies and procedures, supervision of all program activities, and approval of the expenditure of funds for education functions. BIE is composed of a central office in Washington, D.C.; a major field service center in Albuquerque, New Mexico; 3 associate deputy directors’ offices located regionally (1 in the east and 2 in the west); 22 education line offices located on or near Indian reservations; and schools in 23 states. Of the 185 elementary and secondary schools BIE administers, 59 are directly operated by BIE (BIE-operated), and 126 are operated by tribes (tribally-operated) through federal contracts or grants. BIE provides funding on the same terms to tribally-operated schools as it does to BIE-operated schools based on the number of students attending the schools, among other factors. A local education line office manages the BIE-operated schools, functioning like a public school district superintendent’s office. 
It also provides technical assistance to tribally-operated as well as BIE-operated schools. While BIE schools are primarily funded through Interior, they receive annual formula grants from Education, similar to public schools. Like state educational agencies that oversee public schools in their respective states, BIE administers and monitors the operation of these Education grants. The Elementary and Secondary Education Act (ESEA) of 1965, as amended, holds recipient schools accountable for improving their students’ academic performance, using Education program funds they receive through BIE. Specifically, under ESEA, schools must be measured to determine whether they are making adequate yearly progress (AYP) in meeting standards in math, reading, and science. In turn, the performance information must be reported to parents. Interior determined that, to measure AYP, each BIE school would use the definitions used by the state in which the school was located. BIE and its predecessor, the Office of Indian Education Programs, have been through a number of restructuring efforts. Before 1999, BIA’s regional offices were responsible for most administrative functions for Indian schools. In 1999, the National Academy of Public Administration (NAPA) issued a report, commissioned by the Assistant Secretary of Indian Affairs, which identified management challenges within BIA. The report concluded that BIA’s management structure was not adequate to operate an effective and efficient agency. The report recommended centralization of some administrative functions. According to BIE officials, for a brief period from 2002 to 2003, BIE was responsible for its own administrative functions. However, in 2004, in response to the NAPA study, its administrative functions were centralized under the Deputy Assistant Secretary for Management (DAS-M). 
More recently, in 2011, Indian Affairs commissioned another study—known as the Bronner report—to evaluate the administrative support structure for BIE and BIA. The report, issued in March 2012, found that organizations within Indian Affairs, including DAS-M, BIA, and BIE, do not coordinate effectively and that communication among them is poor. The report recommended that Indian Affairs adopt a more balanced organizational approach to include, among other things, shared responsibility, new policies and procedures, better communication, and increased decentralization. Indian Affairs has since begun an administrative structural realignment intended to address the Bronner report recommendations (Bronner Group, Final Report: Examination, Evaluation, and Recommendations for Support Functions, March 2012). Students in BIE schools have performed consistently below Indian students enrolled in public schools on national assessments administered by the National Assessment of Educational Progress (NAEP) between 2005 and 2011. For example, in 2011, 4th grade estimated average reading scores were 22 points lower for BIE students than for Indian students attending public schools. For 4th grade mathematics, BIE students scored lower, on average, than Indian students attending public schools, but the gap was less than for reading scores—14 points in 2011. Figure 1 shows the trend in the estimated average 4th grade reading and math scores for students in BIE schools as compared to Indian students in public schools and the national average for non-Indian students. In 8th grade, BIE students also scored consistently lower, on average, on NAEP assessments than Indian students in public schools. However, for reading the performance gap was slightly less than it was for 4th grade. For example, in 2011, 8th grade estimated average reading scores were 19 points lower for BIE students than for Indian students attending public schools. 
Further, Indian students attending BIE and public schools have consistently scored lower on average than the national average of non-Indian students in 8th grade on both the math and reading NAEP assessments. Figure 2 shows the trend in estimated average 8th grade reading and math scores for students in BIE schools compared to Indian students in public schools and the national average for non-Indian students. Some of the difference in performance levels between Indian students and non-Indian students may be explained by factors such as poverty and parents’ educational backgrounds. For example, in 2011, larger percentages of Indian students were eligible for free and reduced-price lunch (an indicator of low family income) in both grades 4 and 8 as compared to non-Indian students. In addition, the percentage of 8th grade Indian students reporting that at least one parent had some education beyond high school was smaller than the percentages of Black, White, and Asian students. In states we visited, BIE students also consistently underperformed Indian students in public schools on state reading and math assessments in 3rd and 7th grade, over the most recent 3-year period for which data are available. Specifically, in Mississippi and South Dakota, a lower percentage of students in BIE schools scored at the proficient level or above on 3rd and 7th grade state assessments compared to Indian students in public schools. In Arizona, the difference in the performance of students in BIE schools and Indian students in public schools was less marked, with a somewhat lower percentage of students in BIE schools scoring at the proficient or above levels. 
Finally, students attending BIE schools had relatively low high school graduation rates compared to Indian students enrolled in public schools in the 2010-2011 school year. Specifically, the graduation rate for BIE students for the 2010-2011 school year was 61 percent—placing BIE students in the bottom half among graduation rates for Indian students in states where BIE schools are located. In those states, Indian student graduation rates ranged from 42 percent to 82 percent. Figure 3 shows the Indian student graduation rates for BIE and the states where BIE schools are located. Our analysis of graduation rates covered 20 of the 23 states where BIE schools are located. At the time of our review, cohort graduation rate data were not available for 2 of the 23 states where BIE schools are located, and we excluded data from an additional state due to the small number of students in the Indian student subgroup for which the graduation rates were calculated. BIE’s administrative and internal control weaknesses have resulted in difficulty assessing the academic progress of its students and AYP for its schools as required under ESEA. In addition, BIE has delayed critical efforts to collaborate with Education. BIE’s efforts to assess student and school performance are not consistent with internal control standards that can help agencies operate more effectively and help ensure compliance with applicable laws and regulations. BIE officials provided inaccurate guidance to some of its schools about which student assessment to administer in the 2011-12 school year, resulting in a lack of compliance with federal requirements. In that school year, BIE offered its schools in New Mexico the ability to administer an assessment—in lieu of the state assessment—that was not approved by Education for use as an accountability tool for determining whether schools made AYP under ESEA. 
Education officials told us they did not approve any BIE schools to use assessments other than those required by Interior’s regulation. To determine whether an assessment meets ESEA accountability requirements, Education has an external group of experts evaluate the assessment. This evaluation ensures that the assessment is aligned with the state academic content and achievement standards taught in the classroom and is valid and accessible for use by the widest possible range of students—such as students with disabilities and students with limited English proficiency. However, BIE did not submit a request to Education to review the assessment through this process. As a result of BIE’s guidance, 21 of BIE’s 42 schools in New Mexico administered an alternative assessment that Education was unable to use to hold schools accountable for student performance under ESEA. Further, BIE did not act within the scope of its authority when providing this option to schools. Under an Interior regulation, BIE schools are generally required to administer the same academic assessments used by the 23 respective states where the schools are located. While this regulation allows BIE schools to use alternative assessments for the purposes of ESEA, the Secretaries of Education and Interior must first provide approval. In addition to not obtaining Education’s approval of the assessment, BIE made this critical decision without the appropriate level of review by the Secretary of the Interior or his designee. This happened because BIE does not have procedures that specify who should be involved in making key decisions. As a result, BIE offered its schools a choice to use an assessment without approval from the Secretaries, which does not align with Standards for Internal Control in the Federal Government. These standards state that significant events should be authorized and executed only by persons acting within the scope of their authority. 
Further, the standards state that internal controls and other significant events need to be clearly documented to help management with decision making and to help ensure operations are carried out as intended. The documentation should appear in management directives, administrative policies, or operating manuals. BIE officials acknowledged that they did not obtain approval from the Secretary of Interior to allow schools in New Mexico to use the Northwest Evaluation Association (NWEA) Measures of Academic Progress rather than their state assessment as required by an Interior regulation, nor did they submit a waiver request to Education to allow them to do so. As a result, they acknowledged they did not adhere to the correct process when providing schools in New Mexico the option of which assessment they could administer. BIE also provided changing directions to its schools about what assessments they could use to assess students’ academic progress in the 2012-13 school year to comply with ESEA. For the 2012-13 school year, BIE school officials in Arizona, Mississippi, and South Dakota said that BIE directed them to administer the alternative assessment it allowed the BIE schools in New Mexico to take during the prior school year, instead of their respective state assessments. However, in late September 2012, BIE directed them to administer both their state assessment and the alternative assessment. As a result, some BIE schools, such as those in Arizona, experienced delays in obtaining materials for the state assessment. Further, school officials in Arizona, Mississippi, and South Dakota were under the impression that their schools would administer the alternative assessment used in New Mexico instead of their respective state assessments. Without clear decision-making procedures, BIE has not provided schools consistent guidance to help them develop strategies to inform instruction. 
BIE did not notify schools of their AYP status for the 2011-12 school year before the start of the 2012-13 school year. Specifically, BIE did not notify its schools of their AYP status for the 2011-12 school year until April and May 2013—over 6 months after the school year typically begins. According to Education officials, the impact of schools not knowing whether they made AYP depends on their performance the previous year. Some BIE and school officials we spoke with expressed concern that without this information on whether they made AYP they were unable to comply with ESEA and notify parents in a timely manner of the schools’ performance. Schools receiving ESEA Title I Part A funds face specific consequences depending on how many years they do not make AYP. Table 1 shows the remedial actions for BIE Title I Part A schools that do not make AYP. Such actions may include replacing school staff, implementing new curricula, or appointing outside experts to advise schools. Federal internal control standards provide that information should be recorded and communicated to management and others who need it in a form and within a time frame that enables them to carry out their internal control and other responsibilities. For an entity to run and control its operations, it must have relevant, reliable, and timely communications. Without such timely communication, BIE and school officials were unable to make informed decisions about additional actions that might be needed to support educational reforms. According to BIE officials, many of their challenges informing schools of their AYP status in a timely manner stem from having to determine the performance scores and AYP status for schools in 23 different states, each with its own accountability system. In 2008, we reported that BIE officials told us that, given the work involved, it was challenging to calculate and report proficiency levels to schools before the start of the subsequent school year. 
Further complicating their efforts, BIE officials noted that state calculations of AYP are not crafted with BIE schools in mind. BIE officials cited Arizona as an example. According to these officials, based on Arizona’s formula for calculating AYP, BIE schools typically do not have enough students in each grade to be able to use the state’s formula. Additionally, BIE officials said that the varied locations of BIE schools make it difficult to compare academic achievement across states, address student achievement issues, and provide technical assistance. BIE officials told us they found it especially challenging to calculate AYP results for the 2011-12 school year because Education granted waivers to several states—including states where BIE schools are located—that allowed these states to change the performance targets used to assess their schools’ yearly progress (accountability systems). For example, BIE had developed a method for comparing academic achievement across states, but is no longer able to use this method to calculate achievement across states because of these Education waivers. BIE officials plan to address their challenges by transitioning to a unified accountability system which uses the same indicators to determine AYP for all BIE schools. To accomplish this, Interior must first change its regulation that generally requires BIE schools to use the assessments of the states in which they are located. Interior has begun the process for making this change, but the process could take several months to a year, according to a BIE official. This process requires Interior to undertake a negotiated rulemaking, which includes the formation of a negotiated rulemaking committee with members from the federal government and tribes served by BIE schools. In January 2013, Interior announced its intent to establish such a committee (which will recommend specific changes to the regulation) and invited tribes to nominate prospective members. 
In December 2012, the Departments of the Interior and Education established a memorandum of understanding (MOU), required by an Executive Order. The MOU is to, among other things, take advantage of both Departments’ expertise, resources, and facilities and address how the Departments will collaborate. The MOU created a BIE-Education Committee to facilitate communication between the two agencies. Among other goals, the BIE-Education Committee seeks to improve the academic performance of Indian students. The Committee is charged with exploring ways to promote more effective school reform efforts and build support for BIE’s efforts to monitor and enforce compliance with Education program requirements for which schools receive funding, particularly tribally-operated schools. The Committee is also to examine options to support BIE responsibilities, including the option of having Education establish conditions on the funding it provides to BIE consistent with applicable laws. BIE officials in the field who provide technical assistance to schools, as well as Education officials, noted that some tribally-operated schools need to improve their administrative and technical capacity to comply with requirements for schools receiving Education grants. Such grants include those under the Individuals with Disabilities Education Act (IDEA) that support services to students with disabilities, and ESEA. For example, a senior BIE official said the local education line offices she is responsible for supervising frequently receive questions from tribally-operated school administrators about education laws and regulations. The official commented that this situation is particularly problematic as there is frequent turnover among some tribal school administrators in her region, so the local education line offices must educate new administrators on federal education requirements whenever a change in leadership occurs. 
BIE officials told us that they face challenges holding some tribally-operated schools accountable for compliance with federal education laws and regulations because those schools are under the purview of the tribes. BIE’s past problems holding its schools accountable led Education to impose an ongoing corrective action plan on BIE to improve its schools’ compliance with IDEA and ESEA requirements. As a part of this plan, BIE is required to submit quarterly progress reports to Education on how it is implementing the plan, and there are special conditions on its IDEA funding, such as additional documentation requirements. Although BIE officials can provide information to tribally-operated schools about Education grant requirements, they stated that unlike with BIE-operated schools, it is difficult to compel these schools to follow the requirements. The BIE-Education Committee has not yet begun grappling with issues such as improving BIE schools’ services for students or holding schools accountable for Education funding requirements because Interior has not designated officials to serve on the committee. Consequently, as of mid-July 2013, the Committee had not met, although the MOU calls for it to meet at least once every three months. According to Education officials, the committee is expected to meet some time in August 2013. In the absence of committee meetings, the activities outlined in the MOU have not yet been undertaken, such as exploring ways to improve BIE’s ability to monitor and enforce compliance with Education grant programs. We have previously concluded that collaboration can benefit from formal written agreements, like an MOU, but ineffective implementation of an MOU may contribute to the sporadic and limited amount of collaboration between agencies. Challenges such as a fragmented administrative structure and frequent turnover in leadership have prompted Indian Affairs to undergo an administrative structural realignment. 
DAS-M staff, who provide administrative services, have been detailed to BIA’s regional offices and now report to BIA regional directors. The realignment is intended to improve efficiency in the delivery of services to BIE, among others. However, the process Indian Affairs followed to develop the realignment, and its lack of a strategic plan and workforce analysis, run counter to key practices for organizational transformations, as well as principles for strategic workforce planning. Until July 2013, Indian Affairs’ DAS-M was responsible for BIE’s administrative functions, including handling school contracting needs, facilities, and budget issues. However, as we noted in our February 2013 testimony, poor communication, incompatible procedures, a lack of clear roles for BIE and DAS-M staff, and leadership turnover have hampered efforts to improve Indian education. According to school officials we interviewed, communication between Indian Affairs’ leadership and BIE is poor, resulting in confusion about policies and procedures. Working relations between BIE and DAS-M’s leadership are informal and sporadic, and BIE officials reported having difficulty obtaining timely updates from DAS-M on its responses to requests for services from schools. In addition, there is a lack of communication between Indian Affairs’ leadership and schools. For example, a high-ranking BIE official noted there are no clear procedures regarding school maintenance and facilities matters and agreed it is confusing for schools not to have a process to follow when requesting assistance concerning these matters. Additionally, BIE and school officials in all four states we visited reported that they were unable to obtain definitive answers to policy or administrative questions from BIE leadership in Washington D.C. and Albuquerque. 
For example, school officials in one state we visited reported that they requested information from BIE’s Albuquerque office in the 2012-13 school year about the amount of IDEA funds they were due to receive. The Albuquerque office subsequently provided them three different dollar amounts. The school officials were eventually able to obtain the correct amount of funding from their local education line office. Similarly, BIE and school officials in three states reported that they often do not receive responses from BIE’s Washington D.C. and Albuquerque offices to questions they pose via email or phone. Further, one BIE official stated that meetings with BIE leadership are venues for conveying information from management to the field, rather than opportunities for a two-way dialogue. BIE schools have encountered delays in contracting due to DAS-M’s lack of knowledge about the needs of schools and the laws and regulations regarding educational institutions. BIE does not have a specific contracting team assigned to it, although the contracting needs of schools are different than those of a federal agency. Purchasing items for schools in a timely manner, for instance, is critical to ensure that all supplies and textbooks are delivered before the start of the school year. However, DAS-M’s procurement process has caused delays in textbook delivery to some schools. In another instance, DAS-M processes led to the termination of a contract held by an experienced speech therapist serving a BIE school in favor of a less expensive contract with another therapist. However, the new therapist was unable to travel to the schools being served to provide therapy to students. As a result, the schools were unable to implement students’ individualized education programs in the timeframe required by IDEA. 
In addition, although BIE accounts for approximately 34 percent of Indian Affairs’ budget, several BIE officials reported that improving student performance is often overshadowed by other Indian Affairs priorities. DAS-M staff’s focus on supporting other offices within BIA, such as the Office of Trust Services, hinders staff from seeking and acquiring expertise in education issues. Leadership turnover in the Office of the Assistant Secretary for Indian Affairs, DAS-M, and BIE has exacerbated the various challenges created by administrative fragmentation. For instance, the tenure of acting and permanent assistant secretaries in Indian Affairs has ranged from 16 days to 3 years, and the post was vacant from August 2003 through February 2004 (see fig. 4). In previous reports about other agencies, we found that frequent changes in leadership may complicate efforts to improve student achievement, and that lack of leadership negatively affects an organization’s ability to function effectively and to sustain focus on key initiatives. Indian Affairs underwent an administrative structural realignment on July 1, 2013, and DAS-M administrative staff responsible for BIE administrative functions have been detailed to BIA’s regional offices based on their current geographical duty station. These DAS-M staff will now be reporting to BIA regional directors, who will have authority over most of BIE’s administrative functions, including acquisitions, budget, facilities management, financial management, and property. DAS-M will continue to be responsible for information technology and human resource functions. In addition, DAS-M will continue its responsibilities overseeing and monitoring BIE activities, including updating its policies and procedures and providing technical assistance to administrative staff in the field. 
Indian Affairs’ recent realignment—approved by the cognizant congressional committees in late May 2013—is intended to improve efficiency in the delivery of services to Indian Affairs stakeholders, including BIE schools. According to information we received from Indian Affairs in June 2013, the provision of administrative functions by BIA regional offices would be governed by service-level agreements specifying the services provided to BIE and the responsible parties based on BIE’s needs. However, although Indian Affairs officials told us these agreements would be signed and in place before the realignment took effect, the agreements were not in place as of late July 2013. The process Indian Affairs followed to develop the realignment plan is unclear, and Indian Affairs did not consult BIE officials on the specific changes outlined in the realignment request it submitted to Congress. For example, it did not consult with BIE on transferring the responsibilities for most of BIE’s administrative functions to BIA regional offices. Additionally, although Indian Affairs informed Congress that the realignment would be overseen by an Executive Implementation Oversight Board, several senior BIE officials, including an acting BIE Director, reported that they were not asked for input into the new plan. Indian Affairs officials acknowledged that their office had not consulted with BIE officials on potential organizational changes since before the Bronner report was issued in March 2012. The Bronner report recommended that Indian Affairs, among other things, develop new policies and procedures and increase decentralization, but it did not address the specific changes entailed in the realignment. Key practices for organizational transformation include employee involvement in organizational change to help create the opportunity to increase employees’ understanding and acceptance of organizational goals and objectives, and gain ownership for new policies and procedures. 
Such involvement also allows employees to share their experiences and shape policies. In addition, while Indian Affairs conducted tribal consultations in April and May 2012 on the findings of the Bronner report, Indian Affairs did not formally consult with tribes on the specific changes entailed in the realignment before it took effect on July 1, 2013. Indian Affairs’ main method of obtaining information from tribes is through tribal consultations. In addition to implementing its realignment without seeking input from key stakeholders, Indian Affairs’ leadership does not appear to have broadly communicated information about the realignment to BIE schools or BIE officials in the field. For example, although the realignment was already in effect, two BIE school administrators and the director of a BIE education line office with responsibility for a large number of schools told us on July 8, 2013, that they had no knowledge of the realignment. Another BIE school administrator said she had just recently found out about it. As a result, these school administrators were unaware that BIA regional offices, rather than DAS-M, would be responsible for carrying out their administrative functions, including acquisitions, budget, and financial management. This information is important for school administrators to have because they have experienced problems with administrative services, such as the acquisition of textbooks. According to one high-ranking BIE official, improvements under the realignment are not likely, in part, because there is little communication between BIE and BIA regional office officials. These reactions may stem from the insufficient involvement of BIE and school officials in planning for the realignment and could undermine support for the change. Indian Affairs officials acknowledged that the office has not established a strategic plan with specific goals and measures for itself or for BIE or a strategy for communicating with stakeholders. 
Key practices for organizational transformation suggest that effective implementation of a results-oriented framework, such as a strategic plan, requires agencies to clearly establish performance goals for which they will be held accountable, measure progress towards those goals, determine strategies and resources to effectively accomplish the goals, and use performance information to make the decisions necessary to improve performance. In addition, communicating information early and often helps to build an understanding of the purpose of planned changes and builds trust among employees and stakeholders. Although Interior as a whole has a strategic plan, BIE’s inclusion in the Interior plan consists of two performance measures to improve Indian education, but the plan does not detail how BIE will achieve these goals. Indian Affairs and BIE officials were unable to provide us with a more specific plan articulating the strategies they will use to achieve BIE’s mission of improving education for Indian students. BIE officials commented that a strategic plan would help BIE leadership and staff pursue goals in a consistent manner and collaborate to achieve them. Key practices for organizational transformation also suggest that performance goals and measures are an important part of the strategic planning process. Specifically, performance measures allow an organization to demonstrate its progress toward meeting performance goals. Performance goals and measures are part of a broader system that creates a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational results. Interior’s fiscal year 2008 Performance and Accountability Report lists two performance measures that relate to BIE school construction and nine measures for BIE relating to individual student performance, cost per student, and teacher qualifications. 
However, there are no measures for how internal departments, such as DAS-M and BIA, are fulfilling their responsibilities to provide administrative and facilities support to BIE schools. In a response to questions from Congress, Interior stated that its realignment plan requires the development and execution of performance measures for the delivery of administrative support functions, but it is not yet clear what specific measures will be adopted because the service level agreements between BIE and BIA regional offices have not yet been negotiated. Without such performance measures, BIE and BIA staff cannot be held accountable for meeting agency goals. According to key principles for workforce planning, another element of an effective strategic plan is a clear strategy for maintaining and improving an agency’s workforce. Key principles for effective strategic workforce planning include: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. The appropriate geographic and organizational deployment of employees can further support organizational goals and strategies. Effective deployment strategies can enable an organization to have the right people, with the right skills, doing the right jobs, in the right place, at the right time. While Indian Affairs’ most current workforce plan is for 2008-2013, it was based on workforce data from fiscal years 2003-2007. Therefore, the data in the plan would not reflect recent changes in the workforce or current workforce needs. As a result, Indian Affairs’ recent realignment may result in an imbalance in BIA regions’ workload in supporting BIE schools. 
For example, under the realignment, 11 of the 12 BIA regional directors are responsible for providing administrative support to BIE schools in their BIA regions. However, BIE schools are unevenly distributed among the 11 regions, with the regions containing between 2 and 65 schools (see table 2). Therefore, it is important to ensure that each BIA regional office has an appropriate number of staff who are familiar with education laws and regulations and school-related needs to be prepared to support the schools in that region. In addition, DAS-M records show that almost 250 employees in BIE and BIA left the agency in recent months using the Voluntary Early Retirement Authority and Voluntary Separation Incentive Payments offered by Interior, which agency officials told us were prompted by the sequestration. Indian Affairs’ recent realignment, as well as its employee early-out and buy-out efforts in the first half of 2013, highlights the importance of workforce planning. Staff departures can affect organizational capacity, and in this instance, may affect implementation of the realignment. Current information on attrition rates and geographic and demographic trends could be used to estimate the number of employees with specific skills and competencies and the bureau’s staffing needs going forward. Further, a revised workforce plan could focus on the strategic deployment of staff with educational expertise to regions with a large number of BIE schools and education-specific training for DAS-M and BIA staff with responsibilities in support of Indian education. Moreover, some tribes are planning to convert BIE-operated schools to tribally-operated schools in the future. For example, the Navajo Nation intends to convert its remaining 31 BIE-operated schools to tribally-operated schools. Should this occur, only 15 percent of BIE schools would remain BIE-operated. 
While such a shift would decrease BIE’s administrative responsibilities, BIE’s oversight, monitoring, and technical assistance responsibilities would remain. We have previously reported that it is essential that agencies determine the skills and competencies that are critical to achieving their missions and goals, especially when factors change the environment within which agencies operate. The federal government, through the Department of the Interior, has a trust responsibility for the education of Indian students. However, the extent to which Interior is effectively meeting its responsibilities is questionable, considering students’ relatively poor academic performance and BIE’s myriad administrative and management challenges. BIE lacks clear procedures for decision-making, which has resulted in it acting outside the scope of its authority, undermining school officials’ ability to assess student performance under ESEA and potentially affecting their compliance with federal regulations. Further, while Indian Affairs has reported the benchmark for the quality of education BIE provides to its students is whether its schools meet AYP goals, BIE’s ongoing challenges calculating AYP and reporting this information to schools may exacerbate poor school performance. Also, given the significant challenges BIE and Indian schools face in improving student academic performance, it is critical that Interior leverage existing resources and opportunities to improve communication. For example, as of late July 2013, Interior had not yet appointed members to the BIE-Education Committee. Unless BIE collaborates with partner agencies and provides schools information that affects student instruction in a timely and consistent manner, it will be difficult for BIE to be well-positioned to improve student academic performance in the future. 
While Indian Affairs has undertaken another realignment of its administrative functions, it is unclear to what extent, if at all, the changes will result in improved services for BIE and schools. For instance, Indian Affairs implemented the realignment without seeking input from a broad range of stakeholders. Further, it did not develop a strategic plan with specific goals and measures for itself or BIE or strategies to achieve these goals. In addition, it has not updated its workforce plan or assessed Indian Affairs’ realignment and its impact on BIE to ensure it has the right people in place with the right skills to effectively meet the needs of BIE schools. In addition, BIE did not develop a strategy for communicating key decisions to stakeholders, including schools. Rather than contribute to improved administrative functions, the lack of planning and communication efforts may ultimately undermine them. Therefore, undertaking these steps as well as developing a comprehensive, systematic approach to providing technical assistance would help improve school officials’ administrative capacity and technical expertise, especially related to BIE compliance with Education requirements. We recommend that the Secretary of the Interior direct the Assistant Secretary-Indian Affairs to take the following five actions: Develop and implement decision-making procedures for BIE that specify who should be involved in the decision-making process for key decisions that affect BIE and its schools to ensure that BIE has effective management controls, is accountable for the use of federal funds, and comports with federal laws and regulations. Such procedures should be clearly documented in management directives, administrative policies, or operating manuals. 
Develop a communication strategy for BIE to inform its schools and key stakeholders of critical developments that impact instruction in a timely and consistent manner to ensure that BIE school officials receive information that is important for the operation of their schools. Appoint permanent members to the BIE-Education Committee and ensure that the committee meets quarterly as required by the MOU to improve collaboration between BIE and Education and address the challenges that Indian schools face in improving student performance. Develop a strategic plan that includes detailed goals and strategies for BIE and for those offices that support BIE’s mission, including BIA, to help Indian Affairs effectively implement its realignment. Development of the strategic plan should incorporate feedback from BIE officials and other key stakeholders. To gather stakeholder input, we recommend that the plan include a comprehensive communications strategy to improve communication within Indian Affairs and between Indian Affairs and BIE staff. Revise its strategic workforce plan to ensure that employees providing administrative support to BIE have the requisite knowledge and skills to help BIE achieve its mission and are placed in the appropriate offices to ensure that regions with a large number of BIE schools have sufficient support. We provided a draft of this report to the Departments of Interior and Education for review and comment. Education chose not to provide comments. Interior’s comments are reproduced in appendix I. Interior concurred with all of our recommendations. Interior stated that the report’s findings and recommendations will aid its efforts to move forward with improving the quality of education in Indian country. For example, Interior noted that it is obtaining advice from Education and other subject-matter experts on how to improve BIE’s structure and systems. 
Interior agreed with our recommendation about the need to develop and implement procedures for key decisions affecting BIE to ensure it has effective management controls and makes decisions that comport with federal laws and regulations. Interior said that as part of Indian Affairs’ realignment, it will need to refine and redefine some of the roles and responsibilities of BIE and BIA and noted it is currently finalizing a plan driven by these changes. To reflect changes in the roles of BIE and BIA as a result of the realignment, Interior acknowledged the importance of updating its policy documents, such as departmental manuals. In addition to these actions, we believe it is important for Interior to review existing policies to understand what additional controls are needed to ensure BIE activities comply with relevant laws and regulations. Interior agreed with our recommendation to develop a communication strategy. Interior said that BIE should develop and follow set communications protocols within its schools and field offices to ensure that each entity understands its roles and responsibilities. In response to our recommendation to appoint permanent members to the BIE- Education Committee, Interior stated that it had appointed members to the Committee in late July 2013 and that the first Committee meeting was held in mid-August 2013. As Interior and Education move forward with the Committee, we believe it is important to ensure that it meets on a regular basis and that the Departments evaluate and monitor the extent to which it is achieving its stated goals. In response to our recommendation that BIE develop a strategic plan, Interior stated that it has established a committee—composed of key Indian Affairs offices—to develop a comprehensive communication plan for BIE. Interior plans to seek input from key stakeholders on the plan, including the States and tribal Departments of Education. 
In addition to these actions, we believe it is important that this plan include specific goals and measures for Indian Affairs and BIE that align to BIE’s mission of improving the quality of Indian students’ education. Lastly, in response to our recommendation on the need for strategic workforce planning, Interior stated it had completed a workforce plan that served as the foundation for the current Indian Affairs realignment. However, the workforce plan was based on workforce data from fiscal years 2003-2007 and would not reflect recent changes in the workforce or current workforce needs. As a result, we believe it is important to revise the strategic workforce plan to reflect current staffing needs. We are sending copies of this report to relevant congressional committees, the Secretaries of Interior and Education, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Elizabeth Sirois (Assistant Director), Ramona L. Burton, Sheranda Campbell, Rachel Miriam Hill, and Matthew Saradjian made key contributions to this report. James E. Bennett, Holly A. Dye, Alexander G. Galuten, Sheila R. McCoy, Jean L. McSween, Kathy Peyman, Vernette G. Shaw, and Sarah E. Veale provided support.
In 2012, the federal government provided over $850 million to 185 BIE schools that serve about 41,000 Indian students living on or near reservations. BIE is part of Indian Affairs within the Department of the Interior, and BIE's director is responsible for managing education functions at all BIE schools. BIE's mission is to provide quality education opportunities to Indian students. GAO was asked to study the extent to which BIE is achieving its mission. GAO examined (1) how student performance at BIE schools compares to that of public school students; (2) what challenges, if any, BIE schools face assessing student performance; and (3) what management challenges, if any, affect BIE and its mission. For this work, GAO reviewed agency documents and relevant federal laws and regulations; analyzed student assessment data from 2005-2011; and conducted site visits to BIE schools and nearby public schools in four states based on location, school and tribal size, and other factors. Students in Bureau of Indian Education (BIE) schools perform consistently below Indian students in public schools on national and state assessments. For example, based on estimates from a 2011 study using national assessment data, in 4th grade, BIE students on average scored 22 points lower for reading and 14 points lower for math than Indian students attending public schools. The gap in scores is even wider when the average for BIE students is compared to the national average for non-Indian students. Additionally, the high school graduation rate for BIE students in 2011 was 61 percent, placing BIE in the bottom half among graduation rates for Indian students attending public schools in states where BIE schools are located. BIE's administrative weaknesses have resulted in it experiencing difficulty assessing the academic progress of its students and adequate yearly progress (AYP) for its schools as required by federal law. 
Department of the Interior (Interior) regulations generally require BIE schools to administer the same academic assessments used by the 23 respective states where the schools are located. However, in the 2011-12 school year, at the direction of BIE officials, 21 schools did not administer their state assessment. These schools administered an alternative assessment that had not been approved for assessing AYP. BIE made this critical decision without the appropriate level of review at Interior or the Department of Education (Education) because it does not have procedures specifying who should be involved in making key decisions. Further, BIE did not provide its schools their AYP status for the 2011-12 school year prior to the start of the next school year, hindering school officials' ability to develop appropriate strategies to improve student performance. Unless BIE provides schools information that affects student instruction in a timely and consistent manner, it will be difficult for BIE to be well-positioned to improve student academic performance in the future. Fragmented administrative services and a lack of clear roles for BIE and Indian Affairs' Office of the Deputy Assistant Secretary for Management (DAS-M)--that until July 2013 was responsible for BIE's administrative functions--contributed to delays in schools acquiring needed materials, such as textbooks. In July, Indian Affairs underwent a realignment, which assigned another office in Indian Affairs the responsibility for most of BIE's administrative functions. The realignment is intended to improve efficiency in delivering services to Indian Affairs stakeholders, including BIE schools. However, it is unclear to what extent, if at all, the changes will result in improved services for BIE schools. For example, Indian Affairs had not conducted a recent analysis before implementing the realignment to determine if it has the right people in place with the right skills doing the right jobs. 
Such workforce planning is critical given Indian Affairs' recent realignment and employee buy-out and early-out initiatives. Similarly, Indian Affairs has not developed a strategic plan with specific goals and measures for itself or BIE, or a strategy for communicating with stakeholders. Such a strategic workforce plan and performance measures could help improve operations and align the organization's human capital program with its current and emerging mission and programmatic goals. Among other things, GAO recommends that Indian Affairs develop and implement decision-making procedures and a communications protocol to ensure that BIE has effective management controls and comports with federal laws and regulations. To improve BIE's management of its schools, GAO also recommends that Indian Affairs develop a strategic plan that includes goals and measures for BIE and a revised strategic workforce plan. Interior concurred with all of our recommendations.
In 1989, the Congress established the National Commission on Severely Distressed Public Housing to explore the factors contributing to structural, economic, and social distress; identify strategies for remediation; and propose a national action plan to eradicate distressed conditions by the year 2000. In 1992, the Commission reported that approximately 86,000, or 6 percent, of the nation’s public housing units could be considered severely distressed because of their physical deterioration and uninhabitable living conditions, increasing levels of poverty, inadequate and fragmented services reaching only a portion of the residents, institutional abandonment, and location in neighborhoods often as blighted as the sites themselves. Although the Commission did not identify specific locations as severely distressed, it recommended that funds be made available to address distressed conditions and that these funds be added to the amounts traditionally appropriated for modernizing public housing. In response to the Commission’s report, the Congress, through appropriations legislation, created the HOPE VI demonstration program to provide a more comprehensive and flexible approach to revitalizing distressed urban communities. Through a combination of capital improvements and community and support services, the program seeks to (1) transform public housing communities from islands of despair and poverty into vital and integral parts of larger neighborhoods and (2) create environments that encourage and support the movement of individuals and families toward self-sufficiency. HUD’s Office of Urban Revitalization within the Office of Public and Indian Housing manages the HOPE VI program. In addition, HUD’s Office of Public Housing Partnerships advises housing authorities on leveraging opportunities. HUD has hired three consulting firms to help housing authorities establish community and support services. 
In 1997, the Department also began hiring contractors, primarily KPMG Peat Marwick, to develop management systems for HUD and to help housing authorities oversee HOPE VI revitalization sites. To select housing authorities for participation in the HOPE VI program, HUD publishes a notice of funding availability (NOFA) setting forth the program’s current requirements and available funds. Housing authorities then prepare applications from which HUD selects those that best satisfy the notice’s requirements and signs grant agreements that, in the absence of regulations, serve as contracts with the housing authorities. Each grantee then submits a revitalization plan to HUD for approval; this plan incorporates a budget and schedule for implementing the grantee’s HOPE VI capital improvements and community and support services. After approving the revitalization plan, HUD gives the grantee access to funding from the Treasury. Progress in completing capital improvements and implementing community and support services varies at HOPE VI sites. Although the rate of spending for capital improvements has increased, the vast majority of the grant funds have yet to be disbursed. While the planned capital improvements were not complete at any of the HOPE VI sites as of June 1, 1998, residents were living in rehabilitated or newly constructed units at 11 sites. All of the housing authorities have completed or are developing plans for community and support services. Generally, housing authorities have not spent as much as the program allows for these services. HUD has established measures of performance for capital improvements and has begun to collect baseline data for use in measuring the results of community and support services. Although limited to date, the pace of spending for HOPE VI sites has accelerated. Figure 1 shows cumulative grant levels, obligations, and disbursements as of the beginning of each fiscal year and as of May 1998. 
It also shows that while the program was established in fiscal year 1993, grant money was not available to housing authorities until fiscal year 1995. Because almost all of the sites funded in fiscal years 1996 and 1997 are still in the planning stages, virtually all the disbursements through March 1998 were for the 39 sites funded during the program’s first 3 fiscal years (1993 through 1995). Because of the time lag between planning and construction and many site-specific delays, disbursements during fiscal year 1997 and the first half of fiscal year 1998 ($302 million) were more than twice as high as disbursements during fiscal years 1995 and 1996 ($138 million). But as of March 1998, 73 percent of the grants awarded during the first 3 fiscal years remained to be disbursed ($1.1 billion). At the 10 sites we visited, 31 percent of the grant awards had been disbursed. Figure 2 shows the spending activity, as of May 1998, at the 10 selected sites. The sites nearing completion—Centennial Place in Atlanta, Hillside Terrace in Milwaukee, and Kennedy Brothers Memorial in El Paso—have expended the majority of their HOPE VI grants. In contrast, Boston’s Mission Main and Chicago’s Robert Taylor B sites do not have HUD-approved revitalization plans and do not expect to begin construction before late 1998 and 2000, respectively. New York City has expended only $1.1 million of the $68.6 million awarded for the Arverne and Edgemere HOPE VI sites, two adjoining sites that plan to combine two separate grants into one HOPE VI effort. Almost half of the disbursement is from a $500,000 planning grant awarded in fiscal year 1993 and spent on the planning to revitalize a different public housing site in the neighborhood. However, after reaching an impasse with the tenants’ association at the original site, the housing authority shifted the implementation grant to Edgemere in December 1996. 
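The arithmetic behind the disbursement figures above can be checked with a short sketch. If the $1.1 billion still undisbursed as of March 1998 represented 73 percent of the grants awarded during the first 3 fiscal years, the implied total awarded in those years is about $1.5 billion:

```python
# Consistency check of the reported disbursement figures: $1.1 billion
# undisbursed was stated to equal 73 percent of the grants awarded in
# fiscal years 1993-1995.
remaining = 1.1e9          # undisbursed as of March 1998
remaining_share = 0.73     # share of first-3-year awards undisbursed

implied_total = remaining / remaining_share
print(f"implied total awarded FY1993-95: ${implied_total / 1e9:.2f} billion")
```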
HUD officials do not expect rehabilitation at the Arverne and Edgemere sites to begin before late 1999. (App. II describes each site we visited.) Although capital improvements have begun at most of the 39 HOPE VI sites funded from fiscal year 1993 through fiscal year 1995, very few are near completion. According to HUD’s data, at 11 sites, some residents have moved into newly constructed or rehabilitated units, and at 23 sites, demolition, rehabilitation, or new construction has started. There have been no capital improvements at the five remaining sites. At several sites, deteriorated high-rise and mid-rise buildings are being demolished and replaced with lower-density structures. In addition, infrastructure improvements, such as new street patterns, are breaking down the physical barriers that isolated many HOPE VI sites from the neighboring communities. At some sites, mixed-income communities have replaced concentrations of poverty, and new community centers, together with plans for police stations, schools, and shopping districts, are helping to integrate the sites with neighboring areas. At 8 of the 10 sites we visited, both demolition and new construction or rehabilitation have begun. The results of the capital improvements at some of the sites were dramatic: At Centennial Place in Atlanta, nearly all of the over 1,000 original units were demolished and replaced with a mixture of subsidized and market-rate units of equally high quality. In addition, three of the site’s original structures were rehabilitated for historic reasons. Street patterns were reworked to reflect the grid pattern found elsewhere, thereby helping to integrate the site with the rest of the city. (See fig. 3.) At Chicago’s Cabrini Homes Extension, after 3 years of delays, four of eight high-rise buildings have been demolished as planned, and new row houses, duplexes, and mid-rise buildings are under construction or have been completed on adjacent property. (See fig. 4.) 
Of the planned new units, 30 percent are reserved for Cabrini families, 20 percent for moderate-income families, and 50 percent for households paying market rates. At Hillside Terrace in Milwaukee and Kennedy Brothers Memorial in El Paso, the capital improvements consisted primarily of rehabilitation. However, some units were demolished at both sites to reconfigure the streets, not only to provide easier access for residents and public services but also to discourage gang-related drug traffic. Green spaces were substituted for concrete at both sites, and street lights and walkways were installed to match those of the surrounding neighborhoods. In addition, community centers were expanded. In El Paso, neighbors who had asked that the brick wall around the HOPE VI site be built higher to block out the public housing community agreed with the site plan, which proposed to demolish the wall and replace it with a see-through wrought iron fence. (See fig. 5.) Most of the community and support services at HOPE VI sites are designed to provide residents with employment training and opportunities to become more self-sufficient. When calculated on a per-unit basis, the funding for these services has decreased since the program’s early years. However, the decline in funding may not have much impact on the services, and most sites are working on plans to sustain the services after the grants run out. For the grants awarded from fiscal year 1993 through fiscal year 1996, up to 20 percent of the grant funds could be spent for community and support services. However, according to HUD’s data, at the 39 sites that received funding during this period, only about 12 percent of the total grant funding was budgeted, on average, for community and support services. Figure 6 shows the budget and spending for community and support services, as of April 1998, at the 10 sites we visited. 
Starting with the grants awarded in fiscal year 1997, HUD changed the allocation for community and support services from up to 20 percent of the grant funds to up to $5,000 per unit. The net effect of this change was to lower the total amount available for such services. HUD program officials said that they are not concerned about the reduction in funding for community and support services because neither the original nor the revised guidelines would provide enough support to maintain the services over the long term. Housing authority officials we spoke with also expressed little concern about the decrease because most housing authorities were not spending to the limit and were attempting to build self-sustaining programs. The HOPE VI sites we visited were aware of the need to sustain their community and support service programs after their HOPE VI funds run out. For example, Atlanta’s Centennial Place has created a full-time position for a staff member to focus on fund-raising and collaborating with local agencies to sustain the programs developed under HOPE VI. Other sites, such as Orchard Park in Boston and Pico Aliso in Los Angeles, had service organizations in the neighborhood before the HOPE VI program started, and their HOPE VI money has provided these organizations with facilities and equipment to offer more comprehensive services. A March 1998 HUD Inspector General’s report expressed concern that Kennedy Brothers Memorial in El Paso will not have enough funds to support its current plans for community and support services unless it leverages additional public and private support. The El Paso housing authority’s executive director responded by terminating a number of support service contracts and developing a plan with a community development corporation to initiate volunteer programs and fund-raising efforts to sustain operations after the HOPE VI grant runs out. 
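The effect of the fiscal year 1997 change in the allocation for community and support services can be illustrated with a short sketch. The grant size and unit count below are hypothetical, chosen only to show why a $5,000-per-unit limit generally yields a lower cap than the earlier 20-percent rule:

```python
# Compare the two caps on community and support services (CSS) funding:
# up to 20 percent of the grant (FY 1993-1996 awards) versus up to
# $5,000 per unit (FY 1997 onward). The grant amount and unit count
# are hypothetical illustrations, not figures from the report.

def css_cap_percent(grant_amount):
    """Pre-1997 rule: up to 20 percent of the HOPE VI grant."""
    return grant_amount * 20 // 100

def css_cap_per_unit(units, limit_per_unit=5_000):
    """FY 1997 rule: up to $5,000 per unit."""
    return units * limit_per_unit

grant, units = 30_000_000, 800   # hypothetical site
print(f"20-percent cap: ${css_cap_percent(grant):,}")   # $6,000,000
print(f"per-unit cap:   ${css_cap_per_unit(units):,}")  # $4,000,000
```

For a development of typical size, the per-unit rule produces the smaller ceiling, consistent with HUD's observation that the net effect of the change was to lower the total amount available for these services.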
At the 10 sites we visited, we observed a variety of community and support services designed to improve the lives of the residents. These services included community businesses sponsored through the HOPE VI program; job placement services and entrepreneurial training programs; technology learning centers affording access to computers; day care and health care centers; and Boys and Girls Clubs providing after-school activities. For example, Centennial Place in Atlanta and Cabrini Homes Extension in Chicago have community programs designed to provide residents with the tools needed to launch their own businesses and become self-sufficient. Both Pico Aliso in Los Angeles and Cabrini Homes Extension have silk-screening companies training residents and providing them with experience in working together in a productive environment. Homeboy Industries, the Pico Aliso silk-screening business, is a profitable venture bringing residents from different gangs together to work. At Hillside Terrace in Milwaukee and Kennedy Brothers Memorial in El Paso, residents are providing voluntary community services, such as Neighborhood Watch. Kennedy Brothers also hired an off-duty police officer to patrol the site, and residents are removing graffiti to keep the site clean. Moreover, according to the El Paso housing director, residents at other public housing sites are aware of the social successes at the HOPE VI site and are trying to duplicate them in their communities through volunteer programs such as Neighborhood Watch. Measures of outcomes are important for tracking the success of the HOPE VI program. HUD’s Office of Policy Development and Research is conducting a three-phase, 10-year evaluation of conditions at HOPE VI sites. The first phase, completed in August 1996, contained baseline data on conditions at 15 sites funded in the first year of the program, historical descriptions of the distressed housing at these sites, and planned revitalization activities. 
The second phase, expected to begin in the summer of 1998, will assess conditions at the sites as they are reoccupied. The third phase will evaluate conditions at the sites 3 to 5 years after they have been reoccupied. HUD’s performance plan, required under the Government Performance and Results Act of 1993, lists performance indicators for capital improvements at HOPE VI sites, but not for community and support services. HUD has asked the housing authorities to develop such measures and has requested baseline data from all HOPE VI grantees for use in measuring the outcomes of community and support services. HUD’s performance indicators for HOPE VI capital improvements include the number of units demolished, rehabilitated, and replaced. The replacement units include newly constructed public housing units and units obtained through Section 8 certificates, which provide rental assistance to private landlords on behalf of low-income households. HUD has contracted with KPMG Peat Marwick to gather data on the progress of capital improvements at the HOPE VI sites. (See app. III for a chart with information on the status of demolition and unit revitalization for all sites.) To create a baseline for measuring the results of community and support services, HUD asked all HOPE VI grantees for data on employment, economic development, job training, education, community building, homeownership, crime reduction, and other social issues. HUD officials told us that they believe it is important to have evaluative measures to justify their expenditures for these services. HUD’s effort to collect baseline data should be a first step toward developing consistent national data on the outcomes of HOPE VI community and support services. Progress at HOPE VI sites has varied for interrelated reasons associated with conditions at the selected sites, the origin of the program, the types of capital improvements selected, and the types of funding used. 
These sites have to overcome complex structural, social, and management challenges that require time to resolve. Legal issues and legislative and administrative changes to the program’s requirements have also added time to development. In general, rehabilitation has taken less time than demolition and new construction, especially when new construction has reduced a development’s density and entailed permanent relocation for some residents. The use of leveraged financing has also introduced time-consuming requirements for coordinating the different funding sources’ procedures and schedules. These factors have sometimes acted in combination to delay HOPE VI developments. HOPE VI sites generally pose extraordinary physical and social challenges. The selected sites exhibit conditions that the HOPE VI program was designed to reverse, including physical deterioration, uninhabitable living conditions, high rates of crime and unemployment, and isolation from the surrounding community. For example, New York’s Far Rockaway neighborhood, containing both the Arverne and the Edgemere sites, has been a candidate for major urban renewal funding for over 20 years. However, the city has not committed funding to make major investments in such an isolated location, according to housing authority officials. The $25 million HOPE VI grant at Robert Taylor Homes B in Chicago is for demolishing the first 5 of 16 high-rise buildings and purchasing or building a limited number of replacement units in the surrounding neighborhood. The Chicago Housing Authority estimates that it will take 10 years to vacate the Robert Taylor development, where the unemployment rate is over 90 percent, and bring back an economically viable neighborhood. At Los Angeles’ Pico Aliso site, relocation took longer than planned because gang control of certain areas prevented many residents from being moved as anticipated. 
Despite the physical and social challenges they pose, many HOPE VI sites are located close to city centers, making them attractive to investors. At the same time, residents at some sites have viewed investors’ interest with suspicion, fearing that they will lose their homes to upscale development. In some instances, housing authorities have been able to allay residents’ concerns and proceed with capital improvements; in other instances, the residents’ concerns have delayed redevelopment. Atlanta’s Centennial Place, conveniently located within walking distance of downtown Atlanta, attracted funds from the city, private lenders, and community service providers, all of whom considered the site a desirable investment. However, residents fearing displacement initially opposed the housing authority’s revitalization plan, which called for reducing the development’s density and replacing only one in three public housing units. Eventually, the residents agreed to the plan when the executive director promised to allow those who remained in good standing (i.e., paid their rent and respected the housing authority’s rules) to return to the site. Both Mission Main in Boston and Cabrini Homes Extension in Chicago are desirably situated near city centers and have attracted private development funds, but residents’ concerns about the motives of the housing authorities and of the developers have, in both cases, delayed development. At Cabrini Homes Extension, where many residents see a leveraging plan as an attempted land grab by developers, the residents are suing the Chicago Housing Authority to ensure their right to return to the site after the capital improvements have been completed. HOPE VI sites also pose exceptional management challenges. Under the selection criteria that the Congress established for the program’s first 3 years, the housing authorities applying for funding had to be (1) located in the 40 most populous U.S. 
cities, as defined on the basis of data from the 1990 census, or (2) included on HUD’s list of troubled housing authorities as of March 31, 1992. Of the 24 housing authorities included on that list, 17 received at least one HOPE VI implementation grant during the program’s first 3 years. In total, these 17 housing authorities received 21 (55 percent) of the grants awarded to 39 sites from fiscal year 1993 through fiscal year 1995. Furthermore, at over half of these 39 sites, major changes in senior management occurred after the grant was awarded. Management turnover limits progress because time is needed to replace and reorganize staff and allow staff to learn their new duties and build relationships and trust with the community. At 16 of the HOPE VI sites, either HUD or the courts have intervened in the housing authority’s management. According to HUD officials, the primary reason for intervention was the inability of housing authorities to manage and implement the program. Management problems were particularly acute at the Washington, D.C., and Chicago housing authorities. In Washington, D.C., the housing authority was so troubled that HUD made its grant award contingent upon the appointment of an alternative administrator. This appointment took about 6 months from the time the grant was awarded, and submitting a revised revitalization plan to HUD took another 5 months. In Chicago, where HUD took over the housing authority’s operation after the housing board resigned, more than 2-1/2 years elapsed before the revitalization plan for Cabrini Homes Extension was revised and HUD’s approval of the plan was obtained. The HOPE VI program’s origin has created legal challenges and encouraged legislative and administrative changes that have further delayed sites’ progress. The program’s origin has also influenced HUD’s assessment of the program’s priorities and staffing needs. Unlike most public housing programs, which are authorized under the U.S. 
Housing Act of 1937, as amended, and operate under nationally applicable implementing regulations, the HOPE VI program was created and has been modified through appropriations legislation. Rather than develop implementing regulations that would be difficult to modify with each legislative change, HUD has incorporated the program’s legislative requirements into periodic notices of funding availability and into the grant agreements that it signs with individual housing authorities. The evolution of the HOPE VI program’s requirements is summarized in figure 7. The HOPE VI program’s establishment through the appropriations process raised legal issues that had to be resolved before HUD could fully implement the program. In the absence of regulations, the grant agreements serve as contracts between HUD and the housing authorities overseeing the HOPE VI sites. According to a HUD official, HUD took 8 months after sending out the letters announcing the fiscal year 1993 awards to finalize the first grant agreements, primarily because it was creating regulatory documents, not merely specifying the grants’ conditions. Other legal issues arose in 1995, when HUD began encouraging the housing authorities to use their federal grants to leverage private funds for redevelopment. HUD took 8 months to develop new regulations on the use of both public and private funds to finance public housing sites. Annual legislation has affected the HOPE VI program, and HUD has amended the grant agreements and the program’s guidance to reflect its interpretation of these changes. For example, until the Rescissions Act was passed in July 1995, HOPE VI sites were subject to a rule requiring the replacement of every unit removed from service. Although demolition was an option, housing authorities rarely availed themselves of it when every unit that they tore down had to be replaced. 
After the Rescissions Act suspended the one-for-one replacement rule, HUD began encouraging housing authorities to consider demolition as a way of reducing a site’s density. HUD also interpreted the 1996 appropriations legislation as adding demolition to HUD’s funding criteria. HUD, therefore, required each housing authority to demolish at least one building. Eventually, HUD concluded that demolition was not always required. These legislative and administrative changes and HUD’s interpretation of them affected progress at the sites we visited. For example, the Los Angeles housing authority welcomed the suspension of the one-for-one replacement rule and took advantage of it to revise its plan for the Pico Aliso site to include demolition and new construction. This revision, however, added about 15 months to the process, according to Los Angeles officials. In New York City, issues raised by HUD’s interpretation of the 1996 appropriations legislation as requiring demolition added months to the planning phase at the Arverne and Edgemere sites, where residents opposed the housing authority’s plan to satisfy the demolition requirement by removing the top four floors of three buildings at the site, thereby removing the equivalent of an entire building. The types of capital improvements selected have influenced the pace of development. Rehabilitation, which requires less change than demolition and new construction, has generally proved less controversial and taken less time. Plans to build off-site, reduce a development’s density, and/or permanently relocate residents have encountered more opposition from residents. For example, in Milwaukee, existing units were, for the most part, rehabilitated at Hillside Terrace. Although residents were required to move while the work was going on, most will be able to return when it is completed. Capital improvements at the site have encountered little opposition and are progressing on schedule. 
Conversely, Boston’s Mission Main planned to exchange some of its land for adjacent land owned by a local university in order to build a portion of the rental and homeownership units on the new site, but it could not reach agreement with the university. After a year of unsuccessful negotiations, the mayor vetoed the deal in March 1998 to preclude further delays. Using grant funds to leverage other public and/or private financing for development is more complex than relying on grant funds alone and may take longer. Combining funds from different sources requires adhering to and coordinating the different sources’ procedures and schedules, sometimes causing delays. In addition, many housing authorities lack the experience necessary to negotiate leveraged financing arrangements. However, as discussed in the next section of this report, leveraging has many benefits. The steps involved in using one other funding source—low-income housing tax credits—illustrate the complexity of leveraged financing. To obtain tax credits, which attract private equity for development, a housing authority must submit an application to the state and compete with other developers for the credits. The application must identify all of the proposed sources and uses of funds and undergo a subsidy-layering review to ensure that no more federal assistance is being provided than is necessary to make the development financially feasible. If tax credits are awarded for the development, the housing authority must then recruit investors who are willing to provide equity in exchange for tax credits. The process of obtaining tax credits can add several months to a project. Even though leveraging is more complex than relying on grant funds alone, according to HUD’s data, the majority of the 81 HOPE VI sites funded to date are planning to mix public and private financing, primarily by combining low-income housing tax credits or loans from private lenders with the HOPE VI grants. 
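The sources-and-uses comparison at the core of a subsidy-layering review can be sketched as follows. All line items and dollar amounts are hypothetical illustrations, not figures from any actual HOPE VI application:

```python
# Sketch of the sources-and-uses test underlying a subsidy-layering
# review: committed funding sources should cover total development
# costs without providing more federal assistance than is necessary.
# All figures are hypothetical.

def funding_gap(sources, total_uses):
    """Positive result: financing gap; negative: excess subsidy."""
    return total_uses - sum(sources.values())

sources = {
    "HOPE VI grant": 20_000_000,
    "tax-credit equity": 8_000_000,
    "private loan": 5_000_000,
}
gap = funding_gap(sources, total_uses=33_000_000)
print(f"gap: ${gap:,}")  # $0 here: sources exactly cover uses
```

A positive gap means the housing authority must recruit additional investors or lenders; a negative one signals excess subsidy that a reviewer would flag.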
At 30 of the sites, the combined funds from public and/or private sources exceed the HOPE VI grant. Most of the 23 sites selected in the 1997 funding round reflect the program’s new emphasis on forging partnerships and leveraging outside resources. The use of leveraged financing may enable sites to stretch limited federal dollars, create opportunities for mixed-income developments, and attract nonprofit and for-profit partners with experience in leveraged financing. For example, at Centennial Place in Atlanta, the housing authority combined low-income housing tax credits and private funding with the HOPE VI grant to create a mixed-income community. The grant funds provided public housing units for residents with very low incomes, the tax credits financed units for residents with low to moderate incomes, and the private funding paid for the development of market-rate units for residents with moderate to high incomes. At Kennedy Brothers Memorial in El Paso, the housing authority is combining private funds with the HOPE VI grant to produce both public housing units and identical or similar units that will be available to home buyers. At Ellen Wilson in Washington, D.C., the income mix will be completed by offering, at full market prices, fee simple units that are identical or similar to the neighboring limited-equity cooperative units. These units will be purchased using private mortgages, and the profits from their sale will establish an endowment for ongoing neighborhood community and support services. However, other sites, such as Arverne and Edgemere in New York and Robert Taylor Homes B in Chicago, are physically isolated or suffer from extreme economic distress, making them unattractive to outside investors. (Fig. 8 identifies the sources of funding for the 10 sites we visited.) Recent appropriations acts have encouraged leveraging by reducing the size of the HOPE VI grants. 
On average, the size of the grants has declined from about $45 million during fiscal years 1993 and 1994 to about $21.6 million in fiscal year 1997 (see fig. 9). To some extent, this decline is consistent with a change in the 1996 appropriations legislation that eased the program’s eligibility requirements. As a result of this change, some of the more recently selected sites are smaller and have greater potential for leveraging than the original sites. HUD anticipates further increases in the use of leveraging after it implements a new total development cost (TDC) policy, expected to go into effect with the fiscal year 1998 grants. Under HUD’s existing TDC policy, the per-unit costs of development are capped. These costs—including the costs of land acquisition, building acquisition or construction, builders’ overhead and profit, and financing—are not to exceed the housing industry’s standards for multifamily properties in a given area. While HOPE VI sites have always been subject to TDC caps, they have typically received exceptions because of the extraordinary demolition, remediation, and other costs involved in urban revitalization. Under the new policy, applicable to developments financed with grants awarded in fiscal year 1998 and beyond, items paid for with HUD funds, including public housing, HOME, and Community Development Block Grant (CDBG) funds, will not be eligible for exceptions. However, items paid for with non-public-housing funds controlled by a locality, a state, or the private sector, including low-income housing tax credits, will not be subject to the TDC caps. Consequently, HUD officials believe that housing authorities will be forced to rely on non-HOPE VI funds to a greater extent. During the past 2 to 3 years, staffing cuts in headquarters and the field have diminished HUD’s capacity to oversee the HOPE VI program. 
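The mechanics of the new TDC test can be sketched as follows: only items paid for with HUD-controlled funds count against the per-unit cap, while items financed from state, local, or private sources fall outside it. The cost items, amounts, and cap level below are hypothetical:

```python
# Sketch of the new TDC policy: HUD-funded items (public housing,
# HOME, CDBG) count against the per-unit cap with no exceptions,
# while items paid for with non-HUD funds are exempt. Cost items,
# amounts, and the cap are hypothetical illustrations.

def hud_cost_per_unit(cost_items, units):
    """cost_items maps item name -> (amount, paid_with_hud_funds)."""
    hud_total = sum(amount for amount, hud in cost_items.values() if hud)
    return hud_total / units

cost_items = {
    "construction (HOPE VI)":      (24_000_000, True),
    "infrastructure (CDBG)":       (3_000_000, True),
    "market-rate units (private)": (9_000_000, False),
}
per_unit = hud_cost_per_unit(cost_items, units=300)
print(f"HUD-funded cost per unit: ${per_unit:,.0f}")  # $90,000
```

Because privately financed items drop out of the numerator, shifting costs to non-HUD sources is the main way a housing authority can stay under the cap, which is why HUD expects the policy to push authorities toward greater leveraging.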
Although HUD has hired contractors to provide some additional oversight and recently decided to add 11 positions, the new staff will need time to become familiar with the program. In June 1997, HUD issued its 2020 Management Reform Plan, which calls for reorganizing and downsizing the agency. Under this plan, HUD will cut its staff from 10,500 to 9,000 by the year 2000. The HOPE VI program was not exempted from staff cuts. From March 1995 through March 1998, the number of grant managers responsible for overseeing HOPE VI grants dropped from six to two, while the number of HOPE VI grants more than tripled. Similarly, from August 1996 (when the Office of Public Housing Partnerships was established) through March 1998, the number of experts in leveraged financing decreased from five to two. During this period, complex leveraged financing proposals became the norm for HOPE VI sites. In 1997, efforts to streamline HUD’s field structure left few employees in the field with knowledge of HOPE VI issues. In some instances, field offices with HOPE VI responsibilities were closed, and in other instances, key staff moved to other locations or new assignments. For example, the division director and the site manager of the Milwaukee field office, who had worked closely with the Milwaukee housing authority, accepted positions in other field offices. According to officials from the Milwaukee housing authority, their close working relationship with the HUD field office staff contributed to the success of the city’s HOPE VI redevelopment. In 1997, HUD hired outside contractors to help develop management systems and oversee the HOPE VI program. This action responded, in part, to an increasing number of reports issued by HUD’s Inspector General documenting improper expenditures and management deficiencies at individual HOPE VI sites. In 1997, HUD also began hiring private “expediters” to help housing authorities move through the HOPE VI process. 
But even with these additional resources, program officials have expressed concerns about not having enough staff to develop and implement programs for improving the management of HOPE VI sites. Program officials were also concerned that housing authorities were not as responsive to the consultants as they would have been to HUD staff. In March 1998 testimony before the Subcommittee on VA, HUD, and Independent Agencies, House Committee on Appropriations, we questioned whether HUD has the capacity to properly manage the HOPE VI program. In April 1998, HUD reevaluated its HOPE VI staffing and decided to add new positions. HOPE VI program directors believed the new positions would help them catch up with the growing workload but noted that they had lost expertise through the earlier staff cuts. For example, the Director of Public Housing Partnership programs said that it takes a number of months to train a new professional in the details of underwriting HOPE VI sites. After 5 years, the federal government’s investment in HOPE VI grants is beginning to produce visible results in the form of capital improvements at some sites. These improvements are helping to break down the barriers isolating the HOPE VI sites from neighboring areas. HUD has developed some outcome-based measures for capital improvements, as the Government Performance and Results Act requires, and is collecting and reporting data on rehabilitation, demolition, and new construction at the sites. Although HUD has encouraged grantees to develop performance indicators for community and support services, it has not established such indicators itself. It has, however, hired a contractor to begin collecting the data needed to establish a baseline for charting the incremental results of these services across sites. HUD could use the data to develop consistent national, outcome-based measures for community and support services at HOPE VI sites. 
Such measures are important to comply with the Government Performance and Results Act and to ensure that federal expenditures are producing the intended results. As the HOPE VI program has evolved, its focus has shifted from revitalizing the most severely distressed public housing sites to transforming distressed sites with the capacity to leverage outside resources into mixed-income communities. This shift has led to positive results at sites in economically viable locations, such as Centennial Place in Atlanta and Ellen Wilson in Washington, D.C. However, some severely distressed properties in severely distressed neighborhoods, such as Robert Taylor Homes B in Chicago and Arverne and Edgemere in New York City, may not be able to attract investment partners or leverage the funds needed to transform neighborhoods. Thus, the current HOPE VI funding model may not be adequate to revitalize some of the nation’s most severely distressed sites. We recommend that the Secretary of Housing and Urban Development use the baseline data that the Department collects to develop consistent national, outcome-based measures for community and support services at HOPE VI sites. We provided copies of a draft of this report to HUD for its review and comment. HUD commented that the report was fair and objective but expressed some concern with our characterization of the more recently selected sites as less severely distressed than the sites selected in the program’s early years. We agree that the more recently selected sites are suffering from structural and social distress and are likely to be among the most distressed sites in the cities that received the recent grants. But unlike some of the early sites, whose location in isolated or severely economically distressed neighborhoods has prevented them from finding leveraging partners, the sites chosen since 1996 have typically been smaller and located in areas where private interests have been more willing to invest funds. 
We revised our report to clarify this point. HUD also expressed some concern with our recommendation, stating that because the different sites are expected to tailor their plans to address the specific needs of their communities and residents, it may not be possible to establish HOPE VI-wide measures that would be applicable to all programs. We agree that the HOPE VI sites are unique and that the program should not be constrained in ways that would inhibit creativity. Yet even though the needs of the communities and residents may vary by site, the types of community and support service programs offered at the sites we visited (e.g., day care, after-school care, equivalency degree, job training, and job placement programs) were consistent enough to allow the collection of national data on the outcomes of these programs. Accordingly, we have retained our recommendation to this effect. HUD’s complete written comments and our responses appear in appendix IV. We will send copies of this report to the appropriate Senate and House committees; the Secretary of HUD; and the Director, Office of Management and Budget. We will make copies available to others upon request. We conducted our work from August 1997 through June 1998 in accordance with generally accepted government auditing standards. Major contributors to this report include Gwenetta Blackwell, Linda Choy, Elizabeth Eisenstadt, Andy Finkel, Rich LaMore, Karin Lennon, and Paul Schmidt. The House Report (105-175) and the Senate Report (105-53) accompanying the fiscal year 1998 appropriations act for the departments of Veterans Affairs and Housing and Urban Development, and independent agencies (P.L. 105-65) included requests for GAO to study the status of the HOPE VI program to determine why grantees had not acted more expeditiously. 
As requested, we reviewed (1) the progress in completing capital improvements and implementing community and support services at HOPE VI sites, (2) the primary reasons why progress at some HOPE VI sites has been slow, (3) the extent to which leveraging is planned to be used at HOPE VI sites, and (4) HUD’s capacity to oversee the program. To assist us in responding to these objectives, we used data developed by HUD and HUD’s contractor on the 81 sites that had received awards through 1997, and we visited 10 sites in eight cities. We selected these sites because they were geographically diverse, had received grants during different fiscal years, and were at various stages of progress, especially in those cities that had received grants for two separate developments. To respond to the first objective, we reviewed the information developed by HUD’s contractor on the 81 sites. The data included each grantee’s current expenditures for capital improvements; the number of units demolished, rehabilitated, or scheduled for demolition or other revitalization activities; and the grantee’s community and support services. Furthermore, during our site visits, we gathered additional information on capital improvements and community and support services. In several locations, we observed specific community and support service programs that were in operation and obtained information on any results to date. To assess why progress has been slow at many HOPE VI sites, we evaluated the impact of the legislative and administrative changes that have occurred since the program’s inception. Because appropriations legislation included changes to the program nearly every year, we assessed the impact of these changes by discussing them with HUD and housing authority officials, as well as by reviewing the notices of funding availability that HUD prepared. These notices generally reflected the legislative changes. 
In addition, at sites we visited where significant delays had occurred, we reviewed HUD’s files and discussed the delays with HUD headquarters and field officials. We also met with housing authority officials, reviewed their program files, and obtained their views on how soon they expected revitalization efforts to be completed. Finally, we met with representatives of tenant organizations at some of the sites to obtain their views on what factors contributed to the delays, what has been done to overcome the delays, and how they think HUD and housing authority officials have addressed their concerns.

To assess the extent to which leveraging had been used or is planned to be used, we reviewed the data collected by HUD’s consultants on each grantee. We also obtained information on leveraging by speaking with housing authority officials during our site visits and by reviewing individual sites’ revitalization plans.

To assess HUD’s capacity to oversee the program, we interviewed HUD officials at headquarters and at the field offices near our selected sites, and we reviewed HUD’s program guidelines, project files, and status reports. In addition, we reviewed the correspondence and required quarterly reports sent by the housing authorities at our selected sites to HUD. We also assessed HUD’s program staffing history, current staffing outlook, and use of a contractor hired in 1997 to develop management systems for overseeing the program. Finally, we considered what impact HUD’s 2020 Management Reform Plan may have on the HOPE VI program. We conducted our work from August 1997 through June 1998 in accordance with generally accepted government auditing standards.

As figure I.1 shows, the Atlanta Housing Authority was notified of its implementation award in 1993, and the revitalization of the Techwood/Clark Howell Homes site has moved forward expeditiously since then, with extensive demolition and new construction.
Centennial Place, a new mixed-income community, was created on the site. Centennial Place is the first mixed-income community being developed with HOPE VI and private funds. The former Techwood/Clark Howell public housing site is being transformed as part of a larger effort, the Olympic Legacy Program, which will revitalize 2,935 units of public housing and build a new school, YMCA, and hotel with private, federal, state, and local funds totaling more than $350 million. The housing portion of Centennial Place will cost about $84 million—$42.6 million from the HOPE VI grant and the remainder from low-income housing tax credits and private, state, and local funding. Consisting of 900 garden apartment and town house rental units, Centennial Place is being leased to residents at three income levels: 40 percent of the households are eligible for public housing, 20 percent qualify for low-income housing tax credit support, and 40 percent pay market rates.

The prime location of Centennial Place is a key ingredient in its success. The Atlanta Housing Authority and its private-sector partner, the Integral Partnership of Atlanta (a joint venture of The Integral Group and McCormack Baron), have marketed the location’s proximity to two major universities (Georgia Tech and Georgia State), the headquarters of Coca-Cola, the downtown area, and Interstates 85 and 75.

Local support has also benefited Centennial Place. Relationships with the city of Atlanta, Fulton County, the United Way, the YMCA, the Atlanta Public Schools, the Department of Family and Children’s Services, Georgia Tech, Georgia State University, and other state and local agencies, businesses, and academic institutions will, according to housing authority officials, facilitate leveraging.

Careful efforts to relocate all public housing residents through a choice-based relocation program have forestalled residents’ opposition to the redevelopment of Techwood.
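The three-tier income mix described above determines concrete unit counts. The sketch below is illustrative only: the function name is ours, and it assumes the percentages apply uniformly to all 900 units, which may not hold phase by phase.

```python
# Illustrative only: splits Centennial Place's 900 rental units among the
# three income tiers the report describes (40 percent public housing,
# 20 percent tax credit, 40 percent market rate).
def unit_mix(total_units=900, shares=(0.40, 0.20, 0.40)):
    labels = ("public housing", "tax credit", "market rate")
    return {label: round(total_units * share)
            for label, share in zip(labels, shares)}

print(unit_mix())
# {'public housing': 360, 'tax credit': 180, 'market rate': 360}
```

Under these assumptions, 360 units would serve public housing residents, 180 would serve tax-credit households, and 360 would be leased at market rates.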
The Atlanta Housing Authority has just received an award from the National Association of Housing Redevelopment Organizations for its relocation program. The relocation staff usually meet several times with the families affected by the relocation plan—first in a large group, then with a few (e.g., five) families, and finally with just one family. Two-thirds of the former Techwood residents chose Section 8 certificates, and 95 percent of these families found apartments. A former resident sued the housing authority, claiming that it changed the reoccupancy rules after Centennial Place was completed. The case has been settled, and the resident is moving into Centennial Place.

Now that the development’s structures have been revitalized, the housing authority has shifted its focus to jobs, job training, and education. Social services and case management (i.e., identifying internal and external resources and making referrals to service providers) will be provided to the public housing residents in a YMCA located on the property at Centennial Place, as well as in the historic community center renovated with HOPE VI funds. Because many of the activities available to residents under the HOPE VI program require training in special skills, the program calls for establishing a job and skills bank and various support service programs. The educational, training, and self-improvement programs designed under the HOPE VI program are aimed at helping the residents realize their personal goals.

Centennial Place is being developed in five phases. Phases I and II are complete, and all units have been leased or have residents approved for leasing. Phase III is scheduled for completion in December 1998, Phase IV by the year 2000, and Phase V early in 2001. Of the 144 units reserved for public housing residents, 96 are currently occupied by former Techwood/Clark Howell residents.
As figure II.3 shows, Mission Main was notified of its grant award of about $50 million in November 1993. However, the site’s revitalization has been at a virtual standstill since then.

Built in 1940, Mission Main originally comprised 1,023 units, but conversions and reprogramming for nonresidential use reduced that number to 822. According to one evaluation, the site is badly deteriorated, and the physical layout of the site and buildings has facilitated criminal behavior. According to 1995 information from the Boston Housing Authority, 74 percent of the applicants for public housing who were offered units in Mission Main rejected those units because of the development’s poor physical condition and high crime rates. Approximately 500 units were occupied in December 1995, when the housing authority applied to HUD for permission to demolish the existing units.

Selected in fiscal year 1993 to receive one of the first HOPE VI implementation grants, Mission Main was the type of distressed site that, according to a HUD official, the Congress wanted to revitalize when it established the HOPE VI program. HUD selected it not only because its distress was well documented but also because its solutions were well thought out. HUD believed that the site’s proximity to one of Boston’s major medical communities, several colleges, and important cultural institutions would facilitate its integration with the community.

The original revitalization plan called for rehabilitating the existing units, but the revised plan proposed to demolish and replace them with newly constructed town house units. The revised plan is expected to cost over $100 million. Primary funding sources include about $100 million in federal funds ($39 million from the HOPE VI grant, $40 million in equity generated through the sale of low-income housing tax credits, and $21 million from HUD’s Comprehensive Grant Program) and about $6 million in local funds for infrastructure work.
Making Mission Main safe is the first of the development’s six revitalization goals. The remaining goals include making the housing sound and attractive, improving the housing authority’s responsiveness, rewarding personal responsibility, integrating the development into the neighborhood, and reinforcing the community. The Boston Housing Authority has planned a community and support service program for both Mission Main and Orchard Park, another HOPE VI project located less than 2 miles from Mission Main. The goal of this program is to integrate the developments’ residents into the surrounding area’s mainstream service network. According to the housing authority’s plan, HOPE VI funds will be used to fund gaps in services, not to duplicate existing services. The program will address the long-standing issues of poverty, joblessness, and isolation affecting Mission Main’s residents. The Boston Housing Authority has not entered into any contracts for community and support services. It has begun to identify partners in the community and is planning to hire an independent contractor to measure the effectiveness of its plan for community and support services. Several factors have slowed Mission Main’s progress, including changes in the development’s plans and management and opposition from tenants. After the site’s original HOPE VI director resigned in March 1995, the new director, hired in May 1995, began to recognize, with other city officials, that the plan for rehabilitation would have little impact on revitalizing a significantly distressed property. Changing the plan to include demolition and tax credit leveraging with private developers took several months. Opposition from residents distrustful of the housing authority and of these changes also slowed activity at the site. According to housing authority officials, the residents considered the changes too dramatic and believed they were occurring too fast. 
The residents feared that the housing authority would demolish their homes and not allow them to return after the renovation. When the Boston Housing Authority hired a developer in May 1996, the residents sided with the developer against the authority. According to a housing authority official, the developer told the residents that they would be equal partners and that the housing authority would have no role. As a result of these divisions, the project was stalled. In April 1997, HUD issued a default letter to the housing authority, threatening to remove the grant if the parties did not move forward. HUD issued the letter, in part, because the housing authority had failed to resolve the impasse with the developer and submit a mixed-income proposal in accordance with its revised revitalization plan. HUD has appointed an expediter to provide expert advice to both the housing authority and the developer, as well as to keep HUD apprised of Mission Main’s progress and any problems. Although the original developer agreed to step aside when the housing authority requested permission from HUD to become the developer, the situation has since changed. The original developer, according to a HUD official, is expected to assume responsibility for the development once its implementation plan is approved. HUD’s contract auditors recently reviewed the housing authority’s and the developer’s expenditures to ensure their legitimacy. Financial differences between the housing authority and the developer have been resolved, HUD has approved funding to pay the outstanding bills, and the development is expected to go forward. HUD, however, is still questioning some of the housing authority’s prior expenditures, especially about $738,000 for support service contracts that were used to provide services for persons who were not Mission Main residents. HUD has asked the housing authority to install the proper controls in its accounting system. 
According to a housing authority official, only a portion of the amount is in question—the portion that was used to provide services to tenants in a development associated with Mission Main. This official also noted that HUD is currently awaiting a revised implementation plan from the development team. As figure II.5 shows, Pico Aliso was awarded an implementation grant in the fall of 1995, but the revitalization effort did not move forward for some time, primarily because the housing authority decided to revise the revitalization plan after the Congress suspended the one-for-one replacement rule. In July 1997, the housing authority submitted a plan for accelerating the development, and the revitalization effort has since proceeded expeditiously. New construction began in March 1998. Together, the twin housing developments of Pico Gardens and Aliso Village form the largest public housing complex west of the Mississippi. All 577 units at Pico Aliso, built in the 1940s and 1950s, will be demolished, and 421 new units will take their place. These will include 280 rental units and 7 units for sale at Pico Gardens and 74 detached courtyard homes and 60 apartments for senior citizens in Aliso Extension. New administrative offices and a child care center will also be built. According to statistics compiled by the Los Angeles Police Department, the crime rates at Pico Aliso are among the city’s highest, in large part because at least seven gangs operate in the area. The site was redesigned with safety and security in mind. Flat roofs, which gang members had used as shooting platforms, were eliminated, as were blind entryways. Parking lots and open areas that had previously led to turf wars were also reconfigured. New units have been designed with private backyards and individual entrances. Community and support service programs are being designed to foster a cooperative and nurturing spirit among residents. 
The housing authority is working with the city of Los Angeles to prepare an economic development plan. The authority also solicited the participation of the city’s community redevelopment agency because the Pico Aliso site is adjacent to a proposed redevelopment area. In addition, the site is located within the East Side Economic and Employment Incentive Zone, which offers several incentives to businesses. Finally, the authority is in partnership with the Los Angeles Community Development Bank.

Several entities, such as Homeboy Industries, Jobs for the Future, the East Los Angeles Skill Center, and the Los Angeles Conservation Corps, are working with the housing authority to provide job opportunities for Pico Aliso residents. In addition, two labor unions—the Laborers International Union of America and the United Brotherhood of Painters—have established a $500,000 grant to place 22 residents in a 24-month apprenticeship demonstration program.

Community services provided by the HOPE VI grant include youth apprenticeship, public safety, and economic development programs; support services include job training, gang prevention, after-school tutoring, and a primary health clinic. In addition, the city of Los Angeles has made $2 million in federal Community Development Block Grant (CDBG) funds available for a multipurpose recreational center at the site. The Los Angeles Department of Recreation and Parks will maintain the facility, and other staff for the center will be funded by a nonprofit organization, the Aliso Pico Business Community, Incorporated. Overall, the city has provided $494,730, or about 17 percent of the site’s funding, for support services.

About 250 households opted for Section 8 housing during construction, and 176 have the option to return to the site. An additional 30 purchased homes.

The major cause of delay in starting construction at the Pico Aliso site was the housing authority’s decision to revise the site’s revitalization plan.
The housing authority accepted the delay so that it could take advantage of the Congress’s July 1995 suspension of the one-for-one replacement rule and reduce the site’s density. The revision added 15 months to the planning process—3 months to redesign the architecture, 9 months to revise the plan with the community, and 3 months to respond to HUD’s requests for changes and to obtain HUD’s approval.

The housing authority also took time to respond to residents who objected to relocation plans proposing to place them outside the complex. Because 99 percent of the development’s units are occupied, there is little room to move residents within the complex. Gang territories further complicated the relocation process. The revitalization plan states that there are at least 3 major gang “turfs” within the development and at least 15 others in the neighborhood. With the assistance of the League of Women Voters, the housing authority held an election in which 95 percent of the households in the development voted. A new resident advisory committee was elected and supported the housing authority. In total, it took about 3 years for the housing authority to obtain the residents’ trust and persuade the residents to relocate.

Demolition has been completed at Pico Gardens and Aliso Extension. Construction began for 148 units at Pico Gardens on March 3, 1998, and for 42 units at Aliso Extension in May.

As figure II.7 shows, the Housing Authority of the City of Milwaukee was awarded a HOPE VI demonstration grant for Hillside Terrace in 1993, and the development is near completion. Hillside Terrace is the highest-density public housing development in Milwaukee. It consists of 540 family units, including two- and three-story walkups and row houses, built on about 24 acres in 1948 and 1950.
On the one hand, the site is surrounded by a light industrial and commercial district and is isolated from other immediate residential areas; on the other hand, it is located just a few blocks from downtown Milwaukee. The vacancy rates at Hillside Terrace, ranging from 6 to 10 percent, were higher than at any of the housing authority’s other sites. Streets within the development terminated at its boundaries, isolating it from the surrounding area. Without through streets, residents had limited access to buildings; emergency responders, such as firefighters and police, were delayed; curbside garbage collection was nearly impossible; and quiet areas sheltered drug activity. Although the buildings were structurally sound, some of their boilers and heating systems were wearing out, parking facilities and public lighting were inadequate, and the property was severely eroded. The physical conditions discouraged outdoor play or family activities. The housing authority received a $40 million HOPE VI implementation grant in fiscal year 1993, plus two subsequent amendment grants in fiscal year 1995 totaling an additional $5.7 million to revitalize 496 of the 540 units at Hillside Terrace. The goals for the HOPE VI development were to (1) enhance the marketability of the family units by reducing their density, (2) reduce the physical isolation of the site by creating through streets, and (3) encourage economic self-sufficiency among the residents. To accomplish these goals, the housing authority planned to demolish 119 dwelling units in 15 buildings, thereby making way for through streets. The units were to be replaced by 79 units at scattered sites outside the development and 39 Section 8 certificates. The 377 remaining units were to be substantially rehabilitated and modernized. The housing authority also planned to expand community and support services, aiming primarily to help residents become permanently self-sufficient. 
Because the capital improvements at Hillside Terrace consisted primarily of rehabilitation, few residents were displaced, and because the rehabilitation was largely funded through the HOPE VI grant, the financing was straightforward. Without the complications associated with permanent relocation and leveraging, the HOPE VI development moved forward on schedule. Some delays took place during demolition, however, because of underground oil tanks, unmarked utility lines, and an undocumented brewery tunnel, and another delay occurred when a contractor filed for bankruptcy.

Housing authority officials ascribed the success of the development to several factors, including the low rate of turnover in the authority’s management staff; the experience of staff at the housing authority and the HUD field office; good working relationships with the residents, the HUD field office, the city, and state agencies; and a good state economy. In addition, the public, the media, the mayor and alderwoman, and the resident council have generally supported the improvements at Hillside Terrace. There have been no lawsuits or organized opposition to the development.

The majority of Hillside Terrace’s HOPE VI grant has been disbursed, and the development has progressed on schedule. The site’s physical improvements are scheduled to be completed by July 1, 1998. The two through streets, whose construction required the demolition of 15 buildings, are complete. As a result, the site is no longer physically isolated, and it has more green space, playscapes, and parking. Street lights and walkways were installed to match those of other residential areas in the city. According to residents, gang-related drug traffic and crime have significantly decreased. The three-story walk-ups were rehabilitated to include rear stairwells and individual entrances, creating defensible space. The interiors of all units were substantially upgraded.
The existing community center was expanded to house current and future support service agencies, a day care center, and a clinic. Relocation was completed while rehabilitation was in progress, and, according to the housing authority, most residents who chose to return and met the screening requirements have moved back, now that the work is almost complete. The demolished units were replaced with new units completed and under construction on scattered sites, and the remaining units were replaced with Section 8 certificates or vouchers. According to a housing authority report, two Hillside Terrace residents have purchased new replacement units, and another is working on financing options. According to the housing authority, 336 of 355 rehabilitated units are occupied and 22 units are currently being rehabilitated.

The residents are required to sign an addendum to their lease in which they agree to undergo an employability assessment, participate in the resident council and block watch program, and volunteer 4 hours a month by, for example, cleaning up litter at the site. Community and support services are ongoing and include a neighborhood mentoring program, a scholarship fund, job training programs, a resident-owned business program, day care services, preventive health care services, and educational extension programs available through the local university. The housing authority has also provided training and employment opportunities to residents as construction inspectors.

Housing authority officials do not believe that they will spend all of the funds budgeted for community and support services because many of these programs existed before the HOPE VI grant was awarded and are self-sustaining. According to an April 1998 HUD report, the housing authority had spent about 30 percent of the $4.3 million budgeted for community and support services and management improvements.
The housing authority has collected and reported demographic information at Hillside Terrace, such as changes in residents’ incomes and crime statistics. For example, the authority reported that the percentage of families living below the poverty level dropped from 83 percent in 1993 to 63 percent in 1997, and the percentage of working families increased from 17 percent in 1993 to 55 percent in 1997. Housing authority officials also plan to collect data on the numbers of residents obtaining services such as day care and health care. During the past 3 years, the housing authority has applied for HOPE VI grants for two additional developments, but these applications were not selected.

As figure II.9 shows, Ellen Wilson Dwellings was awarded its implementation grant in late 1993. Although the grant took some time to implement, construction at the development is progressing well.

Built in 1941, Ellen Wilson Dwellings had 134 units, which were demolished in 1996. The deteriorated site had been vacant since 1988 when, according to a District of Columbia Housing Authority official, the authority first planned a significant rehabilitation effort. However, the authority found that the costs of the proposed rehabilitation exceeded the available HUD modernization funds, and continuous changes in the housing authority’s leadership precluded further progress.

Because Ellen Wilson is located within the boundaries of the Capitol Hill historic district, a community group concerned about developing the site formed a community development corporation in 1991 called the Ellen Wilson Neighborhood Redevelopment Corporation. The community development corporation’s goal was to redevelop the location, although not necessarily as a public housing site. The corporation formed a partnership with two companies experienced in community development to develop a proposal for revitalizing the area, possibly with the use of Section 8 funding.
However, the HOPE VI legislation created a more viable source of funding for a comprehensive revitalization effort. After the HOPE VI Urban Revitalization Demonstration Program was established in October 1992, the housing authority selected the community development corporation’s developer in a competitive process to develop a plan for mixed-income housing on the Ellen Wilson site.

Although Ellen Wilson received a $15.7 million implementation grant in fiscal year 1993, the first year in which grants were awarded, HUD later made the award contingent on the appointment of an alternative administrator. According to a housing authority official, even though the firm to serve as alternative administrator was identified in the HOPE VI grant application, the appointment did not occur until March 1995. In June 1995, HUD amended the award, providing an additional $9.4 million to cover increased costs, including those for infrastructure and environmental remediation. The additional funding also allowed the development to be set up as a cooperative—an arrangement under which the development will not receive any operating subsidies from HUD.

According to a housing authority official, the community development corporation and the housing authority are beginning to establish a community and support service program at Ellen Wilson. A step-up apprenticeship construction program has been established so that public housing residents in the Ellen Wilson neighborhood can work on the construction of the Ellen Wilson site. Furthermore, a modified self-sufficiency program is being established to help individuals, especially former Ellen Wilson residents, develop a dependable source of income so that they can qualify to reside in the revitalized Ellen Wilson development.
Moreover, the housing authority and the community development corporation have been working for a year to identify and develop contacts with all social service providers and support groups, such as churches and nonprofit organizations, in the Ellen Wilson neighborhood. They are also developing a health care compact under which the residents of Ellen Wilson and surrounding public housing developments would all be served by one health maintenance organization. According to a housing authority official, future funding for community and support services will come from an endowment generated by the expected market-rate sales of 20 homes at Ellen Wilson, built with a market-rate loan of HOPE VI funds. When these homes are sold, the construction loans will be repaid, and the sale profits and repaid loan funds will be invested to establish an endowment to fund ongoing community and support services. Several factors have slowed revitalization at Ellen Wilson, including problems with the housing authority’s management, the neighborhood’s opposition to a public housing development, environmental issues, and delays in obtaining HUD’s approval of the cooperative arrangement and development costs. As noted, HUD responded to the site’s management problems by making the initial grant agreement contingent on the appointment of an alternative administrator. Subsequently, a Superior Court judge appointed a receiver for the housing authority, and a private firm was designated by HUD and the housing authority to administer the grant. Once these management issues were resolved, the project began to move forward. Satisfying the concerns of the neighborhood’s residents also took time. According to a housing authority official, the residents who did not want any public housing built were very vigilant about the proposed development. These residents raised questions and concerns with the zoning board, and resolving them took time.
In addition, the housing authority had to replace contaminated soil at the site and install a holding tank to handle runoff from rain. According to a HUD official, the proposed cooperative arrangement and total development costs took time to approve. The proposal to establish a cooperative was the first of its kind at a HOPE VI site. Unlike other sites, Ellen Wilson was not requesting any future operating subsidies from HUD and was not receiving any low-income housing tax credit funding, and each resident would have an equity interest in the development. According to a HUD official, the total development costs were higher at Ellen Wilson than at most other HOPE VI sites. A HUD official also noted that the need to conform to Capitol Hill’s architectural guidelines—which require such amenities as exterior staircases, tile and brick fronts, and elevated front yards—and to resolve environmental problems contributed to these high costs. Development at Ellen Wilson is currently on schedule. The first units are to be available in September 1998, and the development is to be completed by the summer of 1999. When completed, it will have 134 limited-equity cooperative units and 20 units available for sale at prevailing market rates. All residents will be considered owners, including those in the 67 units that will be set aside for households earning 50 percent or less of the area’s median income. The down payment for each household will be based on 5 percent of its annual income at the middle of its income band, subject to market conditions (e.g., 5 percent of the middle of the band covering 0 to 25 percent of the area’s median income). A community development corporation official noted that a person earning $6 per hour could qualify for a unit in the lowest income range.
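The down-payment formula can be illustrated with a hypothetical figure for the area's median income (the report does not state the actual figure for the District of Columbia, so the dollar amounts below are illustrative only):

```latex
% Hypothetical illustration of the Ellen Wilson down-payment formula.
% Assume, for illustration only, an area median income (AMI) of $50,000.
% For a household in the lowest income band (0 to 25 percent of AMI),
% the middle of the band is 12.5 percent of AMI:
\[
\text{income at middle of band} = 12.5\% \times \$50{,}000 = \$6{,}250
\]
\[
\text{down payment} = 5\% \times \$6{,}250 = \$312.50
\]
```

Under this formula, every household in a given band pays the same down payment regardless of where its actual income falls within the band.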
As figure II.11 shows, the Chicago Housing Authority was awarded a $50 million HOPE VI implementation grant in fiscal year 1994, but HUD did not approve a revised revitalization plan, which stemmed from management changes in 1995, until September 1997. Cabrini Homes Extension is the largest of three developments that make up Cabrini-Green, known in Chicago and nationwide as one of the country’s most distressed public housing sites. It is located on Chicago’s near north side, adjacent to a high-rent neighborhood and theater district that is undergoing a boom in the construction of new single-family homes, row houses, condominiums, and town houses. In addition, some of Chicago’s most desirable real estate, located on Michigan Avenue and commonly known as the Magnificent Mile, is just a few blocks away. Cabrini-Green is a 70-acre site with 3,606 family units in 86 residential buildings belonging to three separate developments—Frances Cabrini Homes (55 row house buildings), Cabrini Homes Extension (23 high-rise buildings), and William Green Homes (8 high-rise buildings). Only the row houses meet HUD’s minimum housing quality standards. Cabrini Homes Extension, built in 1958, consisted of 1,921 units with 3,695 residents as of 1993. The 36-acre site also included a management office, a central heating plant, and a community center. About 32 percent of the units at Cabrini Homes Extension were occupied. According to the housing authority’s reports, the property is severely distressed, the buildings’ design is defective, and the buildings’ systems are deficient and deteriorated. Because the site’s design included no through streets, the streets create a maze of dead ends conducive to criminal activity. Stairwells also shelter drug deals and physical assaults. According to the housing authority, the available resources are not adequate to meet the site’s extensive capital and modernization needs. HUD has listed the Chicago Housing Authority as troubled since 1979.
By its own account, the housing authority was plagued by mismanagement and by the negative opinion of the public and residents alike. The housing authority’s board resigned in May 1995, and HUD assumed control. The authority is run by a former HUD assistant secretary and a five-member executive advisory committee appointed by HUD. The housing authority received a $50 million HOPE VI implementation grant in fiscal year 1994 for Cabrini Homes Extension. The HOPE VI funds, along with $19 million in public housing development funds, are to construct or acquire 493 replacement units for families that are eligible for public housing and to demolish eight distressed high-rise buildings containing 1,324 deteriorated units at Cabrini Homes Extension. The public housing replacement units, representing about 30 percent of the planned new units, are to be interspersed with market-rate units. The remaining units are to be reserved for moderate-income families (20 percent of the total) and for households paying market rates (50 percent). The housing authority plans to acquire approximately 250 replacement units on four new development sites. Over $8 million of the site’s HOPE VI grant is designated for community and support services, which are designed to promote self-sufficiency and economic independence. The services range from education to substance abuse intervention to a variety of economic development initiatives. For example, Cabrini Textiles is a silk-screening company that trains residents and provides work experience in a productive environment. The housing authority has used the HOPE VI funds to leverage resources from the city and the private sector. The HOPE VI development at Cabrini Homes Extension has served as a catalyst for the city’s Near North Side Neighborhood Revitalization Initiative, which represents a total estimated commitment of $315 million in public and private funds to transform Cabrini-Green and the surrounding community.
The initiative will include the construction of 2,000 new mixed-income housing units (row houses, duplexes, and mid-rise buildings), a new town center, a commercial district with a grocery store and shopping facilities, a district police station, new schools, a library, and a community center. Management turnover at the Chicago Housing Authority and changes to the HOPE VI program led to delays in developing the site’s revitalization plan. After HUD rejected an application for a HOPE VI grant for Cabrini Homes Extension in fiscal year 1993, it funded an application the next year, in accordance with a requirement in the appropriations act that it fund, without further competition, housing authorities that had applied in fiscal year 1993. According to the housing authority’s executive director, a significantly flawed proposal was funded and set up to fail. The director at that time pursued the proposal, submitted a revitalization plan to HUD in March 1995, and resigned 2 months later. Then, as noted, the housing authority’s board resigned, and HUD took over the authority’s management, changing and expanding the scope of the original plan for the site. After taking time to reorganize and try to restore relationships with the community, the new leadership submitted the revised revitalization plan to HUD in June 1997, and HUD approved the plan in September. Residents’ concerns and legal actions have also contributed to delays in the site’s development. Both HUD and housing authority officials told us that the residents do not trust the responsible parties, both because promises made to residents by the housing authority’s former management have not been kept and because residents view the revised revitalization proposal as a land grab by the housing authority, the city, and the developers.
For example, the housing authority’s former chairman promised residents that no relocation and no demolition would take place at two of the buildings until replacement housing had been built on land currently belonging to the housing authority. However, under the revised plan, additional buildings are to be demolished and residents are to be relocated to surrounding neighborhoods. As a result, in October 1996, the local advisory council at Cabrini-Green filed a lawsuit against the housing authority and the city. First, the council claimed that residents had not been adequately consulted on the development of the new plan, which increased the number of units to be demolished; second, it asked that relocation be halted in accordance with commitments made by the housing authority’s former chairman; and third, it asked that demolition be halted. According to the housing authority, the court has ruled that relocation may proceed because the existing buildings are in such poor condition, and a trial is scheduled for June 1998. In addition, the housing authority spent several months obtaining approval from a federal judge to acquire approximately 250 replacement units at other sites. Finally, because of their complexity, the development proposals and land transfer agreements have taken time, both for the stakeholders to develop and submit and for HUD to review and approve. For example, the housing authority is finalizing land transfer agreements with the Chicago Board of Education and the Chicago Park District. It has procured surveys and appraisals and has submitted a disposition application to HUD for approval. According to the housing authority, because of their complexity and ambitiousness, the HOPE VI development and the Near North Side Neighborhood Revitalization Initiative will take a long time to implement.
Four buildings at Cabrini Homes Extension, formerly containing 398 units, have been demolished, and three additional buildings with 327 units have been vacated. HUD has approved the housing authority’s application to demolish two of the vacated buildings and is reviewing the other application. The housing authority has relocated 230 families. Private developments are under construction on adjacent property, and some units have been completed. The housing authority has purchased two town house units at one of the developments and has relocated two Cabrini families in this replacement housing. The housing authority has also finished screening Cabrini families eligible to occupy 16 units at another development. Community and support service programs are ongoing, and the housing authority is tracking training and employment statistics for residents. For example, the authority reported that, as of October 1997, over 250 Cabrini residents had been placed in jobs through its programs. The housing authority has continued to negotiate with the local advisory council. As figure II.13 shows, Orchard Park received notification in 1995 of its implementation award. The revitalization effort has since moved expeditiously, with extensive demolition and new construction under way. Built in 1942, Orchard Park originally contained 711 units. Its public housing structure kept its residents physically, socially, and economically isolated, effectively preventing them from moving out of the area and discouraging businesses, investors, and service providers from moving in. When planning for the site’s revitalization began, 36 percent of the development’s units were vacant, and 86 percent of the applicants for public housing in Boston were rejecting Orchard Park because of its poor physical condition and reputation for severe crime. HUD awarded the Boston Housing Authority a planning grant for Orchard Park in May 1995.
Four months later, the housing authority received an implementation grant based on the feasibility, sustainability, and probability of the site’s advancing steadily through all of the planned phases. According to a study by a consultant, Orchard Park was selected for its innovative plan to integrate the development with the neighborhood through off-site development, to fill vacant lots with privately owned housing, and to leverage the HOPE VI grant with low-income housing tax credits. The study also noted that the plan has established a way of doing business that could be applied to other HOPE VI projects. The Orchard Park development is scheduled to take place in five phases. During Phase I, families were temporarily relocated while 126 units were rehabilitated. At the beginning of Phase II, eight buildings, containing 246 units, were demolished, and 90 new duplex and town house units are being built. The first units were to be available for occupancy in June 1998, and the remaining units are expected to be completed by December 1998. Eight more buildings, containing 220 units, will be demolished during Phase III, and up to 130 new town house units are to be constructed, starting in 1999 and finishing by the end of 2000. A public elementary school will also be built as part of this phase. The school will include community space to serve Orchard Park. During Phase IV, up to 140 rental units will be constructed, starting in July 1998, bringing the total number of rental units—whether rehabilitated or newly constructed—to about 486. Finally, during Phase V, up to 50 new homes will be built. Phases IV and V are both off-site, that is, on scattered sites in the immediate vicinity of the development. The off-site construction is expected to be completed by December 2000.
The primary sources of funding for these revitalization efforts are as follows: $20.4 million from the HOPE VI grant, about $24 million from the Comprehensive Improvement Assistance Program, $36.7 million from low-income housing tax credits, $9 million from the Comprehensive Grant Program, and $2.2 million in infrastructure work from the city of Boston. The Boston Housing Authority has planned a community and support service program for both Orchard Park and Mission Main, another HOPE VI project located less than 2 miles from Orchard Park. The goal of this program is to integrate the developments’ residents into the surrounding area’s mainstream service network. According to the housing authority’s plan, HOPE VI funds will be used to fill gaps in services, not to duplicate existing services. The program will address the long-standing issues of poverty, joblessness, and isolation affecting the residents of Orchard Park and Mission Main. The Boston Housing Authority has not yet entered into any contracts for community and support services. It has begun to identify partners in the community and is planning to hire an independent contractor to measure the effectiveness of its plan for community and support services. According to a housing authority official, the housing authority is currently responding to comments that it received from HUD on a 6-month plan for increasing residents’ self-sufficiency. Orchard Park’s success reflects close collaboration from the beginning among the residents, the housing authority, the developers, and the city of Boston. Housing authority staff developed a close working relationship with the residents during Phase I of the development, when, starting in January 1995, 126 units were rehabilitated, primarily with funds from the Comprehensive Improvement Assistance Program.
When the housing authority followed up on HUD’s suggestion that the HOPE VI revitalization plan for Orchard Park include demolition and leveraging with private developers, the residents were willing to listen. Housing authority officials believe the goodwill created with the success of the earlier rehabilitation was the reason for the positive working relations. Housing authority staff spent considerable time with the residents and encouraged their comments at each stage of the development. The residents understood, however, that the housing authority had the final say in all matters. Although some tenants used Section 8 certificates to relocate, many moved to vacant units within the Orchard Park complex. Units were available because the Boston Housing Authority had closed the waiting list at the complex well before rehabilitation was to start. All residents, including those who relocated, may return to the site when construction has been completed. Orchard Park’s development is proceeding on schedule. Construction of the 90 town houses in Phase II is well under way. As figure II.15 shows, the revitalization of Kennedy Brothers Memorial Apartments began with a planning grant in 1994 and has since progressed through extensive demolition, rehabilitation, and new construction. The revitalization includes plans to construct homes for sale on newly purchased property adjoining the original site. Kennedy Brothers Memorial was built in 1973 and contained 364 units before the revitalization effort started. When chosen for HOPE VI, the complex had been voted the “worst” of 30 distressed public housing sites in El Paso by public housing residents citywide. Major problems identified at the site were crime, drugs, and gangs. The complex was in a residential neighborhood but was isolated by a stone wall that encircled it and cut off access to neighboring streets. 
The site was very crowded, and the street patterns and wall attracted drug smugglers entering the country at a Mexican border crossing a quarter of a mile away. The Housing Authority of the City of El Paso received a $500,000 planning grant in November 1993. Its revitalization plan was developed through an inclusionary planning process that brought together residents, neighbors from the surrounding community, service providers, and community businesses. Preliminary architectural studies were performed and a series of planning meetings were held to evaluate options and develop the final physical revitalization plan. The housing authority was selected to receive a $36.2 million implementation grant in 1995, after the Congress directed HUD to award implementation grants to all jurisdictions that had received planning grants in 1993 or 1994. Housing authority officials believe that starting with a planning grant was a key factor in their ability to proceed at a rapid rate. Early agreement on the site plan by all the parties involved, the residents’ trust of housing authority officials, and the desire of residents to qualify for homeownership have also contributed to the successes to date. The parties to the planning process agreed that a reconfiguration of the site, coupled with demolition to reduce density, was needed to improve security and stop gang-related activities. The development’s main thoroughfare was cut into two streets that ended in cul-de-sacs on either side of a central community park to discourage the thoroughfare’s use as an escape route for drug smugglers. The 8- to 10-foot stone wall was also scheduled for demolition so that the development could be integrated with the neighborhood. Fenced backyards were planned for each unit, and a community center was designed as the site’s focal point, providing residents with day care, job and computer literacy training, and economic development programs. 
According to a 1995 HUD-contracted study, the early success at Kennedy Brothers Memorial was owing, in large part, to the housing authority’s decision to contract with a project manager who focuses exclusively on HOPE VI developments. The excellent rapport that the project manager and the housing authority’s executive director, management director, and site manager developed with the residents at the beginning of the planning process also promoted agreement on key issues. At Kennedy Brothers Memorial, residents’ attitudes appear to have changed with the revitalization. According to housing authority and resident council officials, the residents are taking pride in the revitalized site, keeping it virtually graffiti-free. These officials also acknowledge that HUD’s “one strike and you’re out” policy, in place since 1996, has also deterred vandalism. During our visit, we talked with residents excited about the new training and staff development programs and, in particular, about the possibility of qualifying to purchase one of the 50 homes in the homeownership program. Capital improvements are progressing on schedule. Recently, 240 units were rehabilitated, and many of the former residents have moved into the revitalized units. Contractors are now being selected to construct 124 replacement housing units and to construct the 50 homes included in the development, scheduled for completion in the first half of 1999. The community center is under construction and is scheduled for completion in July 1998. As figure II.17 shows, the Chicago Housing Authority was awarded a $25 million HOPE VI implementation grant in fiscal year 1996 for Robert Taylor Homes B. The housing authority submitted its revitalization plan for the site in January 1998 and is awaiting HUD’s approval. Demolition started in May 1998, and the project’s completion is expected in 2006. 
Chicago’s State Street Corridor, a 4-mile stretch of public housing, is the nation’s largest, most densely populated public housing enclave, consisting of 8,215 units concentrated in five developments. Robert Taylor Homes is one of the developments; it is divided into two subdevelopments, A and B, which together contain over 4,300 units in 28 detached, 16-story buildings. Robert Taylor Homes B is a mile-long, 74-acre site, built between 1959 and 1963 and consisting of 2,400 units in 16 high-rise buildings. The development contains some of the poorest census tracts in America. The community surrounding Robert Taylor has lost more than half of its population since the development was built, reportedly because former residents were afraid of living near the crime-plagued Robert Taylor Homes and other public housing in the State Street Corridor. The property’s buildings are poorly designed, and their severely deteriorated major systems chronically fail. Inadequate security systems and open-air galleries on each floor create opportunities for crime. Obsolete heating and electrical systems, weather-damaged elevator equipment, deteriorated hot water tanks, and inadequate sanitary waste lines regularly fail, exposing the housing authority to extraordinary costs to restore and maintain buildings that do not meet minimal standards of habitability. Vacancy rates in the buildings averaged 33 percent. The development is known not only for its concentrated poverty but also for gang-related criminal activities. According to the revitalization plan, before community escorts began to walk children from the development to school, parents kept the children at home for fear that they would be shot in the cross fire of gang warfare. The Chicago Housing Authority received a $25 million implementation grant in fiscal year 1996 for Robert Taylor Homes B.
The plan is to vacate and demolish five buildings (761 units) and purchase or build 251 replacement units off-site in the surrounding community. These replacement units are to be dispersed among and indistinguishable from market-rate housing in the community. The housing authority plans to, first, acquire existing units; second, rehabilitate existing units; and third, if necessary, construct new replacement units. The housing authority has also earmarked HOPE VI funds to provide community and support services to residents relocated from Robert Taylor Homes B. Such services include family transition assistance, job and computer training, social services, and other assistance as needed to help integrate the residents into their new surroundings. The HOPE VI grant is the first phase of a long-term revitalization plan that the Chicago Housing Authority is developing in conjunction with the city of Chicago. Ultimately, the housing authority intends to demolish the Robert Taylor high-rises and build a light industrial park on the vacated land. The housing authority plans to apply for future HOPE VI grants and leverage funds from private developers. According to the housing authority’s estimates, it will take 10 years to vacate Robert Taylor and even longer to revitalize the community.

Factors Contributing to Success and Delays

Progress in revitalizing Robert Taylor Homes B will be limited until HUD approves the housing authority’s plan. Moreover, according to the housing authority’s executive director, implementation will take a long time, given the complexity of the redevelopment plan.
The housing authority is appealing a judgment order on its emergency motion to exempt HOPE VI projects from an earlier court ruling limiting its development efforts in census tracts with comparatively high percentages of minority residents. Because the intent of the earlier ruling is to prevent the housing authority from concentrating low-income households in certain census tracts, the housing authority is basing its request for an exemption on the premise that the funds are to be used to redevelop troubled public housing sites and neighborhoods, not to increase the number of low-income households. According to the housing authority, if the appeal is not successful, the schedule for redevelopment will be lengthened. Obstacles to relocating large numbers of families from Robert Taylor Homes B could also delay development. For example, the revitalization plan states that the general community may oppose relocation because it perceives that residents of Robert Taylor are likely to be involved in gangs, violence, and drug trafficking. Problems with crossing gang lines could endanger families, especially those with teenage children, and could also present obstacles to relocation. One factor that may facilitate relocation is the housing authority’s work with the local advisory council of residents at Robert Taylor Homes B. For example, the housing authority involved the council in its planning process and gained the council’s support for its revitalization plan. The housing authority has selected the first three of the five buildings that HUD has approved for demolition at Robert Taylor Homes B. According to the revitalization plan, the housing authority is vacating the buildings and relocating the residents in the surrounding community. As of January 1, 1998, approximately 21 percent of the affected families (112 of 522) had been relocated to housing of their choice.
The majority of the residents have chosen Section 8 vouchers for relocation rather than moving to another Robert Taylor building. A centrally located community center is being renovated with funds from the housing authority, the city, and local community groups. The center, which is scheduled to open in the summer of 1998, will house community and support services for the residents of Robert Taylor Homes B and other public housing developments in the community. On May 18, 1998, demolition began on the first building. As figure II.19 shows, progress at Arverne/Edgemere Houses has been slow because the HOPE VI implementation grant was originally awarded to another site—Beach 41st Street Houses. All three sites are in Far Rockaway, a peninsula on the southern edge of Queens, south of Jamaica Bay and Kennedy Airport. Some progress is now being made at the new site. The Arverne and Edgemere public housing sites are across the street from each other and less than a mile from the Beach 41st Street site. The developments are about a 1-hour commute from downtown Manhattan. The economically distressed area lacks the mix of neighborhood services and amenities needed for a thriving, vibrant community. In addition, the high density and current configuration of the buildings have contributed to vandalism and other criminal activity. According to a housing authority report, drive-by shootings and drug trafficking have exacerbated older residents’ fears and distrust of the young people, especially the young men, living at the sites. The Beach 41st Street development was completed in 1970 and had 712 units. It was the first site selected by the New York City Housing Authority to receive a HOPE VI grant because it was among the most economically distressed sites in the city. In addition, HUD officials hoped that the HOPE VI effort would push the city to implement a long-standing urban renewal plan to revitalize the Far Rockaway section. 
The Arverne site, with 418 units, was completed in 1951; the Edgemere site, with 1,395 units, was completed in 1961. Both sites are physically isolated from viable community institutions and resources, such as retail outlets, banks, shopping markets, and churches. Both jobs and public transportation are scarce in the area. The Edgemere site received a $47.7 million HOPE VI implementation grant in December 1996. The funding was transferred from Beach 41st Street Houses after an impasse over the residents’ role in the planning process could not be overcome. The Beach 41st Street residents believed they had veto power over the process. Faced with the possibility of the funds being recaptured, the housing authority requested that HUD allow a transfer of funds to assist with the revitalization of 500 units at Edgemere. The Arverne and Edgemere sites had previously received a $400,000 planning grant in fiscal year 1995. In fiscal year 1996, the two sites received a $20 million implementation grant. Because the housing authority’s revitalization plan considered the Arverne and Edgemere sites as part of the same HOPE VI development, HUD combined the HOPE VI grants for the two sites. In addition to these funds, the revised implementation plan for the two sites assumes about $15.1 million in housing authority funds to cover additional development costs. About $5 million in low-income housing tax credits will also be sought to help construct a separate 120-unit housing complex for the elderly on a nearby parcel of land donated by the city. In addition, the plan assumes that the city will provide about $2.2 million to satisfy the 15-percent matching requirement for support services. The community and support services for Arverne and Edgemere are designed to provide education and social support for residents seeking employment—especially those who are in danger of losing their benefits through welfare reform. 
The housing authority has identified numerous community and support service providers that it expects to contract with once funds become available. For example, the Youth Policy Institute will help residents develop and implement plans for training, employment, and self-sufficiency. Communities in Schools will coordinate educational programs for residents. Using funds from the HOPE VI grant and citywide programs, the housing authority has also developed a plan for creating resident-owned businesses, intended to make meaningful economic development opportunities available to HOPE VI residents. Elements of the plan include providing access to entrepreneurial training and capacity building, providing access to the housing authority’s citywide contractor training and Make Your Own Business programs, and offering ongoing technical assistance and support for residents who have completed the contractor training or business programs. HUD’s fiscal year 1996 decision to require demolition precipitated a series of events that significantly delayed progress. To satisfy this requirement—that at least one building be removed from a site—the Beach 41st Street architect proposed removing several of the top floors from each of the four 13-story buildings. The number of units removed would have been equal to the number in one whole building. According to a housing authority official, this solution to the demolition requirement would have been more expensive than tearing down the buildings completely. Furthermore, the concept of demolition was opposed by the Beach 41st Street site’s resident council, which was concerned about who would be allowed to come back after the demolition. Resident council members also viewed themselves as the housing authority’s partner and believed that they should have veto power over decisions that were being made.
Although HUD, the housing authority, and the residents negotiated for 6 months, they could not reach agreement, and in December 1996, at the request of the housing authority, HUD transferred the HOPE VI funds to Edgemere. The housing authority then included demolition in the plans for Edgemere’s redevelopment, even though New York City’s political officials were against the concept. The housing authority determined that the best way to meet the demolition requirement would be to remove some top floors from each of three nine-story buildings, thereby eliminating about 100 units. Subsequently, the housing authority withdrew this plan and proposed to convert dwelling units on the first floor to create space for commercial and community services. This approach would also have removed about 100 units. The issue became moot when the Congress, in the fiscal year 1998 appropriations act for the departments of Veterans Affairs and Housing and Urban Development and independent agencies, gave the New York City Housing Authority the option of not following any HOPE VI demolition requirements and the housing authority abandoned the plans for demolishing the 100 units. Subsequently, the housing authority proposed removing 32 units from 8 buildings to make room for interior stairwells in order to meet the city’s fire code. In June 1997, the housing authority submitted its revitalization plan to HUD. HUD returned the plan with comments for the housing authority to address. In February 1998, the housing authority submitted a revised plan that HUD expects to approve. Both housing authority and HUD officials believe that it will take at least 18 months to hire an architect and a developer before any rehabilitation work can start. The housing authority can begin to implement the community and support service plan after the revitalization plan receives HUD’s final approval. As of June 1998, HUD had not approved the plan. 
The following are GAO's comments on the Department of Housing and Urban Development's letter dated June 19, 1998. 1. We revised our report accordingly. 2. We revised our report to indicate that 11 positions are being restored. The HUD official quoted in our draft report told us that even if the Department hired highly experienced employees, it would take a number of months to train them in the details of underwriting HOPE VI sites. This official and others we spoke with emphasized that regardless of the experience and competency of the individuals hired, it takes time to learn the policies and procedures involved in structuring public/private financing. We removed the statement in the body of the draft report that gaining such expertise could take a year, but we continue to believe that it will take time. 3. We agree that the HOPE VI sites are unique and that the program should not be constrained in ways that would inhibit creativity. Yet even though the needs of the communities and residents vary by site, the types of community and support service programs offered at the sites we visited (e.g., day care, after-school care, equivalency degree, job training, and job placement programs) were consistent enough to allow the collection of national data on the outcomes of these programs. Accordingly, we have retained our recommendation to this effect. 4. No change required. 5. We revised our report to state that the more recently selected sites are smaller and have greater potential for leveraging than the original sites. We agree that the recently selected sites are suffering from structural and social distress and are likely to be among the most distressed sites in the chosen cities. 
But unlike some of the early sites, which have not been able to attract leveraging partners, the sites chosen since the criteria for participation in the program were expanded in 1996 typically have been smaller and have been located in areas where private interests are more willing to contribute funding. We revised our report to clarify this point. 6. We were aware of the study conducted by HUD's Office of Policy Development and Research and summarized the results of its first phase in our February 1997 report. We added information to this report on the results of the first phase of this study and noted that the second phase is expected to begin in the summer of 1998.
Pursuant to a legislative requirement, GAO reviewed the: (1) progress in completing capital improvements and implementing community and support services at HOPE VI sites; (2) primary reasons why progress at some HOPE VI sites has been slow; (3) extent to which financial leveraging is used at HOPE VI sites; and (4) Department of Housing and Urban Development's (HUD) capacity to oversee the program. GAO noted that: (1) progress in completing capital improvements and implementing community and support services varies at HOPE VI sites; (2) overall, the rate of spending on capital improvements is increasing, but the vast majority of the grant funds remain to be disbursed; (3) although housing authorities could spend up to 20 percent of the grant funds awarded in fiscal years 1993 through 1996 for community and support services to help residents find jobs and become self-sufficient, the average expenditure was about 12 percent; (4) to track the progress of capital improvements and community and support services, HUD has established measures of performance for capital improvements and has hired a contractor to collect baseline data on community and support services; (5) at the HOPE VI sites visited, progress in implementing capital improvements and community and support services has varied with structural, social, and management issues specific to each site; (6) legal issues covering the preparation of grant agreements, legislative and administrative changes in unit replacement and demolition policies, and limited HUD staffing have also delayed progress at HOPE VI sites; (7) more complex redevelopment plans have created major opposition among groups of residents at several sites and produced delays; (8) using HOPE VI grants to leverage funding from public and private sources has introduced time-consuming requirements for coordinating the different sources' procedures and schedules; (9) financial leveraging has increased over time, and this trend is expected to continue; 
(10) a 1998 HUD policy limiting a property's total development costs to industry averages is also expected to encourage leveraging; (11) because HOPE VI developments are more complex and costlier than most multi-family housing developments, the new policy will require the use of leveraging in the future; (12) reorganizing and downsizing have left HUD with fewer resources for overseeing HOPE VI grants; (13) streamlining has also left few employees in the field with knowledge of HOPE VI issues; (14) HUD has hired contractors to provide some additional oversight and has restored 11 positions to the HOPE VI program; and (15) although these additions will offset some of the staffing cuts, the new staff will need time to acquire expertise in the program.
From fiscal years 2008 through 2011, the typical participant in the IL track was a male Vietnam-era veteran. Of the 9,215 veterans who entered the IL track in these years, most (67 percent) were male and 50 years old or older. Most women in the IL track were in their 40s or 50s. Most of the 9,215 IL track veterans served in the Vietnam War; relatively few served in the Global War on Terrorism as part of Operation Enduring Freedom or Operation Iraqi Freedom. In addition, most (60 percent) IL track veterans served in the U.S. Army, and less than 1 percent served in the National Guard or Reserves. More than three-quarters of IL track veterans had a combined service-connected disability rating of at least 60 percent, and 34 percent had a disability rating of 100 percent. Regardless of disability rating level, the most prevalent disabilities among this group were post-traumatic stress disorder (PTSD), tinnitus ("ringing in the ears"), and hearing loss. Furthermore, our review of the case files of 182 randomly selected IL track veterans in fiscal year 2008 shows that they were provided a wide range of goods and services, from individual counseling and the installation of ramps to a boat, camping gear, and computers. The most common types of goods or services were related to counseling, education and training, and computer and camera equipment. For all veterans who entered the IL track in fiscal year 2008, we estimated that VR&E purchased a total of almost $14 million in goods and services. The average spent per IL track case that year was nearly $6,000. We found that most (about 89 percent) of IL track veterans who began only one plan during fiscal year 2008 were classified by VR&E as "rehabilitated"—i.e., successfully reaching and maintaining the goals identified in their IL plan—by the end of fiscal year 2011. 
At the same time, about 11 percent of cases were either "discontinued"—i.e., closed by VR&E because the rehabilitation goals in the veteran's IL plan were not completed—or were still active cases. Of the IL cases that had been discontinued, the reasons included the veteran declining benefits, not responding to VA's attempts to contact them, worsening medical conditions, and death. We also found that some IL plans were easier to close as rehabilitated than others, due to the varied nature and complexity of IL plans, which are based on veterans' individual disabilities and needs. For example, one IL plan we reviewed for a veteran with rheumatoid arthritis only called for the purchase and installation of eight door levers and a grab rail for the bathtub to facilitate his independence. However, another IL plan we reviewed called for providing a veteran who used a wheelchair with medical, dental, and vision care as needed, and about $24,000 in modifications to the veteran's home, including modifying the veteran's bathroom, widening doors and modifying thresholds, and installing an emergency exit ramp in a bedroom. While the overall IL rehabilitation rate nationwide was 89 percent for veterans who started an IL plan in fiscal year 2008, the rate varied by regional office, from 49 to 100 percent. About two-thirds of regional offices rehabilitated 80 percent or more of their 2008 IL track veterans by the end of fiscal year 2011. In addition, VR&E's IL rehabilitation rate was higher in regional offices with larger IL caseloads. Among veterans who entered the IL track in fiscal year 2008, an average of 90 percent were rehabilitated at offices with more than 25 IL entrants, compared to an average of 79 percent at offices with 25 or fewer IL entrants. 
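The grouping behind these comparisons—rehabilitation rates computed per regional office, and the share of cases closed within a time window—can be sketched in a few lines. The case records below are invented for illustration; they are not GAO's administrative data, and the office names are hypothetical.

```python
from statistics import mean

# Hypothetical IL case records: (regional_office, days_to_complete, rehabilitated).
# Invented for illustration only.
cases = [
    ("A", 200, True), ("A", 380, True), ("A", 500, True), ("A", 900, False),
    ("B", 150, True), ("B", 300, True),
    ("C", 850, True), ("C", 700, False), ("C", 640, True),
]

def rates_by_office(records, two_year_days=730):
    """Per-office rehabilitation rate and share rehabilitated within 2 years."""
    by_office = {}
    for office, days, rehab in records:
        by_office.setdefault(office, []).append((days, rehab))
    return {
        office: {
            "rehab_rate": mean(1.0 if r else 0.0 for _, r in rows),
            "within_2yr": mean(1.0 if (r and d <= two_year_days) else 0.0
                               for d, r in rows),
        }
        for office, rows in by_office.items()
    }

stats = rates_by_office(cases)
print(stats["A"])  # office A: 3 of 4 cases rehabilitated, all within 730 days
```

GAO's actual analysis used a statistical model to control for factors beyond office size; this sketch shows only the raw grouping.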
Furthermore, in fiscal year 2008 IL veterans nationwide completed their IL plans in an average of 384 days (about 13 months); however, we found that the length of time to rehabilitate these veterans varied by regional office, from a low of 150 days at the St. Paul Regional Office to a high of 895 days at the Roanoke Regional Office. At most regional offices (49 of 53), however, the average number of days to complete veterans' IL plans ranged from 226 to 621 days (8 to 21 months). To control for various factors that could influence rehabilitation time frames, we used a statistical model to estimate the amount of time it would take certain groups of IL track veterans to complete their IL plans. The results of our model show differences across regional offices in the amount of time it takes for veterans to become rehabilitated based on caseload. More specifically, the chance of rehabilitation within 2 years was less than 50 percent at 4 offices, between 50 and 90 percent at 18 offices, and 90 percent or higher at 16 offices. Veterans served by regional offices with large IL caseloads generally had a higher probability of completing an IL plan more quickly than a veteran served by an office with a small IL caseload (see fig. 1). We identified four key areas where VR&E's oversight of the IL track was limited: (1) ensuring compliance with case management requirements, (2) monitoring regional variation in IL track caseload and benefits provided, (3) adequacy of policies and procedures for approving expenditures on goods and services for IL track veterans, and (4) availability of critical program management information. Certain VR&E case management requirements were not being met by some regional offices. 
For example, based on our review of VR&E's site visit monitoring reports, we found that some Vocational Rehabilitation Counselors (VRCs) were not fulfilling VR&E's requirement to meet in person each month with IL track veterans to monitor progress in completing their IL plans. VRCs told us that this requirement is a challenge due to the size of their caseloads and the distances that they may have to travel to meet with veterans. Furthermore, while VR&E and the Veterans Health Administration (VHA) both have policies that require them to coordinate on the provision of goods and services for IL track veterans, we found that some VRCs experience challenges in doing so. Several VRCs in the regions we interviewed indicated that when they refer IL track cases to VHA physicians, the physicians do not respond or they respond too late. As a result, services for IL track veterans are delayed or purchased by VR&E instead of VHA. In our review of 182 IL track case records, we found some instances where VR&E purchased goods and services that appear to be medically related, such as ramps and grab bars, which could have been provided by VHA. In response, we recommended VA explore options for enhancing coordination to ensure IL track veterans' needs are met by VHA, when appropriate, in a timely manner. VA concurred and stated that it was piloting an automated referral system that would allow VR&E staff to make referrals to VHA providers and check on their status electronically. VR&E does not systematically monitor variation in IL track caseload size and benefits across its regional offices. We found that the total IL track caseload for fiscal years 2008 through 2011 ranged from over 900 cases in the Montgomery, Alabama Regional Office to 4 cases in the Wilmington, Delaware Regional Office. 
In addition, we found that some regions developed IL plans that addressed a broad range of needs while others elected to develop more focused plans that provided fewer benefits to achieve VR&E’s rehabilitation goal. VR&E has relied on the information provided through its general quality assurance (QA) activities and a series of periodic ad hoc studies to oversee the administration of the IL track. Because these activities are limited in scope, frequency, and how the information is used, we noted that they may not ensure consistent administration of the IL track across regions. In response, VR&E officials commented that QA results are analyzed to determine trends, and make decisions about training content and frequency. VR&E’s current policy for approving IL track expenditures may not be adequate, considering the broad discretion VR&E provides to regions in determining and purchasing goods and services. While officials told us that VRCs are required to include all cost estimates when they submit veterans’ IL plans to be reviewed and approved by the region’s VR&E Officer, VR&E’s written policy and guidance do not explicitly require this for all IL expenditures. Thus, regional offices have the ability to purchase a broad range of items without any Central Office approval, resulting in some offices purchasing goods and services that may be questionable or costly. (See table 1 for the level of approval required for IL expenditures.) In one case we reviewed, VR&E Central Office approval was not required for the purchase of a boat, motor, trailer, and the boat’s shipping cost, among other items, totaling about $17,500. In another case we reviewed, VR&E Central Office was not required to approve total expenditures of $18,829 for a riding lawn mower—which VR&E’s current policy prohibits—and other IL goods and services including a bed, bed frame, desktop computer, and woodworking equipment. 
Without appropriate approval levels, VR&E's IL track may be vulnerable to potential fraud, waste, and abuse. In our report, we recommended that VA reassess and consider enhancing its current policy concerning the required level of approval for IL track expenditures. VA concurred with our recommendation and said it will use the results of an internal study to determine if changes are needed to its existing cost-review policies or procedures. VA stated that any necessary changes should be implemented by March 2014. VR&E's case management system—commonly referred to as "CWINRS"—does not collect or report critical program management information that would help the agency in its oversight responsibilities. More specifically, this system does not collect and maintain information on: Costs of IL goods and services purchased: The system does not collect information on the total amount of funds VR&E expends on IL benefits. VR&E aggregates costs across all its tracks, despite VA's managerial cost accounting policies that require the costs of products and services to be captured for management purposes. Federal financial accounting standards also recommend that costs of programs be measured and reported. According to VA officials, cost information is not collected on the IL track alone because they view the five tracks within VR&E as a single program with the same overarching goal—to help veterans achieve their employment goals. We previously reported on this issue in 2009. At that time, we found that VR&E's five tracks do not share the same overarching goal. Therefore, we concluded that VR&E should not combine track information. Types of IL benefits provided: The system does not collect information on the types of IL benefits provided to veterans in a standardized manner that can be easily aggregated and analyzed for oversight purposes. 
In several of the IL track cases we reviewed, the goods and services purchased were grouped together under a general description, such as "IL equipment" or "IL supplies," without any further details. In addition, we found that controls for data entry were not adequate to ensure that all important data were recorded. For example, we estimated that the service provider field was either missing or unclear for one or more services in about 15 percent of all IL cases that began in fiscal year 2008. Number of IL veterans served: The system does not provide VR&E with the information it needs to monitor its statutory entrant cap and program operations. The law allows VR&E to initiate "programs" of independent living services and assistance for no more than a specified number of veterans each year, which, as of 2012, was set at 2,700. In analyzing VR&E's administrative data, we found that VR&E counts the number of IL plans developed annually rather than the number of individual veterans admitted to the track. Because multiple IL plans can be developed for an individual veteran during the same fiscal year, veterans with multiple plans may be counted more than once toward the statutory cap. As a result, VR&E lacks complete information on the number of veterans it is serving through the IL track at any given time—information it could use to better manage staff, workloads, and program resources, and ensure that it can effectively manage its cap. Similar to our report's findings, VR&E's 2012 evaluation of CWINRS has shown that the system limits VR&E's oversight abilities and does not capture all important data elements to support the agency's "evolving business needs." Officials told us that they plan to modify CWINRS, and that the new system modifications will enable them to individually track veterans served through the IL track. However, we found that the CWINRS redesign will not enable VR&E to obtain data on IL track expenditures or the types of goods and services provided. 
At the time of our review, no specific time frames were provided for the CWINRS redesign, but officials noted it could take up to 3 years to obtain funding for this effort. In our report, we recommended that VA implement an oversight approach that enables VR&E to better ensure consistent administration of the IL track across regions. This approach would include ensuring that CWINRS (1) tracks the types of goods and services provided and their costs, (2) accounts for the number of IL track veterans being served, and (3) contains stronger data entry controls. VA concurred with our recommendation and stated that discussions of system enhancements and the development of ad hoc reports are ongoing. The agency also will be considering a new oversight approach as part of an internal study. In conclusion, strengthening oversight of VR&E's IL track is imperative given the wide range of goods and services that can be provided under the law to help veterans with service-connected disabilities improve their ability to live independently when employment is not feasible. More attention at the national level can help ensure that IL track case management requirements are met, the track is administered consistently across regions, expenditures for goods and services are appropriate, and critical information is collected and used to ensure veterans' IL needs are sufficiently addressed. Chairman Flores, Ranking Member Takano, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact Daniel Bertoni at (202) 512-7215, or at bertonid@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals who made key contributions to this testimony include Clarita Mrena (Assistant Director), James Bennett, David Chrisinger, David Forgosh, Mitch Karpman, Sheila McCoy, James Rebbe, Martin Scire, Ryan Siegel, Almeta Spencer, Jeff Tessin, Jack Warner, and Ashanta Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Of the 9,215 veterans who entered the Department of Veterans Affairs' (VA) Independent Living (IL) track within the Vocational Rehabilitation and Employment (VR&E) program from fiscal years 2008 to 2011, most were male Vietnam-era veterans in their 50s or 60s. The most prevalent disabilities among these veterans were post-traumatic stress disorder and tinnitus ("ringing in the ears"). GAO's review of 182 IL cases from fiscal year 2008 shows that VR&E provided a range of IL benefits to veterans; the most common were counseling services and computers. Less common benefits included gym memberships, camping equipment, and a boat. GAO estimates that VR&E spent nearly $14 million on benefits for veterans entering the IL track in fiscal year 2008--an average of almost $6,000 per IL veteran. About 89 percent of fiscal year 2008 IL veterans were considered by VR&E to be "rehabilitated" by the end of fiscal year 2011; that is, generally, to have completed their IL plans. These plans identify each veteran's independent living goals and the benefits VR&E will provide. The remaining 11 percent of cases were either closed for various reasons, such as the veteran declined benefits, or were still active. Rehabilitation rates across regions varied from 49 to 100 percent, and regions with larger IL caseloads generally rehabilitated a greater percentage of IL veterans. On average, IL plans nationwide were completed in 384 days; however, completion times varied by region, from 150 to 895 days. GAO identified four key areas where VR&E's oversight was limited. First, some regions may not be complying with certain case management requirements. For instance, while VR&E is required to coordinate with the Veterans Health Administration (VHA) on IL benefits, VR&E counselors have difficulty obtaining timely responses from VHA. This has resulted in delayed benefits or VR&E providing the benefits instead of VHA. 
Second, VR&E does not systematically monitor regional variation in IL caseloads and benefits provided. Instead, it has relied on its quality assurance reviews and ad hoc studies, but these are limited in scope. Third, VR&E's policies for approving IL expenditures may not be appropriate as regions were permitted to purchase a range of items without Central Office approval, some of which were costly or questionable. In one case GAO reviewed, Central Office review was not required for expenditures of $17,500 for a boat, motor, trailer, and the boat's shipping, among other items. Finally, VR&E's case management system does not collect information on IL costs and the types of benefits purchased. VR&E also lacks accurate data on the number of IL veterans served. While the law currently allows up to 2,700 veterans to enter the IL track annually, data used to monitor the cap are based on the number of IL plans developed, not on the number of individual veterans admitted. Since veterans can have more than one IL plan in a fiscal year, one veteran could be counted multiple times towards the cap. VA plans to make modifications to its case management system to address this, but officials noted that it could take up to 3 years to obtain funding for this project. The IL track--one of five tracks within VA's VR&E program--provides a range of non-employment related benefits to help veterans with service-connected disabilities live more independently when employment is not considered feasible at the time they enter the VR&E program. These benefits can include counseling, assistive devices, and other services or equipment. 
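The cap-counting flaw described above—tallying IL plans rather than distinct veterans—can be illustrated with a short sketch. The plan records and veteran IDs below are hypothetical, invented only to show how plan counts can overstate the number of veterans entering in a year.

```python
# Hypothetical plan records: (veteran_id, fiscal_year). A veteran with more
# than one plan in the same fiscal year inflates a plan-based count.
plans = [
    ("V001", 2008), ("V001", 2008),  # one veteran with two plans in FY2008
    ("V002", 2008),
    ("V003", 2008),
]

def plans_toward_cap(records, fiscal_year):
    """Count plans developed in a year, as VR&E's data did."""
    return sum(1 for _, year in records if year == fiscal_year)

def veterans_toward_cap(records, fiscal_year):
    """Count distinct veterans admitted in a year, as the statutory cap intends."""
    return len({vet for vet, year in records if year == fiscal_year})

print(plans_toward_cap(plans, 2008))     # 4 plans counted toward the cap
print(veterans_toward_cap(plans, 2008))  # but only 3 distinct veterans entered
```

Deduplicating on a veteran identifier, as in the second function, is the kind of change the planned CWINRS modifications would need to support.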
This testimony is based on GAO's report issued in June 2013, and describes (1) the characteristics of veterans in the IL track, and the types and costs of benefits provided; (2) the extent to which their IL plans were completed, and the time it took to complete them; and (3) the extent to which the IL track has been administered appropriately and consistently across regional offices. GAO analyzed VA administrative data from fiscal years 2008 to 2011, and reviewed a random, generalizable sample of 182 veterans who entered the IL track in fiscal year 2008. In addition, GAO visited five VA regional offices; interviewed agency officials and staff; and reviewed relevant federal laws, regulations, and agency policies, procedures, studies, and other documentation. In its June 2013 report, GAO recommended that VR&E explore options to enhance coordination with VHA, strengthen its oversight of the IL track, and reassess its policy for approving benefits. VA agreed with these recommendations.
FHA was established in 1934 under the National Housing Act (P.L. 73-479) to broaden homeownership, shore up and protect lending institutions, and stimulate employment in the building industry. FHA insures private lenders against losses on mortgages that finance purchases of properties with one to four housing units. Many FHA-insured loans are made to low-income, minority, and first-time homebuyers. Generally, borrowers are required to purchase single-family mortgage insurance when the value of the mortgage is large relative to the price of the house. FHA, the Department of Veterans Affairs, and private mortgage insurers provide virtually all of this insurance. In recent years private mortgage insurers and conventional mortgage lenders have begun to offer alternatives to borrowers who want to make little or no down payment. FHA provides most of its single-family insurance through a program supported by the Mutual Mortgage Insurance Fund. The Fund is organized as a mutual insurance fund in that any income received in excess of the amounts required to cover initial insuring costs, operating expenses, and losses due to claims may be paid to borrowers in the form of distributive shares after they pay their mortgages in full or voluntarily terminate their FHA insurance. The economic value of the Fund depends on the relative sizes of cash outflows and inflows over time. Cash flows out of the Fund from payments associated with claims on foreclosed properties, refunds of up-front premiums on mortgages that are prepaid, and administrative expenses for management of the program (see fig. 1). To cover these outflows, FHA deposits cash inflows—up-front and annual insurance premiums from participating homebuyers and net proceeds from the sale of foreclosed properties—into the Fund. If the Fund were to be exhausted, the U.S. Treasury would have to cover lenders' claims and administrative costs directly. 
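The dependence of the Fund's value on these offsetting cash flows can be sketched as a present-value calculation. All figures and the discount rate below are invented for illustration; GAO's actual estimates come from a detailed economic model of FHA's home loan program, not from arithmetic this simple.

```python
# Hedged sketch: a fund's economic value as capital resources on hand plus the
# net present value of projected future cash flows (inflows such as premiums
# and property sale proceeds, minus outflows such as claims, premium refunds,
# and administrative expenses). All numbers are hypothetical, in billions.
def npv(net_cash_flows, rate):
    """Discount a list of future annual net cash flows to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(net_cash_flows, start=1))

capital_resources = 9.0                    # hypothetical cash and investments
future_net_flows = [1.2, 1.1, 1.0, 0.9]    # hypothetical inflows minus outflows
economic_value = capital_resources + npv(future_net_flows, rate=0.05)
print(round(economic_value, 2))            # prints 12.74
```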
The Fund remained relatively healthy from its inception until the 1980s, when losses were substantial, primarily because of high foreclosure rates in regions experiencing economic stress, particularly the oil-producing states in the west south central section of the United States. These losses prompted the reforms that were first enacted in November 1990 as part of the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508). The reforms that were designed to place the Fund on an actuarially sound basis required the Secretary of HUD to take steps to ensure that the Fund attains a capital ratio of 2 percent of the insurance-in-force by November 2000 and maintains that ratio at a minimum at all times thereafter; an independent contractor to conduct an annual actuarial review of the Fund; the Secretary of HUD to suspend the payment of distributive shares, which had been paid continuously from 1943 to 1990, until the Fund is actuarially sound; and FHA borrowers to pay more in insurance premiums over the life of their loans by adding a risk-adjusted annual premium to the one-time, up-front premium. The Federal Credit Reform Act of 1990, enacted as part of the Omnibus Budget Reconciliation Act of 1990, also reformed budgeting methods for federal credit programs including FHA's mutual insurance program. The 1990 credit reforms were intended to ensure that the full cost of credit activities for the current budget year would be reflected in the federal budget so that the executive branch and the Congress could consider these costs when making annual budget decisions. As a result, FHA's budget is required to reflect the subsidy cost to the government—the estimated long-term cost calculated on a net present value basis—of FHA's loan insurance activities for that year. During the 1990s, the estimated economic value of the Fund—comprised of capital resources and the net present value of future cash flows—grew substantially. 
As figure 2 shows, by the end of fiscal year 1995, the Fund had attained an estimated economic value that slightly exceeded the amount required for a 2-percent capital ratio. Since that time, the estimated economic value of the Fund has continued to grow and has always exceeded the amount required for a 2-percent capital ratio. As a result of the 1990 housing reforms, the Fund must not only meet capital ratio requirements, but it must also achieve actuarial soundness; that is, the Fund must contain sufficient reserves and funding to cover estimated future losses resulting from the payment of claims on foreclosed mortgages and administrative costs. However, neither the legislation nor the actuarial profession defines actuarial soundness. Price Waterhouse (now PricewaterhouseCoopers) in 1989 concluded that for the Fund to be actuarially sound, it should have capital resources that could withstand losses from reasonably adverse, but not catastrophic, economic downturns. The Price Waterhouse report did not clearly distinguish adverse from catastrophic downturns; however, it noted that private mortgage insurers are required to hold contingency reserves to protect against catastrophic losses. In turn, rating agencies require that private mortgage insurers have enough capital on hand to withstand severe losses that would occur if loans they insure across the entire nation had losses similar to those experienced in the west south central states in the 1980s. Because economic downturns put downward pressure on house prices and incomes, they can stress FHA's ability to meet its obligations. Thus, it is reasonable that measures of the financial soundness of the Fund would be based on tests of the Fund's ability to withstand recent recessions or regional economic downturns. In the last 25 years, we have experienced a national recession and regional economic declines that did or could have placed stress on FHA. 
For example, the nation experienced a recession in 1981 and 1982 that strained mortgage markets. Regionally, states in the west south central portion of the nation experienced an economic decline in 1986 through 1989 precipitated by a sharp drop in the price of crude oil. Similarly, the economic decline experienced by California from 1992 through 1995 placed stress on FHA. Because FHA does substantial business in these regions of the country, these experiences led to substantial losses for FHA. In contrast, the economic decline experienced by the New England states from 1989 through 1991 placed little strain on FHA because insured mortgages in this region do not make up a large portion of FHA's total portfolio. However, experiences similar to the New England downturn, during which the unemployment rate increased by almost 140 percent and house prices decreased by 5.5 percent, could place stress on FHA if they occurred in other regions or the nation as a whole. On the basis of our economic model of FHA's home loan program and forecasts of several key economic factors, we estimate that at the end of fiscal year 1999, the Fund had an economic value of about $15.8 billion. This value, which is 3.20 percent of the unamortized insurance-in-force, reflects the robust economy and relatively high premium rates prevailing through most of the 1990s and the good economic performance forecast for the future. In comparison, Deloitte & Touche estimated that the Fund's 1999 economic value was over $800 million larger than our estimate—or about 3.66 percent of its estimate of FHA's unamortized insurance-in-force. Although we did not evaluate the quality of Deloitte's estimates, we believe that Deloitte's and our estimates are comparable because of the uncertainty inherent in forecasting and the professional judgments made in this type of analysis. 
However, Deloitte's analysis and ours differ in several ways, including the time when the analyses were performed and some of the assumptions made. Using conservative assumptions, we estimate that at the end of fiscal year 1999, the Fund had an economic value of about $15.8 billion. The economic value of the Fund consists of the capital resources on hand and the net present value of future cash flows. Documents used to prepare FHA's 1999 financial statements show that the Fund had capital resources of about $14.3 billion at the end of that fiscal year. We estimated the relationship between historical FHA foreclosures and prepayments and certain key economic factors to forecast foreclosures and prepayments and the resulting cash flows over the next 30-year period for mortgages insured by FHA before the end of fiscal 1999. As a result of this analysis, we estimate that at the end of 1999 the net present value of future cash flows was about $1.5 billion. Summing the capital resources and future cash flows gives us an economic value of about $15.8 billion. See appendix II for a detailed discussion of the forecasting and cash flow models used to estimate the economic value of the Fund. We also estimate that the Fund's capital ratio—the Fund's economic value divided by its insurance-in-force—exceeded 3 percent at the end of fiscal year 1999. From the individual loan data provided by HUD, we calculated that the unamortized insurance-in-force at the end of fiscal year 1999 was about $494 billion and that the amortized value of that insurance, an estimate of the outstanding balance of the loans and thus FHA's insurance liability, was about $455.8 billion. Therefore, the economic value of the Fund represented 3.20 percent of the unamortized insurance-in-force and about 3.47 percent of the amortized insurance-in-force on September 30, 1999. 
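The figures above fit together arithmetically as follows (dollar amounts in billions, taken directly from the report's estimates):

```python
capital_resources = 14.3   # end of FY1999, from FHA's financial statements
npv_future_flows = 1.5     # GAO estimate for loans insured through FY1999
economic_value = capital_resources + npv_future_flows   # about $15.8 billion

unamortized_iif = 494.0    # unamortized insurance-in-force
amortized_iif = 455.8      # estimated outstanding loan balances

ratio_unamortized = 100 * economic_value / unamortized_iif  # ~3.20 percent
ratio_amortized = 100 * economic_value / amortized_iif      # ~3.47 percent
```

The same economic value yields a higher ratio against the amortized base because the amortized insurance-in-force, which nets out principal already repaid, is the smaller denominator.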
The robust economy and the increased premium rates established by the 1990 legislation contributed to the strength of the Fund at the end of fiscal year 1999. The Fund's economic value principally reflects the large amount of capital resources that the Fund has accrued. Because current capital resources are the result of previous cash flows, the robustness of the economy and the higher premium rates throughout most of the 1990s accounted for the accumulation of these substantial capital resources. Good economic times that are accompanied by relatively low interest rates and relatively high levels of employment are usually associated with high levels of mortgage activity and relatively low levels of foreclosure; therefore, cash inflows have been high relative to outflows during this period. The estimated value of future cash flows also contributed to the strength of the Fund at the end of fiscal 1999. As a result of relatively low interest rates and the robust economy, FHA insured a relatively large number of mortgages in fiscal years 1998 and 1999. These loans make up a large portion of FHA's insurance-in-force, because many borrowers refinanced their FHA-insured mortgages originated in earlier years, probably as a result of interest rates having fallen to relatively low levels in 1998 and 1999. Because these recent loans have low interest rates and because forecasts of economic variables for the near future show house prices rising while unemployment and interest rates remain fairly stable, our models predict that these new loans will have low levels of foreclosure and prepayment. As a result, our models predict that future cash flows out of the Fund will be relatively small. At the same time, we assume that FHA-insured homebuyers will continue to pay the annual premiums that were reinstituted in 1991. 
Thus, our models predict that cash flowing into the Fund from mortgages already in FHA's portfolio at the end of fiscal year 1999 will be more than sufficient to cover the cash outflows associated with these loans. As a result, the estimated economic value of the Fund is even higher than the level of its current capital resources. As table 1 shows, Deloitte's independent actuarial analysis of the Fund for fiscal year 1999 estimated a capital ratio that was somewhat higher than ours, 3.66 percent rather than 3.20 percent of unamortized insurance-in-force. Although we did not evaluate the quality of Deloitte's estimates, we did identify some reasons that its estimate of the capital ratio was higher than ours. The ratio is higher because Deloitte estimates both a higher economic value of the Fund and a lower amount of insurance-in-force. Deloitte's higher estimated economic value of the Fund includes a higher estimated value for capital resources on hand that is somewhat offset by a lower estimate of the net present value of future cash flows. Our estimate and that of Deloitte rely on forecasts of foreclosures and prepayments over the next 30 years, and, in turn, these forecasts necessarily rest on forecasts of certain economic factors. In addition, the estimates depend on the choices made concerning a variety of other assumptions. As a result of the inherent uncertainty and the need for professional judgment in this type of analysis, we believe that our estimates and Deloitte's estimates of the Fund's economic value and capital ratio are comparable. Although the estimates are comparable, Deloitte's estimates of capital resources and insurance-in-force differ from ours primarily because the analyses were conducted at different times. Because Deloitte performed its analysis before the end of 1999, it had to estimate some data for which we had year-end values. 
In particular, Deloitte overestimated the 1999 value of capital resources by extrapolating from the 1998 value. In contrast, we used values developed for FHA's 1999 financial statements that were about $1 billion lower than Deloitte's estimate. Using our value for capital resources, Deloitte's estimated capital ratio would be 3.44 percent rather than 3.66 percent of insurance-in-force. Similarly, Deloitte underestimated the number of loans that FHA insured in the fourth quarter of fiscal year 1999 and, thus, underestimated the value of loans insured for all of fiscal year 1999 by about $33 billion, though this appears to have had little effect on the estimated capital ratio. Our analysis of the net present value of future cash flows and that of Deloitte also differ in several respects. Both our estimates and Deloitte's rely on forecasts of future foreclosures and prepayments. In turn, these forecasts are generated from models that are based on estimated relationships between the probability of loan foreclosure and prepayment and key explanatory factors, such as borrowers' home equity and interest and unemployment rates. Our model differs from Deloitte's in the way that it specifies these relationships. For example, Deloitte specified changes in household income as one of the key explanatory factors, while we did not. The analyses also differ in the assumptions made about some future economic values and costs associated with FHA's insurance program. For example, we assumed lower house price appreciation rates and higher discount rates for calculating net present values than did Deloitte. In addition, the analyses differ in the way that they use HUD's data. We used a sample of individual loans while Deloitte grouped loans into categories to do its analysis. Although these factors could be important in identifying why the two estimates differ, we could not quantify their impact because we did not have access to Deloitte's models. 
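The kind of relationship both models estimate can be illustrated with a stylized logistic specification, in which the probability of foreclosure falls with the borrower's home equity and rises with the unemployment rate. The functional form and coefficients below are illustrative assumptions only, not values from GAO's or Deloitte's models:

```python
import math

def annual_foreclosure_probability(equity_share, unemployment_rate,
                                   intercept=-4.0, b_equity=-6.0,
                                   b_unemp=0.25):
    """Stylized foreclosure model: probability falls as the borrower's
    equity share of the house value rises and climbs with the
    unemployment rate (in percent). Illustrative coefficients only."""
    z = intercept + b_equity * equity_share + b_unemp * unemployment_rate
    return 1.0 / (1.0 + math.exp(-z))
```

Under these assumptions, a borrower 5 percent underwater during 8 percent unemployment is far likelier to lose the home than one with 20 percent equity in a 4 percent unemployment economy, which is the qualitative pattern the regional downturn scenarios exploit.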
According to our estimates, worse-than-expected loan performance that could be brought on by moderately severe economic conditions would not cause the estimated value of the Fund at the end of fiscal year 1999 to decline by more than 2 percent of insurance-in-force. However, a few more severe economic scenarios that we examined could result in such poor loan performance that the estimated value of the Fund at the end of fiscal year 1999 could decline by more than 2 percent of insurance-in-force. Two of the three scenarios that showed such a large decline extended adverse conditions more widely than the moderately severe scenarios and, therefore, are less likely to occur. While these estimates suggest that the capital ratios are more than sufficient to protect the Fund at this time from many worse-than-expected loan performance scenarios, factors not fully captured in our models could affect the Fund's ability to withstand worse-than-expected experiences over time. These factors include recent changes in FHA's insurance program and the conventional mortgage market that could affect the likelihood of poor loan performance and the ability of the Fund to withstand that performance. For example, conventional mortgage lenders and private mortgage insurers have recently lowered the required down payment on loans. Such actions may have attracted some lower-risk borrowers who would otherwise have insured their loans with FHA. As a result, the overall riskiness of FHA's portfolio may be greater than we have estimated, making a given amount of capital less likely to withstand future economic downturns than we have predicted. Beginning with the robust economy and the value of the Fund in 1999, our analysis shows that a 2-percent capital ratio appears sufficient to withstand worse-than-expected loan performance that results from moderately severe economic scenarios similar to those experienced over the last 25 years. 
Our model and others that are based on historical experience suggest that falling house prices and high levels of unemployment are likely to produce poor mortgage performance. Thus, to test the Fund's ability to withstand worse-than-expected loan performance, we developed economic scenarios that are based on certain regional downturns and the 1981-82 national recession. We tested the adequacy of the capital ratio using economic scenarios that were based on three recent regional economic downturns—one in the west south central region of the United States that began in 1986, one in New England that began in 1989, and one in California that began in 1992—that produced high mortgage foreclosure rates in those regions. The degree to which these downturns affected the Fund depended on their severity as well as on the volume of mortgages insured by FHA in that region. Thus, while New England suffered a severe downturn in the late 1980s and early 1990s, the Fund did not suffer significantly because the volume of loans that FHA insures in New England represents a small share of FHA's total volume of insured loans. Because regional averages diminish the impact of the adverse economic experience, from each region we selected a state with particularly poor experience as the basis for our scenarios. We also adjusted the scenarios to recognize that the forecasts start from the economic conditions that existed at the end of 1999. See appendix III for further discussion of the scenarios that we used to test the adequacy of FHA's capital ratio. As can be seen in table 2, neither the scenarios that are based on regional downturns nor the scenario that is based on the 1981-82 national recession had much of an effect on the value of the Fund. 
More specifically, in these worse-than-expected scenarios that are based on specific historical experiences, the estimated capital ratio never falls below 2.8 percent, which is only 0.4 percentage points below our estimated capital ratio using expected economic conditions. However, the national recession had the greatest impact because it affected FHA's entire portfolio. Although the Fund's estimated capital ratio at the end of fiscal year 1999 fell by considerably less than 2 percentage points under economic scenarios that are based on recent regional experiences and the 1981-82 national recession, our model suggests that extensions of some historical regional scenarios to broader regions of the country could cause the capital ratio to fall by more than 2 percentage points. Specifically, to test whether a 2-percent capital ratio could withstand more severe economic conditions, we extended the regional scenarios to two regions and then to the nation as a whole. However, we recognize that these extensions are less likely to occur than the historical scenarios that affected a single region. As table 3 shows, if any of these downturns simultaneously hit two regions where FHA has significant business—the west south central and Pacific regions—the estimated capital ratio would be less than 2 percentage points lower than it would be with expected loan performance. In addition, even if the entire nation experienced a downturn similar to two of the three regional downturns that we analyzed, the estimated capital ratio would still fall by less than 2 percentage points. However, a national downturn as severe as that experienced by Massachusetts from 1989 through 1992 would cause our estimate of the 1999 capital ratio to fall by more than 2 percentage points. 
Because we were concerned that the historical scenarios we were considering might not be adequate to test the effect of changes in interest rates, we developed two additional scenarios: one in which mortgage interest rates fall and then a recession sets in and one in which mortgage and other interest rates rise to levels that are higher than those in the expected economic conditions scenario. The first scenario is more likely to exhaust a 2-percent capital ratio. Under a scenario in which mortgage interest rates fall and then a recession sets in, the drop in interest rates might induce some homeowners to refinance their mortgages. For those homeowners who refinance outside of FHA, the Fund would no longer be accumulating revenue in the form of annual premiums; if the homeowners have not had their mortgages for long, they would receive some premium refunds. Moreover, those borrowers who use FHA's streamline refinance provision that allows borrowers to refinance their mortgages without a new appraisal of their home will likely pay annual premiums for fewer years than if they had not refinanced. So, cash outflows would increase and cash inflows would decrease before the recession hits. When the recession hits, cash outflows would increase further because of increased foreclosures among the remaining borrowers. As table 3 shows, our model predicts that the capital ratio would fall substantially—by almost 2 percentage points—under this scenario. A scenario with rising mortgage interest rates will affect various loan types differently. Because the payments on adjustable-rate mortgages increase as interest rates rise, there is an increased likelihood that borrowers with these types of mortgages will default. However, since FHA-insured mortgages are assumable, rising interest rates make fixed-rate mortgages more valuable to those borrowers holding them. This decreases the likelihood that borrowers with these types of mortgages will default. 
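Why assumability makes an existing fixed-rate loan more valuable when rates rise can be seen with a standard annuity calculation: a buyer who assumes a below-market loan saves the difference between the market payment and the contract payment. The figures in the example are hypothetical:

```python
def monthly_payment(balance, annual_rate, years):
    """Level payment on a fully amortizing fixed-rate mortgage."""
    r, n = annual_rate / 12, years * 12
    return balance * r / (1 - (1 + r) ** -n)

def assumption_benefit(balance, contract_rate, market_rate, years_left):
    """Present value (discounted at the market rate) of the payment
    savings from assuming an existing loan rather than borrowing
    the same balance at the current market rate."""
    saving = (monthly_payment(balance, market_rate, years_left)
              - monthly_payment(balance, contract_rate, years_left))
    r, n = market_rate / 12, years_left * 12
    return saving * (1 - (1 + r) ** -n) / r

# Hypothetical: a $100,000 balance at a 7 percent contract rate is
# worth assuming when new loans cost 9 percent.
benefit = assumption_benefit(100_000, 0.07, 0.09, 25)
```

The benefit grows with the gap between the market and contract rates, which is why borrowers holding assumable fixed-rate loans become less likely to default as rates rise.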
Insurance on loans originated in 1998 and 1999 made up 42 percent of FHA's portfolio at the end of fiscal year 1999, and the insured loans are predominantly fixed-rate mortgages. Consequently, it is not surprising that a rising interest rate scenario leads to an increase in the value of the Fund. Because our economic model did not predict regional or national foreclosure rates as high as those experienced during the 1980s in any of our scenarios, we estimated cash flows using foreclosure rates that more closely matched regional experience during the 1980s. Specifically, we assumed that for mortgages originated from 1989 through 1999, foreclosure rates in 2000 through 2004 would equal those experienced from 1986 through 1990 by FHA-insured loans that originated between 1975 and 1985 in a given region. As table 3 shows, the capital ratio fell to 0.92 percent under this scenario. To test an even more severe scenario, one similar to that used by rating agencies for private mortgage insurers, we also calculated future cash flows assuming that foreclosure rates in 2000 through 2004 extended the very poor performance of the west south central mortgages in the 1980s to ever larger portions of FHA's insurance portfolio. As figure 3 shows, we found that if 36.5 percent of FHA-insured mortgages experienced these high default rates, the estimated capital ratio for fiscal year 1999 would fall by 2 percentage points. If about 55 percent of FHA's portfolio experienced these conditions, a less likely event, the capital ratio would be 0. Because our models are based on the relationship between foreclosures and prepayments and certain economic factors from fiscal years 1975 through 1999, they do not account for the potential impact of recent events, such as changes in FHA's program or in the behavior of the conventional mortgage market. 
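The figure 3 relationship can be approximated by connecting the anchor points the analysis reports: a 3.20 percent ratio with no stress, a 2-percentage-point fall when 36.5 percent of the portfolio is stressed, and exhaustion at about 55 percent. Treating the ratio as piecewise linear between those points is an assumption made here for illustration only:

```python
def stressed_capital_ratio(stressed_share):
    """Approximate FY1999 capital ratio (percent) as a function of the
    share of FHA's portfolio (percent) hit by west-south-central-style
    foreclosure rates, interpolating linearly between the reported
    anchor points."""
    anchors = [(0.0, 3.20), (36.5, 1.20), (55.0, 0.0)]
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if x0 <= stressed_share <= x1:
            return y0 + (y1 - y0) * (stressed_share - x0) / (x1 - x0)
    return 0.0  # beyond about 55 percent the Fund is exhausted
```

A read-off like this only restates the reported stress results; the underlying cash-flow models, not the interpolation, determine where the true curve bends.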
In addition, our models assume that no additional changes in FHA's program or the conventional mortgage market that would affect FHA-insured loans originated through 1999 take place during the forecast period, which extends from fiscal years 2000 through 2028. To the extent that any such changes cause foreclosure and prepayment rates on existing FHA-insured loans to be higher or lower than we have predicted, the Fund's capital ratio would be different under the various scenarios we have discussed. Furthermore, our analysis does not attempt to predict how loans insured by FHA after fiscal year 1999 will behave. Future changes in FHA's program, such as the premium changes adopted as of January 1, 2001, or in the conventional mortgage market may make future loans perform better or worse than we might expect from past experience. In addition, these changes may increase or reduce the amount of cash flowing into the Fund and thus its ability to withstand worse-than-expected loan performance in the future. HUD and the Congress can change FHA's insurance program in a variety of ways, including changes in refund policy and underwriting standards. In fact, HUD and the Congress have taken the following actions in recent years that could affect the Fund in ways that are not accounted for in our models: HUD has suggested that it will reinstitute distributive shares and Members of Congress have introduced bills requiring HUD to take that action. The immediate consequence of this action would be that cash flows out of the Fund would be higher than our estimates. During the late 1990s, the Congress required that FHA implement a new loss mitigation program that encourages lenders to take actions to lower defaults on FHA-insured mortgages. The program requires that lenders provide homebuyers with certain options to avoid foreclosure. 
While it is hoped that losses from foreclosures will decline as a result of this program, if foreclosure is simply delayed as a result of forbearance, losses could ultimately be larger in the long run. In either case, actual cash flows would likely be different than our estimates. FHA has also reduced up-front premiums for new homeowners who receive financial counseling before buying a home. If the program reduces the likelihood that these homeowners will default, losses would be lower than we have estimated. HUD has taken action to improve the oversight of lenders and better dispose of properties and is continuing to implement new programs in these areas. Better oversight of lenders could mean that losses on existing business would be lower than we have predicted, and better practices for disposing of property could reduce losses associated with foreclosed properties below the level we have estimated. Our models do not look at cash flows associated with loans that FHA would insure after fiscal year 1999. However, recent and future changes in FHA's insurance program will affect the likelihood that these loans will perform differently than past experience suggests they will. If, for example, FHA loosens underwriting standards, there is a greater likelihood that future loans would perform worse than past experience suggests. In addition, changes in premiums, such as the recent reductions in up-front premiums, could reduce cash inflows into the Fund and, therefore, reduce the Fund's ability to withstand poor loan performance. However, this premium change could also lower the riskiness of the loans FHA insures. Recent changes in the conventional mortgage market, especially changes in FHA's competitors' behavior, may also affect the estimates we have made concerning the Fund's ability to withstand adverse economic conditions over the long run. Homebuyers' demand for FHA-insured loans depends, in part, on the alternatives available to them. 
In recent years, FHA's competitors in the conventional mortgage market—private mortgage insurers and conventional mortgage lenders—are increasingly offering products that compete with FHA's for those homebuyers who are borrowing more than 95 percent of the value of their home. These developments in the conventional mortgage market may have increased the average risk of FHA-insured loans in the late 1990s. In particular, by lowering the required down payment, conventional mortgage lenders and private mortgage insurers may have attracted some borrowers who might otherwise have insured their mortgages with FHA. If, by selectively offering these low down payment loans, conventional mortgage lenders and private mortgage insurers were able to attract FHA's lower-risk borrowers, recent FHA loans with down payments of less than 5 percent may be more risky on average than they have been historically. If this effect, known as adverse selection, has been substantial, the economic value of the Fund may be lower than we estimate, and it may be more difficult for the Fund to withstand worse-than-expected loan performance than our estimates suggest. In addition, should these competitive pressures persist, newly insured loans are likely to perform worse than prior experience would suggest, and then any given capital ratio would be less able to withstand such performance. FHA is taking some action to more effectively compete. For example, FHA is attempting to implement an automated underwriting system that could enhance the ability of lenders underwriting FHA-insured mortgages to distinguish better credit risks from poorer ones. Although this effort is likely to increase the speed with which lenders process FHA- insured loans, it may not improve the risk profile of FHA borrowers unless lenders can lower the price of insurance for better credit risks. 
Several options are available to the Secretary of HUD under current legislative authority that could result in reducing FHA's capital ratio. Other options would require legislative action. Reliably measuring the impacts of these options on the Fund's capital ratio and FHA borrowers is difficult without using tools designed to estimate the multiple impacts that policy changes often have. While HUD has substantially improved its ability to monitor the financial condition of the Fund, neither the models used by HUD to assess the financial health of the Fund, nor those used by others, explicitly recognize the indirect effects of policy changes on the volume and riskiness of FHA's loans. As a result, the impacts of the various policy options on the federal budget are difficult to discern. However, any option that results in a reduction in the Fund’s reserve, if not accompanied by a similar reduction in other government spending or by an increase in receipts, would result in either a reduction in the surplus or an increase in any existing deficit. There are several changes to the FHA single-family loan program that could be adopted if the Secretary of HUD or the Congress believes that the economic value of the Fund is higher than the amount needed to ensure actuarial soundness. For example, actions that the Secretary could take that could reduce the value of the fund include lowering insurance premiums, adjusting underwriting standards, and reinstituting distributive shares. However, congressional action in the form of new legislation would be required to make other program changes that are not now authorized or clearly contemplated by the statute. These would include actions such as changing the maximum amount FHA-insured homebuyers may borrow relative to the price of the house they are purchasing and using the Fund's reserves for other federal programs. 
Generally, the Secretary of HUD, in making any authorized changes to the FHA single-family program, must meet certain operational goals. These operational goals include (1) maintaining an adequate capital ratio, (2) meeting the needs of homebuyers with low down payments and first-time homebuyers by providing access to mortgage credit, (3) minimizing the risk to the Fund and to homeowners from homeowner default, and (4) avoiding adverse selection. Reliably estimating the potential effect of various options on the Fund's capital ratio and FHA borrowers is difficult because the impacts of these policy changes are complex and tools available for handling these complexities may not be adequate. Policy changes have not only immediate, straightforward impacts on the Fund and FHA's borrowers, but also more indirect impacts that may intensify or offset the original effect. Implementing these options could affect both the volume and the average riskiness of loans made, which, in turn, could affect any future estimate of the Fund's economic value. As a result of this complexity, obtaining a reliable estimate would likely require that economic models be used to estimate the indirect effects of policy changes. In 1990, the Congress enacted legislation designed to provide better information on the Fund's financial condition. The Omnibus Budget Reconciliation Act requires annual independent actuarial reviews of the Fund and includes credit reforms that require HUD to estimate, for loans originated in a given year, the net present value of the anticipated cash flows over all the years that the loans will be in existence. The models developed by HUD to comply with these requirements are based on detailed analyses of the Fund's historical claim and loss rates and have improved HUD's ability to monitor the financial condition of the Fund. 
At this time, however, neither the models used by HUD to assess the financial health of the Fund, nor those used by others, explicitly recognize the indirect effects of policy changes on the volume and riskiness of FHA's loans. As a result, HUD cannot reliably estimate the impact of policy changes on the Fund. Although it is difficult to predict the overall impact of a change on the Fund's capital ratio and thus on FHA borrowers as a whole, different options would likely have different impacts on current and prospective FHA-insured borrowers. Many of the proposals to reduce the capital ratio, such as lowering premiums or reinstituting distributive shares, will reduce the price of FHA insurance to the borrower. If no change in the volume of loans FHA insures is considered, then the effect of lowering premiums, for example, clearly would be to lower the economic value of the Fund. However, for two reasons, this price reduction is likely to increase the volume of FHA loans originated, which would increase both premium income and claims against the Fund when some of these new loans default. First, by lowering the price of FHA insurance relative to the price of private mortgage insurance, this premium reduction would likely induce some borrowers who otherwise would have obtained private mortgage insurance to obtain FHA insurance instead, thereby increasing FHA's market share. Second, people who were deferring home purchases because of the high price of FHA insurance might buy homes with FHA insurance once the price is lower. Without a complete analysis of the impact on the volume of loans, reliably estimating the effect of lowering the premiums on the Fund's economic value is difficult. Furthermore, the economic value of the Fund is influenced not only by the volume of loans FHA insures, but also by the riskiness of those loans. 
Therefore, determining the effect a policy change will have on the economic value of the Fund requires determining how the policy will affect the riskiness of FHA-insured loans. In the case of lowering up-front premiums, for example, the new FHA-insured loans could be less risky than FHA's existing loans. As a result, the new loans would be profitable and offset the direct impact of lower premiums. Generally, private mortgage insurers require that borrowers meet higher credit standards than does FHA. So, to the extent that these new FHA borrowers would have obtained private mortgage insurance without the lower premiums, they are likely to have lower risk profiles than the average for all current FHA borrowers. At the same time, lowering up-front premiums is not likely to attract many additional higher-risk borrowers who would previously not have qualified for FHA-insured loans. Because HUD does not have adequate tools to handle the complexities of estimating the ultimate impact of policy changes on the volume of FHA-insured loans and the riskiness of those loans, these factors are not always considered in assessing the impact of policy changes. For example, assuming that the volume and riskiness of FHA-insured loans will not change, HUD estimates that the recent reductions in up-front premiums combined with the introduction of mortgage insurance cancellation policies will lower the estimated value of the Fund by almost $6 billion over the next 6 years. Because this estimate does not consider the possible changes in the volume of loans that will be insured and the riskiness of those loans, it is an estimate only of the direct impact rather than the full impact of policy changes. Similarly, a recent study presents estimates that lowering up-front premiums to 1.5 percent would result in an almost fivefold increase in the likelihood that cash inflows would be less than outflows over a random 10-year period. 
However, this study notes that it did not look at how these changes would affect the riskiness of new loans. The complexity of estimating the impact of policy changes on the Fund implies that economic models would be needed to reliably estimate the likely outcomes. The most likely sources for such models would be the studies that compute the economic value of the Fund; however, the models HUD and others have been using to assess the financial health of the Fund do not explicitly recognize the impact of policy changes on the economic value of the Fund. Instead, they assume that FHA's market share remains static. Although it is difficult to predict the overall impact of a change on the Fund's capital ratio and thus on FHA borrowers as a whole, different options would likely have different impacts on various FHA-insured borrowers. Some proposals would more likely benefit existing and future FHA-insured borrowers, while others would benefit only future borrowers, and still others would benefit neither of these groups. One interpretation of the higher premiums that borrowers paid during the period in which the economic value of the Fund has been rising is that borrowers during the 1990s “overpaid” for their insurance. Some options for reducing the capital ratio, such as reinstituting distributive shares, would be more likely to compensate these borrowers. Paying distributive shares would benefit certain existing borrowers who voluntarily terminate their mortgages. If these policies continued into the future, they would also benefit future policyholders. Alternatively, reducing up-front premiums, reducing the number of years over which annual insurance premiums must be paid, or relaxing underwriting standards would tend to benefit only future borrowers. 
Policy options that propose to use some of FHA's capital resources for spending on other programs would benefit neither existing nor future FHA-insured borrowers, but would instead benefit the recipients of the programs receiving the new expenditures. For example, reducing the capital ratio by shifting funds from the Fund to subsidize multifamily housing may primarily benefit renters rather than single-family homeowners. However, over time such a policy could be sustained only so long as FHA borrowers continue to pay premiums higher than the cost to FHA of insuring single-family mortgages. Because of the difficulty in reliably measuring the effect on the Fund's capital ratio of most actions that could be taken either by the Secretary of HUD or the Congress, we cannot precisely measure the effect of these policies on the budget. However, any actions taken by the Secretary or the Congress that influence the Fund's capital ratio will have a similar effect on the federal budget. Specifically, any proposal that results in a reduction in the Fund’s reserve, if not accompanied by a similar reduction in other government spending or by an increase in receipts, would result in either a reduction in the surplus or an increase in any existing deficit. If the Secretary or the Congress adopts policies, such as paying distributive shares or relaxing underwriting standards, that could reduce the profitability of the Fund, both the negative subsidy amount reported in FHA's budget submission and the Fund's reserve would be lower. Some of these policies—such as paying distributive shares—would affect FHA's cash flows immediately. Thus, the amount of money available for FHA to invest in Treasury securities would be lower. The Treasury, in turn, would have less money available for other purposes, and any overall surplus would decline or any deficit would rise. 
If the amounts of cash flowing out of the Fund exceeded current receipts, FHA would be required to redeem its investments in Treasury securities to make the required payments. The Treasury, then, would be required to either increase borrowing from the public or use general tax revenues to meet its financial obligations to FHA. In either case, any annual budget surplus would be lower or deficit higher. At the end of fiscal year 1999, the Fund had a capital ratio that exceeded 2 percent of FHA's insurance-in-force—the minimum required by law; however, whether the Fund was actuarially sound is not so clear. Neither the statute nor HUD has established criteria to determine how severe a stress the Fund should be able to withstand, that is, what constitutes actuarial soundness. Our results show that as of the end of fiscal year 1999, only the most severe circumstances that we analyzed would cause the current economic value of the Fund to fall below zero. One method of determining actuarial soundness would be to estimate the value of the Fund under various economic and other scenarios. In our analysis, the required minimum capital ratio of 2 percent appears sufficient to cover most of the adverse economic scenarios we tested, although it would not be possible to maintain the minimum under all scenarios. Nonetheless, we urge caution in concluding that the estimated value of the Fund today implies that the Fund could withstand the specified economic scenarios regardless of the future activities of FHA or the market. Our estimates and those of others are valid only under a certain set of conditions, including that loans FHA recently insured respond to economic conditions similarly to those it insured in the more distant past, and that cash inflows associated with future loans at least offset outflows associated with those loans. However, HUD is changing several policies that may affect the volume and quality of its future business. 
Further, adverse economic events cannot be predicted with certainty; therefore, we cannot attach a likelihood to any of the scenarios that we tested (even though we recognize that it is less likely that a severe economic downturn will affect the whole nation than one or two regions). In considering the uncertainty of the future, it is instructive to remember that the Fund had an even higher capital ratio in 1979, when the economic value of the Fund equaled 5.3 percent of insurance-in-force, but in little more than a decade—after a national recession, the substitution of an up-front premium for annual insurance premiums, and regional real estate declines—the economic value of the Fund was negative. Thus, it is important to periodically reevaluate the actuarial soundness of the Fund. Today, FHA knows more about the condition of the Fund but could still improve its evaluation of the impact that unexpected economic downturns and policy changes may have on the Fund. HUD has already taken some action that it estimates will lower the value of the Fund, including reducing up-front insurance premiums on newly insured mortgages. HUD has done so without the tools necessary to reliably measure the multiple impacts that these policies are likely to have. While the direct impact of policies that are likely to reduce the Fund's capital ratio can be estimated with the models used in the actuarial reviews, those models cannot isolate the indirect effects on the volume of loans insured by FHA and the riskiness of those loans. The Congress may want to consider taking action to amend the laws governing the Fund to specify criteria for determining when the Fund is actuarially sound. Because we believe that actuarial soundness depends on a variety of factors that could vary over time, setting a minimum or target capital ratio will not guarantee that the Fund will be actuarially sound over time. 
For example, if the Fund were composed primarily of seasoned loans with known characteristics, a capital ratio below the current 2-percent minimum might be adequate, but under conditions such as those that prevail today, when the Fund is composed of many new loans, a 2-percent ratio might be inadequate if recent and future loans perform considerably worse than expected. Thus, the Congress may want to consider defining the types of economic conditions under which the Fund would be expected to meet its commitments without borrowing from the Treasury. If the Congress decides that no further guidance is necessary, we recommend that HUD develop criteria for measuring the actuarial soundness of the Fund to better evaluate the health of the Fund and determine the appropriate types and timing of policy changes. These criteria should specify the economic conditions that the Fund would be expected to withstand and may specify capital ratios currently consistent with those criteria. Because many conditions affect the adequacy of a given capital ratio, we recommend that the independent annual actuarial analysis give more attention to tests of the Fund's ability to withstand appropriate stresses. These tests should include more severe scenarios that capture worse-than-expected loan performance that may be due to economic conditions and other factors, such as changes in policy and the conventional mortgage market. To more fully assess the impact of policy changes that are likely to permanently affect the profitability of certain FHA-insured loans, we recommend that the Secretary of HUD develop better tools for assessing the impact these changes may have on the volume and riskiness of loans that FHA insures. Such analysis is particularly important where the policy change permanently affects certain loans, as in the case of underwriting and premium changes. 
Without a better analytical framework to assess the full impact of policy changes that permanently affect certain loans, we recommend that such changes be made in small increments so that their impact can be monitored and adjustments can be made over time. We provided a draft of this report to the Secretary of HUD for his review and comment. HUD agreed with the report's findings regarding the estimated value of the Fund and the ability of the Fund to withstand moderately severe economic downturns that could lead to worse-than-expected loan performance. However, HUD expressed concern that the report did not note the probability of the most stressful scenarios we tested and FHA's ability to react to adverse developments. HUD also thought our reference to the substantial decline in the capital ratio that occurred during the 1980s left a false impression that the Fund is currently in jeopardy. In addition, HUD expressed concern that the report did not fully recognize the improvements it has made in analyzing policy changes and monitoring the performance of the Fund and disagreed with our recommendations. HUD's letter is reproduced in appendix IV. In response, we clarified that scenarios in which we extend historical adverse economic conditions more widely are less likely to occur. However, we cannot attribute a probability to any scenario we used. We also acknowledge that the annual actuarial reviews and the annual reestimates of the Fund required under the housing and credit reforms of 1990 enable HUD to better monitor the performance of the Fund and, therefore, react to adverse developments. However, we remain concerned that HUD's analyses of policy changes do not fully recognize the impact that these policy changes may have on the volume of loans FHA will insure and the riskiness of those loans. We also disagree that the reference to the decline in the capital ratio experienced in the 1980s implies that the Fund is in jeopardy today. 
In fact, this example serves to illustrate that changes in the economy and HUD policy can have a dramatic impact on the value of the Fund. With regard to our recommendation that HUD develop criteria for measuring the actuarial soundness of the Fund, HUD seems to infer that we believe a static capital ratio should be the criterion for measuring actuarial soundness. We do not recommend a static capital ratio for measuring actuarial soundness. Rather, we believe that it is important to measure actuarial soundness under different economic and other scenarios; therefore, we recommend that HUD specify the conditions that the Fund would be expected to withstand. We revised this recommendation to make clear that the definition of actuarial soundness should consider the economic conditions that the Fund would be expected to withstand. Regarding our recommendation that the independent annual actuarial analysis give more attention to tests of the Fund's ability to withstand appropriate stresses, HUD noted that it believed it was already complying with this recommendation and asked that our report define more specifically what tests are needed. In response, we clarified that the annual actuarial review should include more severe scenarios that capture worse-than-expected loan performance that may be due to economic conditions and other factors, such as changes in HUD policy and the conventional mortgage market. HUD's recent actuarial analysis included two scenarios—an interest rate spike scenario and a lower house price appreciation scenario—for testing the value of the Fund under a stressed economic state, and in neither scenario do house prices decline or unemployment rates rise. With regard to our recommendation concerning tools for assessing the impact of policy changes, HUD disagreed that any tools are needed beyond those that it already has. 
Specifically, HUD cites the annual analyses done in compliance with the Federal Credit Reform Act of 1990 and its annual actuarial reviews that already focus on policy changes. Further, HUD notes that it has made its program data more accessible for policy analysis through the creation of the Single Family Data Warehouse. However, we remain concerned that HUD does not have adequate tools for assessing the full impact that policy changes may have. Tools such as models for estimating the change in demand and the risk characteristics of future loans would enable HUD to better estimate the full impact that policy changes may have on the value of the Fund. HUD also disagreed with the idea that any policy actions it takes should be only incremental and reversible. We revised our recommendation to make clear that incremental changes are appropriate where a policy change permanently affects certain loans. Copies of this report will be distributed to interested congressional committees; the Honorable Mel Martinez, Secretary of the Department of Housing and Urban Development; the Honorable Mitchell E. Daniels, Jr., the Director of the Office of Management and Budget; and the Honorable Dan L. Crippen, the Director of the Congressional Budget Office. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-8678. Key contributors to this report are listed in appendix V. To estimate the economic value of the Federal Housing Administration's (FHA) Fund as of September 30, 1999, and its resulting capital ratio, we developed econometric and cash flow models. These models were based on models that we developed several years ago for this purpose. 
In developing the earlier models, we examined existing studies of the single- family housing programs of both the Department of Housing and Urban Development (HUD) and the Department of Veterans Affairs (VA); academic literature on the modeling of mortgage foreclosures and prepayments; and previous work that Price Waterhouse (now PricewaterhouseCoopers), HUD, VA, ourselves, and others had performed on modeling government mortgage programs. For our current analysis, we modified our previous models on the basis of our examination of work performed recently by PricewaterhouseCoopers, Deloitte & Touche, and others; discussions we held with analysts familiar with modeling mortgage foreclosures and prepayments; and program changes made by FHA since our previous work was performed. For these models, we used data supplied by FHA and Standard & Poor’s DRI, a private economic forecasting company. We also used information from FHA’s independent actuarial reviews in our analysis. Our econometric analysis estimated the historical relationships between the probability of loan foreclosure and prepayment and key explanatory factors, such as the borrower's equity and the interest rate. To estimate these relationships, we used HUD’s A-43 data on the default and prepayment experience of FHA-insured home mortgage loans that originated from fiscal years 1975 through 1999. To test the validity of our econometric models, we examined how well the models predicted the actual rates of FHA's loan foreclosures and prepayments through fiscal year 1999. We found that our predicted rates closely resembled the actual rates. Next, we used our estimates of these relationships and forecasts of future economic conditions provided by Standard & Poor’s DRI to develop a baseline forecast of future loan foreclosures and prepayments for loans that were active at the end of fiscal year 1999. 
To estimate the net present value of future cash flows of the Fund under expected economic conditions, we used our forecast of future loan foreclosures and prepayments in conjunction with a cash flow model that we developed to measure the primary sources and uses of cash for loans that originated from fiscal years 1975 through 1999. Our cash flow model was constructed to estimate cash flows for each policy year through the life of a mortgage. An important component of the model was the conversion of all income and expense streams—regardless of the period in which they are actually forecasted to occur—into their 1999 present value equivalents. We then added the forecasted 1999 present values of the future cash flows to the current cash available to the Fund, which we obtained from documents used to prepare FHA's 1999 audited financial statements, to estimate the Fund's economic value and resulting capital ratio. A detailed discussion of our models and methodology for estimating the economic value and capital ratio of the Fund appears in appendix II. To compare our estimates of the Fund's economic value and capital ratio with the estimates prepared for FHA by Deloitte & Touche, we reviewed Deloitte's report and met with its analysts and HUD officials to learn more about that study's methodology, data, and assumptions. To determine the extent to which a capital ratio of 2 percent would allow the Fund to withstand worse-than-expected loan performance, we developed various scenarios for future economic conditions that we anticipated would result in substantially worse loan performance than we forecasted in our scenario using expected economic conditions. We based these scenarios on the economic conditions that led to episodes of relatively high foreclosure rates for FHA single-family loans in certain regions of the country at different times during the 1975 through 1999 period and on those experienced nationally during the 1981-82 recession. 
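The economic-value and capital-ratio arithmetic described above can be sketched in a simplified form. This is an illustration only; the function names and all dollar figures are hypothetical, and the actual models discount many distinct income and expense streams for each origination year rather than a single net series.

```python
# Simplified sketch of the economic-value and capital-ratio arithmetic
# described above. All figures are hypothetical illustrations, not FHA data.

def present_value(annual_net_cash_flows, discount_rate):
    """Discount forecasted annual net cash flows to their base-year value."""
    return sum(cf / (1 + discount_rate) ** (year + 1)
               for year, cf in enumerate(annual_net_cash_flows))

def economic_value(capital_resources, annual_net_cash_flows, discount_rate):
    """Economic value = current capital resources plus the present value
    of forecasted future net cash flows."""
    return capital_resources + present_value(annual_net_cash_flows,
                                             discount_rate)

# Hypothetical example (in billions of dollars): $14 in capital resources,
# three years of forecasted net inflows, a 6-percent discount rate, and
# $450 of unamortized insurance-in-force.
value = economic_value(14.0, [1.0, 0.8, 0.6], 0.06)
ratio = value / 450.0  # capital ratio = economic value / insurance-in-force
```

The same arithmetic applies under any scenario; only the forecasted cash flows change with the assumed economic conditions.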
We developed additional scenarios that extended the adverse regional economic conditions to larger sections of the country to analyze how well the Fund could withstand conditions even worse than what we had experienced in the past 25 years. We also developed some additional scenarios with even higher foreclosure rates to further analyze the Fund's ability to withstand adverse conditions. Under each of the scenarios that we developed, we used our estimated relationships between foreclosure and prepayment rates and various explanatory factors, and the future economic conditions implied by the scenarios, to forecast future foreclosures and prepayments for loans that were active at the end of fiscal year 1999. We then used these forecasts, in conjunction with our cash flow model, to estimate the economic value and capital ratio of the Fund under each scenario. The difference between these estimates and our estimate under expected economic conditions shows whether each scenario is likely to result in a reduction of the Fund's economic value of more than 2 percent and, therefore, whether a 2-percent capital ratio is likely to be sufficient to allow the Fund to withstand the worse-than-expected loan performance associated with such a scenario. Our analysis of the adequacy of FHA’s capital ratio is limited to the performance of loans in FHA's portfolio as of the end of fiscal year 1999. That is, our analysis assesses the likelihood that an economic value of 2 percent of the unamortized insurance-in-force would be sufficient to cover the excess of future payments over future cash inflows (on a net present value basis) on those loans if they perform worse than expected. Our analysis of the ability of the Fund to withstand various adverse economic conditions requires making the assumption that the adverse conditions would not also cause loans insured by FHA after fiscal year 1999 to be an economic drain on the Fund. 
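The comparison described above, checking whether a scenario's reduction in economic value exceeds 2 percent of insurance-in-force, can be sketched as follows. The figures are hypothetical; the actual test rests on the full scenario-by-scenario model estimates.

```python
# Sketch of the stress-test comparison described above: does an adverse
# scenario reduce the Fund's economic value by more than 2 percent of
# unamortized insurance-in-force? All figures are hypothetical.

def scenario_exceeds_cushion(baseline_value, scenario_value,
                             insurance_in_force, minimum_ratio=0.02):
    """Return True if the loss of economic value under the scenario is
    larger than the statutory minimum capital cushion."""
    loss = baseline_value - scenario_value
    return loss > minimum_ratio * insurance_in_force

# Hypothetical: an $8 billion loss against $450 billion in force is less
# than the 2-percent cushion of $9 billion, so the minimum ratio holds.
sufficient = not scenario_exceeds_cushion(16.0, 8.0, 450.0)
```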
Since the 1990 reforms, the cash flows associated with each year’s loans have been estimated to have a positive economic value, thereby adding to the economic value of the entire Fund. However, during adverse economic times, new loans might perform worse than loans that were insured by FHA during the 1990s. If the newly insured loans perform so poorly that they have a negative economic value, then the loss to the Fund in any of the adverse economic scenarios that we have considered would be greater than what we have estimated. Alternatively, if the newly insured loans have positive economic values, then the Fund would continue to grow. To identify other factors, such as recent program and market changes, that could cause worse-than-expected loan performance, we reviewed the laws and regulations governing FHA’s insurance program, studied recent actuarial reviews of the Fund, and interviewed experts. We considered these other factors because the relationships estimated in our econometric models are based on historical relationships since 1975. As a result, these models might not capture the effects of recent changes in FHA programs or the conventional mortgage market on the likelihood that loans insured in the late 1990s will foreclose or prepay. In addition, our forecasts of future cash flows assume that FHA’s program and the private mortgage market will not change over the 30-year forecast period in any way that would affect FHA-insured loans originated through 1999. To identify options for adjusting the size of the Fund and determining the impact that these options might have, we reviewed the laws and regulations governing FHA’s insurance program and proposals to use the Fund’s economic value or otherwise change FHA’s insurance program. Additionally, we interviewed experts both within and outside the federal government. When available, we collected HUD’s estimates of the impact of various options on the Fund and the estimates of other experts. 
To determine the impact of these changes on the federal budget, we relied on our own experts as well as those at the Office of Management and Budget and the Congressional Budget Office. We conducted our review from December 1999 to February 2001 in accordance with generally accepted government auditing standards. We built econometric and cash flow models to estimate the economic value of FHA's Mutual Mortgage Insurance Fund (Fund) as of the end of fiscal year 1999. The goal of the econometric analysis was to forecast mortgage foreclosure and prepayment activity, which affect the flow of cash into and out of the Fund. We forecasted activity for all loans active at the end of fiscal year 1999 for each year from fiscal years 2000 to 2028 on the basis of assumptions stated in this appendix. We estimated equations from data covering fiscal years 1975 through 1999 that included all 50 states and the District of Columbia, but excluded U.S. territories. Our econometric models used observations on loan years—that is, information on the characteristics and status of an insured loan during each year of its life—to estimate conditional foreclosure and prepayment probabilities. These probabilities were estimated using observed patterns of prepayments and foreclosures in a large set of FHA-insured loans. More specifically, our model used logistic equations to estimate the logarithm of the odds ratio, from which the probability of a loan's foreclosure (or prepayment) in a given year can be calculated. These equations are expressed as a function of interest and unemployment rates, the borrower's equity (computed using a house's price and current and contract interest rates as well as a loan's duration), the loan-to-value (LTV) ratio, the loan's size, the geographic location of the house, and the number of years that the loan has been active. 
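The logistic form described above can be sketched as follows. The coefficients and covariates here are hypothetical placeholders, not the estimated values from the regressions; the point is only the mapping from a linear log-odds index to a conditional probability.

```python
import math

# Sketch of the logistic specification described above: the log of the
# odds of foreclosure (or prepayment) in a given year is modeled as a
# linear function of explanatory variables. Coefficients are hypothetical.

def conditional_probability(intercept, coefficients, covariates):
    """Convert a linear log-odds index into a probability via the
    logistic function."""
    log_odds = intercept + sum(b * x
                               for b, x in zip(coefficients, covariates))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical covariates: borrower equity, state unemployment rate
# (percent), and loan age (years). Negative coefficient on equity:
# more equity, lower foreclosure odds.
p_foreclose = conditional_probability(-4.0, [-3.0, 0.15, 0.05],
                                      [0.10, 5.0, 3.0])
```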
The results of the logistic regressions were used to estimate the probabilities of a loan being foreclosed or prepaid in each year. FHA pays a claim on a foreclosed mortgage and sometimes, depending on the age of the loan, refunds a portion of the up-front premium when a mortgage prepays. These two actions contribute to cash outflows. Cash inflows are generated when FHA sells foreclosed properties and when borrowers pay mortgage insurance premiums. We forecasted the cash flows into and out of the Fund on the basis of our foreclosure and prepayment models and key economic variables. We then used the forecasted cash flows, including an estimate of interest that would be earned, and the Fund's capital resources to estimate the economic value of the Fund. We prepared separate estimates for fixed-rate mortgages, adjustable rate mortgages (ARMs), and investor loans. The fixed-rate mortgages with terms of 25 years or more (long-term loans) were divided between those that refinanced and those that were purchase money mortgages (mortgages associated with home purchase). Separate estimates were prepared for each group of long-term loans. Likewise, investor loans were divided between mortgages that refinanced and the loans that were purchase money mortgages. We prepared separate estimates for each group of investor loans (refinanced and purchase money mortgages). A separate analysis was also prepared for loans with terms that were less than 25 years (short-term loans). A complete description of our models, the data that we used, and the results that we obtained is presented in detail in the following sections. In particular, this appendix describes (1) the sample data that we used; (2) our model specification and the independent variables in the regression models; (3) the model results; (4) the cash flow model, with emphasis on key economic variables; and (5) a sensitivity analysis that demonstrates the sensitivity of our forecasts to the values of some key variables. 
For our analysis, we selected from FHA's computerized files a 10-percent sample of records of mortgages insured by FHA from fiscal years 1975 through 1999 (1,465,852 loans). For the econometric models related to long-term, fixed-rate mortgages, we used 25 percent of the long-term loans in our sample. From the FHA records, we obtained information on the initial characteristics of each loan, such as the year of the loan's origination and the state in which the loan originated; LTV ratio; loan amount; and contract interest rates. We categorized the loans as foreclosed, prepaid, or active as of the end of fiscal year 1999. To describe macroeconomic conditions at the national and state levels, we obtained data from Standard & Poor's DRI, by state, on annual civilian unemployment rates and data from the 2000 Economic Report of the President on the implicit price deflator for personal consumption expenditures. We used Standard & Poor's DRI data on quarterly interest rates for 30-year mortgages on existing housing along with its forecast data, at the state level, on median house prices and civilian unemployment rates, and at the national level, on interest rates on 1- and 10-year U.S. Treasury securities. People buy houses for consumption and investment purposes. Normally, people do not plan to default on loans. However, conditions that lead to defaults do occur. Defaults may be triggered by a number of events, including unemployment, divorce, or death. These events are not likely to trigger defaults if the owner has positive equity in his/her home because the sale of the home with realization of a profit is better than the loss of the home through foreclosure. However, if the property is worth less than the mortgage, these events may trigger defaults. Prepayments of home mortgages can also occur. 
These may be triggered by events such as declining interest rates, which prompt refinancing, and rising house prices, which prompt borrowers to take out accumulated equity or sell the residence. Because FHA mortgages are assumable, the sale of a residence does not automatically trigger prepayment. For example, if interest rates have risen substantially since the time that the mortgage was originated, a new purchaser may prefer to assume the seller's mortgage. We hypothesized that foreclosure behavior is influenced by, among other things, the (1) level of unemployment, (2) size of the loan, (3) value of the home, (4) current interest rates, (5) contract interest rates, (6) home equity, and (7) region of the country within which the home is located. We hypothesized that prepayment behavior is influenced by, among other things, the (1) difference between the interest rate specified in the mortgage contract and the mortgage rates generally prevailing in each subsequent year, (2) amount of accumulated equity, (3) size of the loan, and (4) region of the country in which the home is located. Our first regression model estimated conditional mortgage foreclosure probabilities as a function of a variety of explanatory variables. In this regression, the dependent variable is a 0/1 indicator of whether a given loan was foreclosed in a given year. We weighted each loan-year observation by the outstanding mortgage balance, expressed in inflation-adjusted dollars. Our foreclosure rates were conditional on the loan's having survived to that year. We estimated conditional foreclosure probabilities with a logistic regression equation. Logistic regression is commonly used when the variable to be estimated is the probability that an event, such as a loan's foreclosure, will occur. We regressed the dependent variable (whose value is 1 if foreclosure occurs and 0 otherwise) on the explanatory variables previously listed. Our second regression model estimated conditional prepayment probabilities. 
The independent variables included a measure that is based on the relationship between the current mortgage interest rate and the contract rate, the primary determinant of a mortgage's refinance activity. We further separated this variable between ratios above and below 1 to allow for the possibility of different marginal impacts in higher and lower ranges. The variables that we used to predict foreclosures and prepayments fall into two general categories: descriptions of states of the economy and characteristics of the loan. In choosing explanatory variables, we relied on the results of our own and others’ previous efforts to model foreclosure and prepayment probabilities and on implications drawn from economic principles. We allowed for many of the same variables to affect both foreclosure and prepayment. The single most important determinant of a loan’s foreclosure is the borrower’s equity in the property, which changes over time because (1) payments reduce the amount owed on the mortgage and (2) property values can increase or decrease. Equity is a measure of the current value of a property compared with the current value of the mortgage on that property. Previous research strongly indicates that borrowers with small amounts of equity, or even negative equity, are more likely than other borrowers to default. We computed the percentage of equity as 1 minus the ratio of the present value of the loan balance evaluated at the current mortgage interest rate, to the current estimated house price. For example, if the current estimated house price is $100,000, and the value of the mortgage at the current interest rate is $80,000, then equity is .2 (20 percent), or 1-(80/100). To measure equity, we calculated the value of the mortgage as the present value of the remaining mortgage, evaluated at the current year's fixed-rate mortgage interest rate. 
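The equity measure and the worked example above can be expressed directly. This is a minimal sketch; the present-value computation for the remaining mortgage balance is omitted, and its result is simply passed in.

```python
# Sketch of the equity measure described above: equity equals 1 minus the
# ratio of the mortgage's present value (evaluated at the current
# interest rate) to the current estimated house price.

def borrower_equity(mortgage_present_value, current_house_price):
    return 1.0 - mortgage_present_value / current_house_price

# The example from the text: a house currently estimated at $100,000 with
# a mortgage valued at $80,000 gives equity of 0.2, that is, 20 percent.
equity = borrower_equity(80_000, 100_000)
```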
We calculated the value of a property by multiplying the value of that property at the time of the loan’s origination by the change in the state's median nominal house price, adjusted for quality changes, between the year of origination and the current year. Because the effects on foreclosure of small changes in equity may differ depending on whether the level of equity is large or small, we used a pair of equity variables, LAGEQHIGH and LAGEQLOW, in our foreclosure regression. The effect of equity is lagged 1 year, as we are predicting the time of foreclosure, which usually occurs many months after a loan first defaults. We anticipated that higher levels of equity would be associated with an increased likelihood of prepayment. Borrowers with substantial equity in their home may be more interested in prepaying their existing mortgage and taking out a larger one to obtain cash for other purposes. Borrowers with little or no equity may be less likely to prepay because they may have to take money from other savings to pay off their loan and cover transaction costs. For the prepayment regression, we used a variable that measures book equity—the estimated property value less the amortized balance of the loan—instead of market equity. It is book value, not market value, that the borrower must pay to retire the debt. Additionally, the important effect of interest rate changes on prepayment is captured by two other equity variables, RELEQHI and RELEQLO, which are sensitive to the difference between a loan's contract rate and the interest rate on 30-year mortgages available in the current year. These variables are described below. We included an additional set of variables in our regressions related to equity: the initial LTV ratio. We entered LTV as a series of dummy variables, depending on its size. Loans fit into eight discrete LTV categories. 
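A minimal sketch of the house-price updating and the two-piece equity split follows. The 20-percent knot used to split LAGEQLOW from LAGEQHIGH is an assumed illustration, since the report does not state the actual split point, and the sketch omits the quality adjustment applied to the median price series.

```python
def updated_house_value(value_at_origination, median_then, median_now):
    """Scale the appraised value at origination by the change in the
    state's median house price between origination and the current year."""
    return value_at_origination * (median_now / median_then)

EQUITY_KNOT = 0.20  # hypothetical split point; the report does not state the actual knot

def split_equity(equity, knot=EQUITY_KNOT):
    """Split lagged equity into a 'low' piece (up to the knot) and a
    'high' piece (above it), letting the regression fit a different
    slope in each range."""
    lag_eq_low = min(equity, knot)
    lag_eq_high = max(equity - knot, 0.0)
    return lag_eq_low, lag_eq_high
```

With this split, a loan moving from 10 percent to 35 percent equity changes both pieces, so small equity changes can have different marginal effects at low and high equity levels.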
In some years, FHA measured LTV as the loan amount less mortgage insurance premium financed in the numerator of the ratio and appraised value plus closing costs in the denominator. To reflect true economic LTV, we adjusted FHA’s measure by removing closing costs from the denominator and including financed premiums in the numerator. A borrower’s initial equity can be expressed as a function of LTV, so we anticipated that if LTV was an important predictor in an equation that also includes a variable measuring current equity, it would probably be positively related to the probability of foreclosure. One reason for including LTV is that it measures initial equity accurately. Our measures of current equity are less accurate because we do not have data on the actual rate of change in the mortgage loan balance or the actual rate of house price change for a specific house. Loans with higher LTVs are more likely to foreclose. For the long-term nonrefinanced equation, the ARM equation, and the short-term equation, we deleted the lower category of LTV loans. We expected LTV to have a positive sign in the foreclosure equations at higher levels of LTV. LTV in our foreclosure equations may capture the effects of income constraints. We were unable to include borrowers’ income or payment-to-income ratio directly because data on borrowers’ income were not available. However, it seems likely that borrowers with little or no down payment (high LTV) are more likely to be financially stretched in meeting their payments and, therefore, more likely to default. The anticipated relationship between LTV and the probability of prepayment is uncertain. For some loan type categories, we used down payment information directly, rather than the series of LTV variables. We defined down payment to ensure that closing costs were included in the loan amount and excluded from the house price. 
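The LTV adjustment described above can be expressed directly; the function names and example figures are ours. The adjustment removes closing costs from the denominator and keeps the financed premium in the numerator.

```python
def fha_ltv(loan_with_premium, financed_premium, appraised_value, closing_costs):
    """LTV as recorded by FHA in some years: financed premium excluded
    from the numerator, closing costs included in the denominator."""
    return (loan_with_premium - financed_premium) / (appraised_value + closing_costs)

def economic_ltv(loan_with_premium, appraised_value):
    """'True' economic LTV: the full loan amount (including any financed
    premium) over the appraised value alone."""
    return loan_with_premium / appraised_value
```

For any loan with a positive financed premium or positive closing costs, the economic LTV exceeds FHA's recorded measure.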
We used the annual unemployment rates for each state for the period from fiscal years 1975 through 1999 to measure the relative condition of the economy in the state where a loan was made. We anticipated that foreclosures would be higher in years and states with higher unemployment rates and that prepayments would be lower because property sales slow down during recessions. The actual variable we used in our regressions, LAGUNEMP, is defined as the logarithm of the preceding year’s unemployment rate in that state. We included the logarithm of the interest rate on the mortgage as an explanatory variable in the foreclosure equation. We expected a higher interest rate to be associated with a higher probability of foreclosure because high interest rates cause a higher monthly payment. However, in explaining the likelihood of prepayment, our model uses information on the level of current mortgage rates relative to the contract rate on the borrower's mortgage. A borrower's incentive to prepay is high when the interest rate on a loan is greater than the rate at which money can now be borrowed, and it diminishes as current interest rates increase. In our prepayment regression, we defined two variables, RELEQHI and RELEQLO. RELEQHI is defined as the ratio of the market value of the mortgage to the book value of the mortgage but is never smaller than 1. RELEQLO is also defined as the ratio of the market value of the mortgage to the book value but is never larger than 1. When currently available mortgage rates are lower than the contract interest rate, market equity exceeds book equity because the present value of the remaining payments evaluated at the current rate exceeds the present value of the remaining payments evaluated at the contract rate. Thus, RELEQHI captures a borrower’s incentive to refinance, and RELEQLO captures a new buyer’s incentive to assume the seller’s mortgage. 
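The RELEQHI and RELEQLO definitions reduce to a floored and a capped version of the same ratio; a minimal sketch:

```python
def relative_equity_vars(market_value, book_value):
    """RELEQHI / RELEQLO: ratio of the market value of the mortgage to
    its book value, floored and capped at 1, respectively."""
    ratio = market_value / book_value
    releqhi = max(ratio, 1.0)  # exceeds 1 when current rates are below the contract rate
    releqlo = min(ratio, 1.0)  # falls below 1 when current rates are above the contract rate
    return releqhi, releqlo
```

Only one of the two variables differs from 1 at any time, so RELEQHI isolates the refinancing incentive and RELEQLO isolates the assumption incentive.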
We created two 0/1 variables, REFIN and REFIN2, that take on a value of 1 if a borrower had not taken advantage of a refinancing opportunity in the past and 0 otherwise. We defined a refinancing opportunity as having occurred if the interest rate on fixed-rate mortgages in any previous year in which a loan was active was at least 200 basis points below the rate on the mortgage in any year up through 1994 or 150 basis points below the rate on the mortgage in any year after 1994. REFIN takes a value of 1 if the borrower had passed up a refinancing opportunity at least once in the past. REFIN2 takes on a value of 1 if the borrower had passed up two or more refinancing opportunities in the past. Several reasons might explain why borrowers passed up apparently profitable refinancing opportunities. For example, if they had been unemployed or their property had fallen in value they might have had difficulty obtaining refinancing. This reasoning suggests that REFIN and REFIN2 would be positively related to the probability of foreclosure; that is, a borrower unable to obtain refinancing previously because of poor financial status might be more likely to default. Similar reasoning suggests a negative relationship between REFIN and REFIN2 and the probability of prepayment; a borrower unable to obtain refinancing previously might also be unlikely to obtain refinancing currently. A negative relationship might also exist if a borrower’s passing up one profitable refinancing opportunity reflected a lack of financial sophistication that, in turn, would be associated with passing up additional opportunities. However, a borrower who anticipated moving soon might pass up an apparently profitable refinancing opportunity to avoid the transaction costs associated with refinancing. In this case, there might be a positive relationship, with the probability of prepayment being higher if the borrower fulfilled his/her anticipation and moved, thereby prepaying the loan. 
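A sketch of the REFIN and REFIN2 flags, assuming rates are expressed in basis points and the fixed-rate market rate is available for each fiscal year in which the loan was active; the 200- and 150-basis-point thresholds follow the definition above.

```python
def refi_flags(contract_rate_bp, market_rate_by_year):
    """REFIN / REFIN2: 1 if the borrower passed up at least one (at least
    two) refinancing opportunities, 0 otherwise; rates in basis points."""
    def threshold(year):
        # a 200 bp spread defines an opportunity through 1994, 150 bp after
        return 200 if year <= 1994 else 150
    missed = sum(1 for year, market_rate in market_rate_by_year.items()
                 if contract_rate_bp - market_rate >= threshold(year))
    return (1 if missed >= 1 else 0), (1 if missed >= 2 else 0)
```

For example, a 9-percent (900 bp) loan facing a 6.5-percent market rate in 1993 has missed one opportunity; a second sufficiently low rate year after 1994 sets REFIN2 as well.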
Another explanatory variable is the volatility of interest rates, INTVOL, which is defined as the standard deviation of the monthly average of the Federal Home Loan Mortgage Corporation’s series of 30-year, fixed-rate mortgage effective interest rates. We calculated the standard deviation over the previous 12 months. Financial theory predicts that borrowers are likely to refinance more slowly at times of volatile rates because there is a larger incentive to wait for a still-lower interest rate. We also included the slope of the yield curve, YC, in our prepayment estimates, which we calculated as the difference between the 1- and 10- year Treasury rates of interest. We then subtracted 250 basis points from this difference and set differences that were less than 0 to 0. This variable measured the relative attractiveness of ARMs versus fixed-rate mortgages; the steeper the yield curve, the more attractive ARMs would be. When ARMs have low rates, borrowers with fixed-rate mortgages may be induced into refinancing into ARMs to lower their monthly payments. For ARMs, we did not use relative equity variables as we did with fixed-rate mortgages. Instead, we defined four variables, CHANGEPOS, CHANGENEG, CAPPEDPOS, and CAPPEDNEG, to capture the relationship between current interest rates and the interest rate paid on each mortgage. CHANGEPOS measures how far the interest rate on the mortgage has increased since origination, with a minimum of 0, while CHANGENEG measures how far the rate has decreased, with a maximum of 0. CAPPEDPOS measures how much farther the interest rate on the mortgage would rise, if prevailing interest rates in the market did not change, while CAPPEDNEG measures how much farther the mortgage’s rate would fall, if prevailing interest rates did not change. 
For example, if an ARM was originated at 7 percent and interest rates increased by 250 basis points 1 year later, CHANGEPOS would equal 100 because FHA’s ARMs can increase by no more than 100 basis points in a year. CAPPEDPOS would equal 150 basis points, since the mortgage rate would eventually increase by another 150 basis points if market interest rates did not change, and CHANGENEG and CAPPEDNEG would equal 0. Because interest rates have generally trended downwards since FHA introduced ARMs, there is very little experience with ARMs in an increasing interest rate environment. We created nine 0/1 variables to reflect the geographic distribution of FHA loans and included them in both regressions. Location differences may capture the effects of differences in borrowers’ income, underwriting standards by lenders, economic conditions not captured by the unemployment rate, or other factors that may affect foreclosure and prepayment rates. We assigned each loan to one of the nine Bureau of the Census (Census) divisions on the basis of the state in which the borrower resided. The Pacific division was the omitted category; that is, the regression coefficients show how each of the regions was different from the Pacific division. We also created a variable, JUDICIAL, to indicate states that allowed judicial foreclosure procedures in place of nonjudicial foreclosures. We anticipated that the probability of foreclosure would be lower where judicial foreclosure procedures were allowed because of the greater time and expense required for the lender to foreclose on a loan. To obtain an insight into the differential effect of relatively larger loans on mortgage foreclosures and prepayments, we assigned each loan to 1 of 10 loan-size categorical variables (LOAN1 to LOAN10). The omitted category in our regressions was loans between $80,000 and $90,000, and results on loan size are relative to those loans between $80,000 and $90,000. 
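The worked example above can be reproduced in a short sketch. The adjustment model here (the mortgage rate moves toward the cumulative market change, limited to 100 basis points per elapsed year) is a simplification that ignores lifetime caps and the exact timing of rate resets.

```python
ANNUAL_CAP_BP = 100  # FHA ARM rates can move at most 100 basis points per year

def arm_rate_vars(market_change_bp, years_elapsed, annual_cap=ANNUAL_CAP_BP):
    """Split the cumulative market rate change since origination into the
    four ARM variables: the realized change in the mortgage rate
    (CHANGEPOS/CHANGENEG) and the change still to come if market rates
    stay put (CAPPEDPOS/CAPPEDNEG)."""
    max_move = annual_cap * years_elapsed
    realized = max(-max_move, min(max_move, market_change_bp))
    changepos = max(realized, 0)
    changeneg = min(realized, 0)
    remaining = market_change_bp - realized
    cappedpos = max(remaining, 0)
    cappedneg = min(remaining, 0)
    return changepos, changeneg, cappedpos, cappedneg
```

With a 250-basis-point market increase one year after origination, this returns CHANGEPOS = 100 and CAPPEDPOS = 150, matching the example in the text.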
All dollar amounts are inflation-adjusted and represent 1999 dollars. The number of units covered by a single mortgage was a key determinant in deciding which loans were more likely to be investor loans. Loans were noted as investor loans if the LTV ratio was between specific values, depending on the year of the loan, or if there were two or more units covered by the loan. Once a loan was identified as an investor loan, we separated the refinanced loans from the purchase money mortgages and performed foreclosure and payoff analyses on each. For each of the investor equations, we used two dummy variables defined according to the number of units in the dwelling. LIVUNT2 has the value of 1 when a property has two dwelling units and a value of 0 otherwise. LIVUNT3 has a value of 1 when a property has three or more dwelling units and a value of 0 otherwise. The omitted category in our regressions was investors with one unit. Our database covers only loans with no more than four units. To capture the time pattern of foreclosures and prepayments (given the effects of equity and the other explanatory variables), we defined seven variables on the basis of the number of years that had passed since the year of the loan's origination. We refer to these variables as YEAR1 to YEAR7 and set them equal to 1 during the corresponding policy year and 0 otherwise. Finally, for those loan type categories for which we did not estimate separate models for refinancing loans and nonrefinancing loans, we created a variable called REFINANCE DUMMY to indicate whether a loan was a refinancing loan. Table 4 summarizes the variables that we used to predict foreclosures and prepayments. Table 5 presents mean values for our predictor variables for each mortgage type for which we ran a separate regression. As previously described, we used logistic regressions to model loan foreclosures and prepayments as a function of a variety of predictor variables. 
We estimated separate regressions for fixed-rate purchase money mortgages (and refinanced loans) with terms over and under 25 years, ARMs, and investor loans. We used data on loan activity throughout the life of the loans for loans originated from fiscal years 1975 through 1999. We weighted the regressions by the outstanding loan balance of each observation. The logistic regressions estimated the probability of a loan being foreclosed or prepaid in each year. The standard errors of the regression coefficients are biased downward because the errors in the regressions are not independent. The observations are on loan years, and the error terms are correlated because the same underlying loan can appear several times. However, we did not view this downward bias as a problem because our purpose was to forecast the dependent variables, not to test hypotheses concerning the effects of independent variables. In general, our results are consistent with the economic reasoning that underlies our models. Most importantly, the probability of foreclosure declines as equity increases, and the probability of prepayment increases as the current mortgage interest rate falls below the contract mortgage interest rate. As shown in tables 6 and 7, both of these effects occur in each regression model and are very strong. These tables present the estimated coefficients for all of the predictor variables for the foreclosure and prepayment equations. Table 6 shows our foreclosure regression results. As expected, the unemployment rate is positively related to the probability of foreclosure and negatively related to the probability of prepayment. Our results also indicate that generally the probability of foreclosure is higher when LTV and contract interest rate are higher. The overall goodness of fit was satisfactory: Chi-Square statistics were significant on all regressions at the 0.01-percent level. 
Because the coefficients from a nonlinear regression can be difficult to interpret, we transformed some of the coefficients for the long-term, nonrefinanced, fixed-rate regressions into statements about changes in the probabilities of foreclosure and prepayment. Overall conditional foreclosure probabilities for this mortgage type are estimated to be about 0.5 percent. In other words, on average, there is a ½ of a 1-percent chance for a loan of this type to result in a claim payment in any particular year. By holding other predictor variables at their mean values, we can describe the effect on the conditional foreclosure probability of changes in the values of predictor variables of interest. For example, if the average value of the unemployment rate were to increase by 1 percentage point from its mean value (in our sample) of about 6 percent to about 7 percent, the conditional foreclosure probability would increase by about 20 percent (from 0.5 percent to about 0.6 percent). Similarly, a 1-percentage-point increase in the mortgage contract rate from its mean value of about 9.25 to about 10.25 would also raise the conditional foreclosure probability by 20 percent (from about 0.5 percent to about 0.6 percent). Values of homeowners' equity of 10 percent, 20 percent, 30 percent, and 40 percent result in conditional foreclosure probabilities of 0.8 percent, 0.7 percent, 0.5 percent, and 0.3 percent, respectively, illustrating the importance of increased equity in reducing the probability of foreclosure. (These probabilities are computed as exp(sum of Xi*Bi)/(1 + exp(sum of Xi*Bi)), where Xi refers to the mean value of the ith explanatory variable and the Bi are the estimated coefficients.) Overall conditional prepayment probabilities for this mortgage type are estimated to be about 4.8 percent; that is, in any particular year, about 4.8 percent of the loan dollars outstanding will prepay, on average. Prepayment probability is quite sensitive to the relationship between the contract interest rate and the currently available mortgage rates. We modeled this relationship using RELEQHI and RELEQLO. 
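The transformation of logit coefficients into probability statements works by holding all predictors at their means and moving one predictor at a time. The intercept and coefficient below are illustrative values chosen to produce probabilities of roughly the magnitudes cited in the text; they are not FHA's estimated coefficients.

```python
import math

def logistic(z):
    """Logistic function mapping a linear index to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def prob_at(coeffs, means, overrides=None):
    """Conditional probability with every predictor held at its mean
    value, except those explicitly overridden."""
    x = dict(means)
    if overrides:
        x.update(overrides)
    z = coeffs["intercept"] + sum(coeffs[name] * v for name, v in x.items())
    return logistic(z)
```

With illustrative values (an intercept of -7.413 and a coefficient of 1.183 on the logarithm of the unemployment rate), moving unemployment from 6 to 7 percent raises a roughly 0.5-percent base probability by about 20 percent, mirroring the calculation described above.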
Holding other variables at their mean values, if the spread between mortgage rates available in each year and the contract interest rate widened by one percentage point, the conditional prepayment probability would increase by about 80 percent to 8.6 percent. To test the validity of our model, we examined how well the model predicted actual patterns of FHA’s foreclosure and prepayment rates through fiscal year 1999. Using a sample of 10 percent of FHA’s loans made from fiscal years 1975 to 1999, we found that our predicted rates closely resembled actual rates. The economic value of the Fund is defined in the Omnibus Budget Reconciliation Act of 1990 as the “current cash available to the Fund, plus the net present value of all future cash inflows and outflows expected to result from the outstanding mortgages in the Fund.” We obtained information on the capital resources of the Fund from documents used to prepare FHA’s audited financial statements. These capital resources were reported to be $14.3 billion. To estimate the net present value of future cash flows of the Fund, we constructed a cash flow model to estimate the five primary future outflows and inflows of cash through 2028 resulting from the books of business written from fiscal years 1975 through 1999. Cash flows out of the fund from payments associated with claims on foreclosed properties, refunds of up-front premiums on mortgages that are prepaid, and administrative expenses for management of the program. Cash flows into the fund from income from mortgagees' insurance premiums and from the net proceeds from the sale of foreclosed properties. To estimate the Fund's cash flow, we first forecasted, for active loans at the end of 1999, the dollar value of loans predicted to foreclose or prepay in any year through 2028. From those estimates, we derived estimates of the outstanding principal balances for the loans remaining active for each year in the forecast period. 
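The definition of economic value can be sketched as capital resources plus the net present value of the five cash flows. The flat discount rate here is a simplification of the report's actual discounting, which applies period-specific Treasury rates.

```python
def net_cash_flow(premiums, net_proceeds, claims, refunds, admin):
    """Annual net flow into the Fund: premium income and net sales
    proceeds, less claim payments, premium refunds, and administrative
    expenses."""
    return premiums + net_proceeds - claims - refunds - admin

def economic_value(capital_resources, annual_net_flows, discount_rate):
    """Economic value = current capital resources plus the present value
    of future net cash flows (flat rate, for simplicity of illustration)."""
    npv = sum(flow / (1.0 + discount_rate) ** t
              for t, flow in enumerate(annual_net_flows, start=1))
    return capital_resources + npv
```

For example, $14.3 billion in capital resources plus a single discounted future net inflow yields the Fund's estimated economic value under this simplified scheme.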
Our cash flow model used these estimates of foreclosure and prepayment dollars and outstanding principal balances to derive estimates of each of the primary cash flows. We forecasted future loan activity (foreclosures and prepayments) on the basis of the regression results described above and forecasts of the key economic and housing market variables made by Standard & Poor's DRI. Standard & Poor's DRI forecasts the median sales price of existing housing, by state and year, through fiscal year 2005. We assumed that after 2005 those prices would rise at 3 percent per year. In creating the borrower's equity variable, we used DRI forecasts of existing housing prices by state and subtracted 2 percentage points per year to adjust for improvements in the quality of housing over time and the depreciation of individual housing units. We also subtracted another 1 percentage point per year from the company’s forecasts, to be conservative. We made similar adjustments to our assumed value of median house price change for the years beyond the range of these forecasts. We used DRI forecasts of each state’s unemployment rate and assumed that rates from fiscal year 2026 on would equal the rates in 2025. We also used Standard & Poor's DRI forecasts of interest rates on 30-year mortgages and 1- and 10-year Treasury securities. Using the results of the econometric model, the cash flow model estimates cash flows for each policy year through the life of a mortgage. An important component of the model is converting all income and expense streams—regardless of the period in which they actually occurred—into 1999 present value dollars. We applied discount rates to match as closely as possible the rate of return FHA likely earned in the past or would earn in the future from its investment in Treasury securities. 
As an approximation of what FHA earned for each book of business, we used a rate of return comparable to the yield on 7-year Treasury securities prevailing when that book was written to discount all cash flows occurring in the first 7 years of that book's existence. We assumed that after 7 years, the Fund's investment was rolled over into new Treasury securities at the interest rate prevailing at that time and used that rate to discount cash flows to the rollover date. For rollover dates occurring in fiscal year 1999 and beyond, we used 6 percent as the new discount rate. As an example, cash flows associated with the fiscal year 1992 book of business and occurring from fiscal years 1992 through 1998 (i.e., the first 7 policy years) were discounted at the 7-year Treasury rate prevailing in fiscal year 1992. Cash flows associated with the fiscal year 1992 book of business but occurring in fiscal year 1999 and beyond are discounted at a rate of 6 percent. Our methodology for estimating each of the five principal cash flows is described below. Because FHA’s premium policy has changed over time, our calculations of premium income to the Fund change depending on the date of the mortgage’s origination. We describe all premium income, including up- front premiums, even though they play no role in estimating the future cash flows for the Fund at the end of fiscal year 1999. For loans originating from fiscal years 1975 through 1983, premiums equal the annual outstanding principal balance times 0.5 percent. For loans originating from fiscal years 1984 through June 30, 1991, premiums equal the original loan amount times the mortgage insurance premium. The mortgage insurance premium during this period was equal to 3.8 percent for 30-year mortgages and 2.4 percent for 15-year mortgages. Because there are no annual premiums for this group of loans, the future cash flows would include no premium income. 
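The two-stage discounting scheme can be sketched for a single book of business, discounting back to the origination year. In the report's application, the rollover rate is the Treasury rate prevailing at the rollover date, or 6 percent for rollovers occurring in fiscal year 1999 and beyond; the single rollover at year 7 below is the simplification described in the text.

```python
def discount_factor(policy_year, seven_year_rate_at_origination, rollover_rate):
    """Discount a cash flow occurring in a given policy year back to the
    origination year: the 7-year Treasury rate prevailing at origination
    applies to the first 7 policy years, and the rollover rate applies
    to years beyond the seventh."""
    if policy_year <= 7:
        return (1.0 + seven_year_rate_at_origination) ** -policy_year
    return ((1.0 + seven_year_rate_at_origination) ** -7
            * (1.0 + rollover_rate) ** -(policy_year - 7))
```

So for a 1992 book, a 1999 cash flow (policy year 8 and beyond) is discounted at the origination-year rate for 7 years and at 6 percent thereafter.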
For the purposes of this analysis, mortgages of other lengths of time are grouped with those they most closely approximate. Effective July 1, 1991, FHA added an annual premium of 0.5 percent of the outstanding principal balance to its up-front premiums. The number of years for which a borrower would be liable for making premium payments depended on the LTV ratio at the time of origination. (See tables 8 and 9.) For loans originating from July 1, 1991, through the time of our review, premiums equal the original loan amount times the respective up-front premium plus the product of the annual outstanding principal balance times the respective annual premium rate for as many years as annual premiums were required. Some loans that originated in the 1990s are streamline refinanced mortgages that are subject to different premium rates. Since streamline refinances do not require an appraisal, we decided that mortgages coded in FHA’s database with an LTV of 0 could reasonably be assumed to represent streamline refinance business. For streamline refinance mortgages that originated before July 1, 1991, we applied the premium rates from table 10. For all streamline refinance mortgages that originated after July 1, 1991, we applied the premium rates for non-streamline loans. That is, for up-front premium rates, we followed the 15-year or 30-year non-streamline premium schedule for loans of those maturities. For annual premium rates and number of years that annual premiums are paid, we applied the rates for loans with an LTV of less than 90 percent. Claim payments equal the outstanding principal balance on foreclosed mortgages times the acquisition cost ratio. We defined the acquisition cost ratio as being equal to the total amount paid by FHA to settle a claim and acquire a property (i.e., FHA’s "acquisition cost" as reported in its database) divided by the outstanding principal balance on the mortgage at the time of foreclosure. 
For the purposes of our analysis, we calculated an average acquisition cost ratio for each year's book of business using actual data for fiscal years 1975 through 1999. Acquisition cost ratios generally decreased over time from a high of 1.51 for loans originating in 1975 to a low of 1.09 for loans originating in 1999. FHA's net proceeds from the sale of foreclosed properties depend on both the lag rate—the proportion of a year that passes between the time of a foreclosure and the time the proceeds are received—and the loss rate—the proportion of the cost of the property acquired that is not recovered when the property is sold. These are calculated as follows: Net Proceeds = lag rate x claim payments from previous period x (1 - loss rate) + (1 - lag rate) x claim payments from the current period x (1 - loss rate). The lag, which is the number of months between the payment of a claim and the receipt of proceeds from the disposition of the property, varied as follows: before 1995, the lag was 5.9 months; in 1995, 5.35 months; in 1996, 4.7 months; and in 1997, 5.26 months. For the years after 1997, we used a lag of 5.26 months. To calculate the lag rate for each period, we divided the lag by 12. We defined the loss rate as equal to FHA's reported dollar loss after the disposition of property divided by the reported acquisition cost over the historical period. We determined a loss rate for each year per book of business for years 1 through 25. We used an auto-regressive model to forecast future loss rates. In addition to past values of loss rates, we used the origination year and policy year of the loan as independent variables in this model. Using the results of this model, we forecast loss rates over the period from fiscal years 2000 through 2023. For fiscal years 2024 through 2028, we used the estimated rate for 2023. Our loss rates averaged 37 percent over the forecast period. 
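The net proceeds calculation can be sketched directly, with the lag rate expressed as the fraction of a year (the lag in months divided by 12): the lagged share of the previous period's claims and the remaining share of the current period's claims are both recovered net of the loss rate.

```python
def lag_rate(lag_months):
    """Fraction of a year between claim payment and receipt of sales
    proceeds (e.g., a 5.26-month lag gives a rate of about 0.44)."""
    return lag_months / 12.0

def net_proceeds(prev_claims, current_claims, lag_months, loss_rate):
    """Proceeds received this period: the lagged share of last period's
    claim payments plus the remaining share of this period's claim
    payments, both net of the loss rate."""
    lr = lag_rate(lag_months)
    recovery = 1.0 - loss_rate
    return lr * prev_claims * recovery + (1.0 - lr) * current_claims * recovery
```

With a 6-month lag and the 37-percent average loss rate, half of last period's claims are recovered this period at 63 cents on the dollar.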
The amount of premium refunds paid by FHA depends on the policy year in which the mortgage is prepaid and the type of mortgage. For mortgages prepaid between October 1, 1983, and December 31, 1993, refunds were equal to the original loan amount times the refund rate. However, we converted these rates to express them as a percentage of the up-front premium. In 1993, FHA changed its refund policy to affect mortgages prepaid on or after January 1, 1994. For loans prepaying on or after January 1, 1994, refunds are equal to the up-front mortgage insurance premium times the refund rate. (See table 11.) Administrative expenses equal the outstanding principal balance times the administrative expense rate. The estimates of the administrative expense rates were 0.098 percent for the years before 1995, 0.113 percent for 1995, 0.097 percent for 1996, 0.102 percent for 1997, and 0.103 percent for 1998 and all future years. We conducted additional analyses to determine the sensitivity of our forecasts to the values of certain key variables. Because we found that projected losses from foreclosures are sensitive to the rates of unemployment and house price appreciation, we adjusted the forecasts of unemployment and price appreciation to provide a range of estimates of the Fund's economic value under alternative economic scenarios. Our starting points for forecasts of the key economic variables were forecasts made by Standard & Poor's DRI, as previously described. For our low case scenario, we made these forecasts more pessimistic by subtracting 2 percentage points per year from the forecasts of house price appreciation rates and adding 1 percentage point per year to the unemployment rate forecasts. For our high case scenario, we added 2 percentage points per year to our base case forecast of house price appreciation rates. 
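The two refund regimes can be sketched as originally defined (the report notes that it converted the earlier, loan-amount-based rates into percentages of the up-front premium). The refund rates passed in come from FHA's schedules, which vary with the policy year of prepayment; the amounts in the example are illustrative.

```python
def premium_refund(prepaid_on_or_after_jan_1994, refund_rate,
                   original_loan, upfront_premium):
    """Refund paid when a mortgage prepays: a share of the original loan
    amount for prepayments before January 1, 1994, and a share of the
    up-front mortgage insurance premium on or after that date."""
    base = upfront_premium if prepaid_on_or_after_jan_1994 else original_loan
    return base * refund_rate
```

The refund is one of the three cash outflows in the cash flow model, alongside claim payments and administrative expenses.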
Under these alternatives, we estimated economic values of about $13.6 billion and about $16.4 billion, respectively, for the low and high cases, compared with about $15.8 billion for our base case. These estimates correspond to estimates of the capital ratio of about 2.75 percent and 3.32 percent, respectively, for the low and high cases, compared with our base case estimate for the capital ratio of 3.20 percent. These estimates are shown in table 12. To assess the impact of our assumptions about the loss and discount rates on the economic value of the Fund, we operated our cash flow model with alternative values for these variables. We found that for the economic scenario of our base case, a 1-percentage-point increase in the forecasted loss rate resulted in a 0.7-percent decline in our estimate of the economic value of the Fund. Conversely, each percentage point decrease in the loss rate resulted in a 0.7-percent increase in our estimate of economic value. With respect to the discount rate, we found that for our base case economic scenario, a 1-percentage-point increase in the interest rate applied to most periods’ future cash flow resulted in a 0.3-percent increase in our estimate of economic value. Conversely, each percentage point decrease in the discount rate resulted in a 0.4-percent decrease in our estimate of economic value. This appendix describes the scenarios that we used to estimate the ability of the Fund to withstand adverse future economic conditions. Each scenario specifies values of key economic variables, which our models indicate are associated with mortgage claims and prepayments, during the forecast period. We used these values with the forecasting models presented in appendix II to estimate future mortgage claims and prepayments. We then used these forecasted values of claim and prepayment dollars in our cash flow model to estimate the economic value of the Fund and the capital ratio under each scenario. 
We developed two types of scenarios—historical and judgmental. We designed the historical scenarios to test the ability of the Fund to withstand adverse economic conditions similar to those that adversely affected the Fund in the 1980s and 1990s. Because some of these adverse conditions affected only certain regions, in some scenarios we expanded our analysis to include estimates of the capital ratio when the historical conditions were assumed to affect a larger share of FHA's business, including when they were assumed to affect the entire nation. In contrast, the judgmental scenarios that we developed are not based on historical experience. Instead, they represent conditions that we believe might place stress on the Fund. The key economic variables for which we forecast different values in the different scenarios are the rate of house price appreciation; the unemployment rate; and, in some instances, certain interest rates, especially the mortgage interest rate. In addition, we assumed that FHA's loss per claim (the loss rate), expressed as a percentage of the claim amount, was greater than the loss rate that we used in our base case analysis under expected economic conditions. We assumed that FHA would experience higher loss rates when foreclosures were substantially higher because of the difficulty of managing and disposing of a large number of properties at the same time. In addition, the demand for housing would be likely to fall during an economic downturn, making it more difficult to dispose of properties than in the base case. Three regional economic downturns and the 1981-82 national recession form the bases of our historical scenarios. Each regional downturn was associated with a regional decline in house prices. Declining house prices represent a particularly adverse condition for the Fund because of the strong negative relationship between borrowers' equity and the probability of defaults leading to foreclosures. 
The three regional economic downturns, and associated housing price declines, that we used were (1) the late 1980s' decline in the oil-producing states of the west south central region; (2) the late 1980s' and early 1990s' decline in New England; and (3) the early to mid-1990s' decline in the Pacific region, particularly in California. For each scenario that is based on a regional downturn, we assumed that for 4 years the rate of house price change for the part of the nation assumed to be affected by the downturn equaled the rate of house price change in the state in that region that we selected to represent the regional experience. We selected the experiences of (1) Louisiana, beginning in 1986, to represent the oil price downturn; (2) Massachusetts, beginning in 1988, to represent the New England economic downturn; and (3) California, beginning in 1991, to represent the California housing market downturn. Table 13 shows the median house prices for existing houses in these states during their economic downturns. In calculating homeowner's equity, we made the same adjustment to annual changes in median house prices that we did in our base case, as described in appendix II. Similarly, in our scenarios that are based on regional downturns, we assumed that unemployment rates would change in the affected area for 4 years by the same percentages as those rates changed in Louisiana; Massachusetts; and California, respectively. We developed six separate scenarios that are based on each regional downturn, by varying the scope (i.e., the number of states assumed to be affected) and timing of the adverse economic conditions in the forecast period. Specifically, we used three different scopes. In the narrowest scope, we assumed that only the particular region was affected. 
That is, for the scenario based on the downturn in the west south central region in the late 1980s, we assumed that during 4 years of the forecast period, all of the states in the west south central region experienced the same changes in key economic variables as Louisiana experienced from 1987 through 1990. We then expanded the scope by assuming that two regions in which FHA insures a large number of borrowers, the west south central and Pacific regions, were affected. Finally, we expanded the scope to the entire nation, by assuming that all states were affected. Regarding timing, for each scope we developed two scenarios, one in which the downturn began in 2000 and one in which it began in 2001. Although we know that an economic downturn did not begin in 2000, we developed scenarios starting then to test the ability of the Fund to withstand an economic downturn that occurs when the portfolio contains many recent loans. Scenarios in which the downturn does not begin until 2001 would be expected to be less adverse because most of the large number of borrowers who took out mortgages in 1998 and 1999 would have seen substantial price appreciation in 2000, thereby reducing the likelihood of default. We developed two historical scenarios that are based on the 1981-82 recession and subsequent recovery. In those scenarios, we assumed that in each state, the rates of change in house price appreciation and unemployment for 5 years during the forecast period are the same as they were from 1981 through 1985. In one scenario, we assumed that these adverse conditions replicating 1981 through 1985 began in 2000; in the other scenario, we assumed that they began in 2001. Under these scenarios, some states fared better than in the base case scenario. Because it will be more difficult to manage and dispose of foreclosed properties during an economic downturn, we increased the loss rates on the proportion of mortgages affected by a given scenario during the years the scenario runs.
We assumed that losses on affected foreclosed properties would rise to 45 percent of the property’s value. Without this adjustment, loss rates average about 37 percent. Our estimates of the economic value of the Fund and the capital ratio for the historical scenarios are presented in table 14.

Judgmental Scenarios

We developed several judgmental scenarios to test the ability of the Fund to withstand various types of economic conditions that might adversely affect the Fund without regard to their relationship to historical experience. In one scenario, we assumed that median existing house prices declined by 5 percent per year for 3 consecutive years—an extremely steep rate of decline—and that unemployment increased compared with the base case, with both changes beginning in 2001. Specifically, we increased the unemployment rates in each state from forecasted levels by 2 percentage points in 2001; 5 percentage points in 2002, 2003, and 2004; and 2 percentage points in 2005. In a second scenario, we allowed the mortgage interest rate to decline in 2000—by 2 percentage points from its forecasted level—and then to return to forecasted levels. We did this to precipitate a wave of refinancing. We also assumed declining house prices and rising unemployment beginning in 2001, as in the previous judgmental scenario. We used this scenario to test what might happen if premium income turns out to be substantially less than expected and premium refunds substantially more than expected because of rapid prepayment of loans, most of which would not default. In our third scenario, we added 1 percentage point to the base case forecasts of the mortgage interest rate, and 1- and 10-year Treasury rates for the year 2000, 3 percentage points to the forecasts of these interest rates between 2001 and 2003, and 1 percentage point in 2004. We used this scenario to test what might happen if interest rates were to rise more than anticipated.
In a fourth scenario, we used the same rising interest rates as in the third scenario and also added one percentage point to the forecasts of median existing house prices over that period. Our estimates of the economic value of the Fund and the capital ratio for the judgmental scenarios are also presented in table 14. In another type of judgmental scenario, we did not forecast the economic variables and then use the forecasted claims and prepayments from our econometric model, as we did with all of our other scenarios, both judgmental and historical. Instead, because none of our other scenarios produced foreclosure rates nearly as high as FHA experienced in the 1980s, we developed two scenarios in which we directly assumed higher foreclosure rates. First, we assumed that in 2000 through 2004, the proportion of loans insured in each region experienced for the 1989 through 1999 books of business the same foreclosure rates that the 1975 through 1985 books of business experienced in that region in 1986 through 1990. This scenario produced a capital ratio of 0.92 percent. Second, we assumed that in 2000 through 2004, varying proportions of FHA’s portfolio experienced for the 1989 through 1999 books of business the same foreclosure rates that the 1975 through 1985 books of business experienced in the west south central states in 1986 through 1990. Because streamline refinanced mortgages and ARMs did not exist or were minimal parts of FHA’s portfolio from 1975 through 1985, foreclosure rates were not adjusted for these types of loans. For the other products—30-year fixed- rate, 15-year fixed-rate, investor, and graduated payment mortgages— foreclosure rates were adjusted accordingly for each type of product. 
For this scenario, we found that if 36.5 percent of FHA-insured mortgages experienced these high default rates, the estimated capital ratio for fiscal year 1999 would fall by 2 percentage points, and if about 55 percent of FHA's portfolio experienced these conditions, the economic value would be depleted.

In addition to those named above, Nancy Barry, Elaine Boudreau, Steve Brown, Jay Cherlow, Kimberly Granger, DuEwa Kamara, John McDonough, Salvatore F. Sorebllo Jr., Mark Stover, and Patrick Valentine made key contributions to this report.

Related GAO Products

Financial Health of the Federal Housing Administration's Mutual Mortgage Insurance Fund (GAO/T-RCED-00-287, Sept. 12, 2000).
Level of Annual Premiums That Place a Ceiling on Distributions to FHA Policyholders (GAO/RCED-00-280R, Sept. 8, 2000).
Single-Family Housing: Stronger Measures Needed to Encourage Better Performance by Management and Marketing Contractors (GAO/T-RCED-00-180, May 16, 2000, and GAO/RCED-00-117, May 12, 2000).
Single-Family Housing: Stronger Oversight of FHA Lenders Could Reduce HUD's Insurance Risk (GAO/RCED-00-112, Apr. 28, 2000).
Homeownership: Results of and Challenges Faced by FHA's Single-Family Mortgage Insurance Program (GAO/T-RCED-99-133, Mar. 25, 1999).
Homeownership: Achievements of and Challenges Faced by FHA's Single-Family Mortgage Insurance Program (GAO/T-RCED-98-217, June 2, 1998).
Homeownership: Management Challenges Facing FHA's Single-Family Housing Operations (GAO/T-RCED-98-121, Apr. 1, 1998).
Homeownership: Mixed Results and High Costs Raise Concerns about HUD's Mortgage Assignment Program (GAO/RCED-96-2, Oct. 18, 1995).
Homeownership: Information on Single Family Loans Sold by HUD (GAO/RCED-99-145, June 15, 1999).
Homeownership: Information on Foreclosed FHA-Insured Loans and HUD-Owned Properties in Six Cities (GAO/RCED-98-2, Oct. 8, 1997).
Homeownership: Potential Effects of Reducing FHA's Insurance Coverage for Home Mortgages (GAO/RCED-97-93, May 1, 1997).
Homeownership: FHA's Role in Helping People Obtain Home Mortgages (GAO/RCED-96-123, Aug. 13, 1996).
Mortgage Financing: FHA Has Achieved Its Home Mortgage Capital Reserve Target (GAO/RCED-96-50, Apr. 12, 1996).
The Mutual Mortgage Insurance Fund has maintained an economic value of at least two percent of the Fund's insurance-in-force, as required by law. GAO's and the Department of Housing and Urban Development's (HUD) analyses show that the Fund had economic values of $15.8 billion (a 3.20 percent capital ratio) and $16.6 billion (3.66 percent), respectively. Given the economic value of the Fund and the state of the economy at the end of fiscal year 1999, a two-percent capital ratio appears sufficient to withstand moderately severe economic downturns that could lead to worse-than-expected loan performance. However, under more severe economic conditions, an economic value of two percent of insurance-in-force would not be adequate. Because of the uncertainty and professional judgment associated with this type of economic analysis, GAO cautions against relying on one estimate or even a group of estimates to determine the adequacy of the Fund's reserves over the longer term. HUD could exercise several options under current legislative authority to reduce the capital ratio for the Fund. It is difficult, however, to reliably measure the impact of policy changes on the Fund's capital ratio and Federal Housing Administration borrowers without using tools designed to estimate the multiple impacts that policy changes often have. Nonetheless, any option that reduces the Fund's reserve, if not accompanied by a similar reduction in other government spending, would result in a reduction of the budget surplus or an increase in the deficit.
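As a back-of-the-envelope check, the capital ratio is the Fund's economic value divided by its insurance-in-force; the insurance-in-force figure below is inferred from the report's own numbers rather than stated directly:

```python
# Capital ratio = economic value / insurance-in-force.
# Insurance-in-force is inferred here from GAO's reported figures
# ($15.8 billion economic value at a 3.20 percent ratio).

def capital_ratio(economic_value, insurance_in_force):
    return economic_value / insurance_in_force

insurance_in_force = 15.8e9 / 0.0320     # implied, roughly $494 billion
ratio = capital_ratio(15.8e9, insurance_in_force)
print(f"capital ratio: {ratio:.2%}, meets 2% floor: {ratio >= 0.02}")
```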
Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT investments and how these funds are to be allocated. According to the President’s Budget for Fiscal Year 2011, the total planned spending on IT in fiscal year 2011 is an estimated $79.4 billion, a 1.2 percent increase from the fiscal year 2010 budget level of $78.4 billion. OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. To assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes chief information officers (CIO) to advise and assist agency heads in carrying out this responsibility. Another key law is the E-Government Act of 2002, which requires OMB to report annually to Congress on the status of e-government. In these reports, referred to as Implementation of the E-Government Act reports, OMB is to describe the administration’s use of e-government principles to improve government performance and the delivery of information and services to the public. To help carry out its oversight role, in 2003, OMB established the Management Watch List, which included mission-critical projects that needed to improve performance measures, project management, IT security, or overall justification for inclusion in the federal budget.
Further, in August 2005, OMB established a High-Risk List, which consisted of projects identified by federal agencies, with the assistance of OMB, as requiring special attention from oversight authorities and the highest levels of agency management. Over the past several years, we have reported and testified on OMB’s initiatives to highlight troubled IT projects, justify investments, and use project management tools. We have made multiple recommendations to OMB and federal agencies to improve these initiatives to further enhance the oversight and transparency of federal projects. Among other things, we recommended that OMB develop a central list of projects and their deficiencies and analyze that list to develop governmentwide and agency assessments of the progress and risks of the investments, identifying opportunities for continued improvement. In addition, in 2006 we also recommended that OMB develop a single aggregate list of high-risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high-risk problems. As a result, OMB started publicly releasing aggregate data on its Management Watch List and disclosing the projects’ deficiencies. Furthermore, OMB issued governmentwide and agency assessments of the projects on the Management Watch List and identified risks and opportunities for improvement, including risk management and security. More recently, to further improve the transparency and oversight of agencies’ IT investments, and to address data quality issues, in June 2009, OMB publicly deployed a Web site, known as the IT Dashboard, which replaced the Management Watch List and High-Risk List. It displays federal agencies’ cost, schedule, and performance data for the approximately 800 major federal IT investments at 27 federal agencies. According to OMB, these data are intended to provide a near real-time perspective on the performance of these investments, as well as a historical perspective. 
Further, the public display of these data is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold the government agencies accountable for results and progress. The Dashboard was initially deployed in June 2009 based on each agency’s Exhibit 53 and Exhibit 300 submissions. After the initial population of data, agency CIOs have been responsible for updating cost, schedule, and performance fields on a monthly basis, which is a major improvement from the quarterly reporting cycle OMB previously used for the Management Watch List and High-Risk List. For each major investment, the Dashboard provides performance ratings on cost and schedule, a CIO evaluation, and an overall rating, which is based on the cost, schedule, and CIO ratings. As of July 2010, the cost rating was determined by a formula that calculates the amount by which an investment’s total actual costs deviate from the total planned costs. Similarly, the schedule rating is the variance between the investment’s planned and actual progress to date. Figure 1 displays the rating scale and associated categories for cost and schedule variations. Each major investment on the Dashboard also includes a rating determined by the agency CIO, which is based on his or her evaluation of the performance of each investment. The rating is expected to take into consideration the following criteria: risk management, requirements management, contractor oversight, historical performance, and human capital. This rating is to be updated when new information becomes available that would affect the assessment of a given investment. Last, the Dashboard calculates an overall rating for each major investment. This overall rating is an average of the cost, schedule, and CIO ratings, with each representing one-third of the overall rating. However, when the CIO’s rating is lower than both the cost and schedule ratings, the CIO’s rating will be the overall rating. 
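The overall-rating rule just described can be sketched directly; the 0-to-10 numeric scale below is an assumption for illustration, not the Dashboard's documented scale:

```python
# Combine the Dashboard's three ratings per the rule described above:
# a simple average with each rating weighted one-third, except that a
# CIO rating lower than both the cost and schedule ratings becomes the
# overall rating outright. The 0-10 scale is assumed for illustration.

def overall_rating(cost, schedule, cio):
    if cio < cost and cio < schedule:
        return cio                        # CIO rating overrides the average
    return (cost + schedule + cio) / 3    # each rating weighted one-third

print(overall_rating(8, 9, 7))   # CIO lowest of the three -> 7
print(overall_rating(6, 9, 7))   # otherwise the one-third-weighted average
```

The override means a skeptical CIO evaluation can never be diluted by strong cost and schedule numbers, which matches the rule's apparent intent of letting agency leadership flag trouble early.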
Figure 2 shows the overall performance ratings of the 805 major investments on the Dashboard as of March 2011. To better manage IT investments, OMB issued guidance directing agencies to develop comprehensive policies to ensure that their major IT investments and high-risk development projects use earned value management to manage their investments. Earned value management is a technique that integrates the technical, cost, and schedule parameters of a development contract and measures progress against them. During the planning phase, a performance measurement baseline is developed by assigning and scheduling budget resources for defined work. As work is performed and measured against the baseline, the corresponding budgeted value is “earned.” Using this earned value metric, cost and schedule variances, as well as cost and time-to-complete estimates, can be determined and analyzed. Without knowing the planned cost of completed work and work in progress (i.e., the earned value), it is difficult to determine a program’s true status. Earned value provides this key information, which gives an objective view of program status and is necessary for understanding the health of a program. As a result, earned value management can alert program managers to potential problems sooner than using expenditures alone, thereby reducing the chance and magnitude of cost overruns and schedule slippages. Moreover, earned value management directly supports the institutionalization of key processes for acquiring and developing systems and the ability to effectively manage investments—areas that are often found to be inadequate on the basis of our assessments of major IT investments. In July 2010, we reported that the cost and schedule ratings on OMB’s Dashboard were not always accurate for selected agencies.
Specifically, we found that several selected investments had notable discrepancies in their cost or schedule ratings, the cost and schedule ratings did not take into consideration current performance, and the number of milestones (activities) reported by agencies varied widely. We made a number of recommendations to OMB to better ensure that the Dashboard provides meaningful ratings and accurate investment data. In particular, we recommended that OMB report on its planned Dashboard changes to improve the accuracy of performance information and provide guidance to agencies that standardizes activity reporting. OMB agreed with the two recommendations and reported it had initiated work to address them. Since our last report, OMB has initiated multiple efforts to increase the Dashboard’s value as a management and oversight tool, and has used data in the Dashboard to improve the management of federal IT investments. Specifically, OMB is focusing its efforts in four main areas: streamlining key OMB investment reporting tools, eliminating manual monthly submissions, coordinating with agencies to improve data, and improving the user interface. OMB’s plan to reform federal IT management commits OMB to streamlining two of the Dashboard’s sources of information—specifically, the OMB Exhibits 53 and 300. OMB has committed, by May 2011, to reconstruct the exhibits around distinct data elements that drive value for agencies and provide the information necessary for meaningful oversight. OMB anticipates that these changes will also alleviate the reporting burden and increase data accuracy, and that the revised exhibits will serve as its authoritative management tools. According to OMB officials, the Dashboard no longer accepts manual data submissions. Instead, the Dashboard allows only system-to-system submissions. Officials explained that this update allows the Dashboard to reject incomplete submissions and those that do not meet the Dashboard’s data validation rules.
By eliminating direct manual submissions, this effort is expected to improve the reliability of the data shown on the Dashboard. Further, OMB officials stated that they work to improve the Dashboard through routine interactions with agencies and IT portfolio management tool vendors, training courses, working groups, and data quality letters to agencies. Specifically, OMB officials stated that they held 58 TechStat reviews (discussed later in this report), hosted four online training sessions (recordings of which OMB officials stated are also available online), collaborated with several Dashboard working groups, and sent letters to agency CIOs identifying specific data quality issues on the Dashboard that their agencies could improve. Further, OMB officials explained that in December 2010, OMB analysts informed agency CIOs about specific data quality issues and provided analyses of agency data, a comparison of agency Dashboard performance with that of the rest of the government, and expected remedial actions. OMB anticipates these efforts will increase the Dashboard’s data reliability by ensuring that the agencies are aware of and are working to address issues. Finally, OMB continues to improve the Dashboard’s user interface. For instance, in November 2010, OMB updated the Dashboard to provide new views of historical data and rating changes and provide new functionality allowing agencies to make corrections to activities and performance metrics (conforming to rebaselining guidance). Officials also described a planned future update, which is intended to contain updated budget data, display corrections and changes made to activities, and reflect increased validation of agency-submitted data. OMB anticipates these efforts will increase the transparency and reliability of investment information on the Dashboard by providing agencies and users additional ways to view investment information and by improving validation of submitted data. 
Additionally, OMB uses the Dashboard to improve the management of IT investments. Specifically, OMB analysts are using the Dashboard’s investment trend data to track changes and identify issues with investments’ performance in a timely manner. OMB analysts also use the Dashboard to identify data quality issues and drive improvements to the data. The Federal CIO stated that the Dashboard has greatly improved oversight capabilities compared with those of previously used mechanisms, such as the annual capital asset plan and business case (Exhibit 300) process. Additionally, according to OMB officials, the Dashboard is one of the key sources of information that OMB analysts use to identify IT investments that are experiencing performance problems and to select them for a TechStat session—a review of selected IT investments between OMB and agency leadership that is led by the Federal CIO. As of December 2010, OMB officials stated that 58 TechStat sessions have been held with federal agencies. According to OMB, these sessions have enabled the government to improve or terminate IT investments that are experiencing performance problems. Information from the TechStat sessions and the Dashboard was used by OMB to identify, halt, and review all federal financial IT systems modernization projects. Furthermore, according to OMB, these sessions and other OMB management reviews have resulted in a $3 billion reduction in life-cycle costs, as of December 2010. OMB officials stated that, as of December 2010, 11 investments have been reduced in scope and 4 have been terminated as a result of these sessions. For example, the TechStat on the Department of Housing and Urban Development’s Transformation Initiative investment found that the department lacked the necessary skills and resources and would not be positioned to succeed.
As a result, the department agreed to reduce the number of projects from 29 to 7 and to limit fiscal year 2010 funds for these 7 priority projects to $85.7 million (from the original $138 million). The TechStat on the National Archives and Records Administration’s Electronic Records Archives investment resulted in six corrective actions, including halting fiscal year 2012 development funding pending the completion of a strategic plan. According to OMB officials, OMB and agency CIOs also used the Dashboard data and TechStat sessions, in addition to other forms of research (such as reviewing program documentation, news articles, and inspector general reports), to identify 26 high-risk IT projects and, in turn, coordinate with agencies to develop corrective actions for these projects at TechStat sessions. For example, the Department of the Interior is to establish incremental deliverables for its Incident Management Analysis and Reporting System, which will accelerate delivery of services that will help 6,000 law enforcement officers protect the nation’s natural resources and cultural monuments. While the efforts previously described are important steps to improving the quality of the information on the Dashboard, cost and schedule performance data inaccuracies remain. The Dashboard’s cost and schedule ratings were not always reflective of the true performance for selected investments from the five agencies in our review. More specifically, while the Dashboard is intended to present near real-time performance, the ratings did not always reflect the current performance of these investments. Dashboard rating inaccuracies were the result of weaknesses in agency practices, such as the Dashboard not reflecting baseline changes and the reporting of erroneous data, as well as limitations of the Dashboard’s calculations. 
Until the agencies submit complete, reliable, and timely data to the Dashboard and OMB revises its Dashboard calculations, performance ratings will continue to be inaccurate and may not reflect current program performance. Most of the Dashboard’s cost ratings of the nine selected investments did not match the results of our analyses over a 3-month period. Specifically, four investments had inaccurate ratings for 2 or more months, and two were inaccurate for 1 month, while three investments were accurately depicted for all 3 months. For example, Intelligent Disability’s cost performance was rated “red” on the Dashboard for July 2010 and “green” for August 2010, whereas our analysis showed its current cost performance was “yellow” for those months. Further, Medical Legacy’s cost ratings were “red” on the Dashboard for June through August 2010, while the department’s internal rating showed that the cost performance for 105 of the 107 projects that constitute the investment was “green” in August 2010; similar ratings were also seen for June and July 2010. Overall, the Dashboard’s cost ratings generally showed poorer performance than our assessments. Figure 3 shows the comparison of the selected investments’ Dashboard cost ratings with GAO’s ratings based on analysis of agency data for the months of June 2010 through August 2010. Regarding schedule, most of the Dashboard’s ratings of the nine selected investments did not match the results of our analyses over a 3-month period. Specifically, seven investments had inaccurate ratings for 2 or more months, and two were inaccurate for 1 month. For example, Automatic Dependent Surveillance-Broadcast’s schedule performance was rated “green” on the Dashboard in July 2010, but our analysis showed its current performance was “yellow” that month. Additionally, the “green” schedule ratings for En Route Automation Modernization did not represent how this program is actually performing. 
Specifically, we recently reported that the program is experiencing significant schedule delays, and the CIO evaluation of the program on the Dashboard has indicated schedule delays since February 2010. As with the cost ratings, the Dashboard’s schedule ratings generally showed poorer performance than our assessments. Figure 4 shows the comparison of the selected investments’ Dashboard schedule ratings with GAO’s ratings based on analysis of agency data for the months of June 2010 through August 2010. OMB guidance, as of June 2010, states that agencies are responsible for maintaining consistency between the data in their internal systems and the data on the Dashboard. Furthermore, the guidance states that agency CIOs should update their evaluation on the Dashboard as soon as new information becomes available that affects the assessment of a given investment. According to our assessment of the nine selected investments, agencies did not always follow this guidance. In particular, there were four primary weaknesses in agency practices that resulted in inaccurate cost and schedule ratings on the Dashboard: the investment baseline on the Dashboard was not reflective of the investment’s actual baseline, agencies did not report data to the Dashboard, agencies reported erroneous data, and unreliable earned value data were reported to the Dashboard. In addition, two limitations of OMB’s Dashboard calculations contributed to ratings inaccuracies: a lack of emphasis on current performance and an understatement of schedule variance. Table 1 shows the causes of inaccurate ratings for the selected investments. Inconsistent program baseline: Three of the selected investments reported baselines on the Dashboard that did not match the actual baselines tracked by the agencies. Agency officials responsible for each of these investments acknowledged this issue. 
For example, according to Modernized e-File officials, the investment was in the process of a rebaseline in June 2010; thus, officials were unable to update the baseline on the Dashboard until July 2010. For another investment—HealtheVet Core—officials stated that it was stopped in August, and thus the HealtheVet Core baseline on the Dashboard is incorrect. As such, the CIO investment evaluation should have been updated to reflect that the investment was stopped. In June 2010, OMB issued new guidance on rebaselining, which stated that agencies should update investment baselines on the Dashboard within 30 days of internal approval of a baseline change and that this update will be considered notification to OMB. However, agencies still must go through their internal processes to approve a new baseline, and during this process the baseline on the Dashboard will be inaccurate. As such, investment CIO ratings should disclose that performance data on the Dashboard are unreliable because of baseline changes. However, the CIO evaluation ratings for these investments did not include such information. Without proper disclosure of pending baseline changes and resulting data reliability weaknesses, OMB and other external oversight groups will not have the appropriate information to make informed decisions about these investments. Missing data submissions: Three investments did not upload complete and timely data submissions to the Dashboard. For example, DHS officials did not submit data to the Dashboard for the C4ISR investment from June through August 2010. According to DHS officials, C4ISR investment officials did not provide data for DHS to upload for these months. Further compounding the performance rating issues of this investment is that in March 2010, inaccurate data were submitted for nine of its activities; these data were not corrected until September 2010. 
Until officials submit complete, accurate, and timely data to the Dashboard, performance ratings may continue to be inaccurate. Erroneous data submissions: Seven investments reported erroneous data to the Dashboard. For example, SSA submitted start dates for Intelligent Disability and Disability Case Processing System activities that had not actually started yet. SSA officials stated that, because of SSA’s internal processes, their start dates always correspond to the beginning of the fiscal year. In addition, according to a Treasury official, Internal Revenue Service officials for the Modernized e-File investment provided inaccurate data for the investment’s “actual percent complete” fields for some activities. Until officials submit accurate data to the Dashboard, performance ratings may continue to be inaccurate. Unreliable source data: Treasury’s Payment Application Modernization investment used unreliable earned value data as the sole source of data on the Dashboard. As such, this raises questions about the accuracy of the performance ratings reported on the Dashboard. Investment officials stated that they have taken steps to address weaknesses with the earned value management system and are currently evaluating other adjustments to investment management processes. However, without proper disclosure about data reliability in the CIO assessment, OMB and other external oversight groups will not have the appropriate information to make informed decisions about this investment. Additionally, two limitations in the Dashboard method for calculating ratings contributed to inaccuracies: Current performance calculation: The Dashboard is intended to represent near real-time performance information on all major IT investments, as previously discussed. To OMB’s credit, in July 2010, it updated the Dashboard’s cost and schedule calculations to include both ongoing and completed activities in order to accomplish this. 
However, the performance of ongoing activities is combined with the performance of completed activities, which can mask recent performance. As such, the cost and schedule performance ratings on the Dashboard may not always reflect current performance. Until OMB updates the Dashboard’s cost and schedule calculations to focus on current performance, the performance ratings may not reflect performance problems that the investments are presently facing, and OMB and agencies are thus missing an opportunity to identify solutions to such problems. Schedule variance calculation: Another contributing factor to certain schedule inaccuracies is that OMB’s schedule calculation for in-progress activities understates the schedule variance for activities that are overdue. Specifically, OMB’s schedule calculation does not recognize the full variance of an overdue activity until it has actually completed. For example, as of September 13, 2010, the Dashboard reported a 21-day schedule variance for an En Route Automation Modernization activity that was actually 256 days overdue. Until OMB updates its in-progress schedule calculation to be more reflective of the actual schedule variance of ongoing activities, schedule ratings for these activities may be understated. The Dashboard has enhanced OMB’s and agency CIOs’ oversight of federal IT investments. Among other things, performance data from the Dashboard are being used to identify poorly performing investments for executive leadership review sessions. Since the establishment of the Dashboard, OMB has worked to continuously refine it, with multiple planned improvement efforts under way for improving the data quality and Dashboard usability. However, the quality of the agency data reported to the Dashboard continues to be a challenge. 
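The schedule-variance understatement described above (an overdue, in-progress activity's full slip is not recognized until it completes) can be sketched with a simplified model. This is a hypothetical illustration, not OMB's actual Dashboard formula; the function names and the zero-until-completion simplification are assumptions made for clarity.

```python
from datetime import date

def variance_at_completion_only(planned_finish, actual_finish):
    """Slip is recognized only once the activity finishes (simplified model).

    Hypothetical illustration: an in-progress activity reports no slip,
    which understates the variance of overdue work. (OMB's actual formula
    recognized some, but not all, of the slip.)
    """
    if actual_finish is None:  # still in progress
        return 0
    return (actual_finish - planned_finish).days

def variance_while_overdue(planned_finish, as_of, actual_finish=None):
    """Recognize the delay already elapsed for an overdue, open activity."""
    end = actual_finish if actual_finish is not None else as_of
    return max(0, (end - planned_finish).days)

# An activity planned to finish January 1, 2010, still open on September 13, 2010:
planned = date(2010, 1, 1)
as_of = date(2010, 9, 13)
print(variance_at_completion_only(planned, None))  # 0 days of slip reported
print(variance_while_overdue(planned, as_of))      # 255 days of slip reported
```

As the En Route Automation Modernization example shows (21 days reported versus 256 days actual), a calculation along the lines of the second function would surface delays as they accrue rather than after the fact.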
Specifically, the cost and schedule ratings on the Dashboard were not always accurate in depicting current program performance for most of the selected investments, which is counter to OMB’s goal to report near real-time performance. The Dashboard rating inaccuracies were due, in part, to weaknesses in agencies’ practices and limitations in OMB’s calculations. More specifically, the agency practices—including the inconsistency between Dashboard and program baselines, reporting of erroneous data, and unreliable source data—and OMB’s formulas to track current performance have collectively impaired data quality. Until agencies provide more reliable data and OMB improves the calculations of the ratings on the Dashboard, the accuracy of the ratings will continue to be in question and the ratings may not reflect current program performance. To better ensure that the Dashboard provides accurate cost and schedule performance ratings, we are making eleven recommendations to the heads of each of the five selected agencies. Specifically, we are recommending that: The Secretary of the Department of Homeland Security direct the CIO to ensure that investment data submissions include complete and accurate investment information for all required fields; comply with OMB’s guidance on updating the CIO rating as soon as new information becomes available that affects the assessment of a given investment, including when an investment is in the process of a rebaseline; and work with C4ISR officials to comply with OMB’s guidance on updating investment cost and schedule data on the Dashboard at least monthly. The Secretary of the Department of Transportation direct the CIO to work with Automatic Dependent Surveillance-Broadcast officials to comply with OMB’s guidance on updating investment cost and schedule data on the Dashboard at least monthly. 
The Secretary of the Department of the Treasury direct the CIO to comply with OMB's guidance on updating the CIO rating as soon as new information becomes available that affects the assessment of a given investment, including when an investment is in the process of a rebaseline; work with Modernized e-File officials to report accurate actual percent complete data for each of the investment's activities; and work with Payment Application Modernization officials to disclose the extent of this investment's data reliability issues in the CIO rating assessment on the Dashboard. The Secretary of the Department of Veterans Affairs direct the CIO to comply with OMB's guidance on updating the CIO rating as soon as new information becomes available that affects the assessment of a given investment, including when an investment is in the process of a rebaseline; work with Medical Legacy officials to comply with OMB's guidance on updating investment cost and schedule data on the Dashboard at least monthly; and ensure Medical Legacy investment data submitted to the Dashboard are consistent with the investment's internal performance information. The Commissioner of the Social Security Administration direct the CIO to ensure that data submissions to the Dashboard include accurate investment information for all required fields. In addition, to better ensure that the Dashboard provides meaningful ratings and reliable investment data, we are recommending that the Director of OMB direct the Federal CIO to take the following two actions: develop cost and schedule rating calculations that better reflect current investment performance, and update the Dashboard's schedule calculation for in-progress activities to more accurately represent the variance of ongoing, overdue activities. We provided a draft of our report to the five agencies in our review and to OMB. In commenting on the draft, four agencies generally concurred with our recommendations.
One agency, the Department of Transportation, agreed to consider our recommendation. OMB agreed with one of our recommendations and disagreed with the other. In addition, OMB raised concerns about the methodology used in our report. Agencies also provided technical comments, which we incorporated as appropriate. Each agency’s comments are discussed in more detail below. In e-mail comments on a draft of the report, DHS’s Departmental Audit Liaison stated that the department concurred with our recommendations. In e-mail comments, DOT’s Director of Audit Relations stated that DOT would consider our recommendation; however, he also stated that the department disagreed with the way its investments were portrayed in the draft. Specifically, department officials stated that our assessment was not reasonable because our methodology only incorporated the most recent 6 months of performance rather than using cumulative investment performance. As discussed in this report, combining the performance of ongoing and completed activities can mask recent performance. As such, we maintain that our methodology is a reasonable means of deriving near real-time performance, which the Dashboard is intended to represent. In oral comments, Treasury’s Chief Architect stated that the department generally concurred with our recommendations and added that the department would work to update its Dashboard ratings for the two selected investments. In written comments, VA’s Chief of Staff stated that the department generally concurred with our recommendations and agreed with our conclusions. Further, he outlined the department’s planned process improvements to address the weaknesses identified in this report. VA’s comments are reprinted in appendix III. In written comments, SSA’s Deputy Chief of Staff stated that the Administration agreed with our recommendation and had taken corrective actions intended to prevent future data quality errors. SSA’s comments are reprinted in appendix IV. 
Officials from OMB’s Office of E-Government & Information Technology provided the following oral comments on the draft: OMB officials agreed with our recommendation to update the Dashboard’s schedule calculation for in-progress activities to more accurately represent the variance of ongoing, overdue activities. These officials stated that the agency has long-term plans to update the Dashboard’s calculations, which they believe will provide a solution to the concern identified in this report. OMB officials disagreed with our recommendation to develop cost and schedule rating calculations that better reflect current investment performance. According to OMB, real-time performance is always reflected in the ratings since current investment performance data are uploaded to the Dashboard on a monthly basis. Regarding OMB’s comments, our point is not that performance data on the Dashboard are infrequently updated, but that the use of historical data going back to an investment’s inception can mask more recent performance. For this reason, current investment performance may not always be as apparent as it should be, as this report has shown. Until the agency places less emphasis on the historical data factored into the Dashboard’s calculations, it will be passing up an opportunity to more efficiently and effectively identify and oversee investments that either currently are or soon will be experiencing problems. OMB officials also described the agency’s plans for enhancing Dashboard data quality and performance calculations. According to OMB, plans were developed in February 2011 with stakeholders from other agencies to standardize the reporting structure for investment activities. Further, OMB officials said that their plans also call for the Dashboard’s performance calculations to be updated to more accurately reflect activities that are delayed. In doing so, OMB stated that agencies will be expected to report new data elements associated with investment activities. 
Additionally, OMB officials noted that new agency requirements associated with these changes will be included in key OMB guidance (Circular A-11) no later than September 2011. OMB officials also raised two concerns regarding our methodology. Specifically, OMB stated that our reliance on earned value data as the primary source for determining investment performance was questionable. These officials stated that, on the basis of their experience collecting earned value data, the availability and quality of these data vary significantly across agencies. As such, according to these officials, OMB developed its Dashboard cost and schedule calculations to avoid relying on earned value data. We acknowledge that the quality of earned value data can vary. As such, we took steps to ensure that the data we used were reliable enough to evaluate the ratings on the Dashboard, and discounted the earned value data of one of the selected investments after determining its data were insufficient for our needs. While we are not critical of OMB’s decision to develop its own method for calculating performance ratings, we maintain that our use of earned value data is sound. Furthermore, earned value data were not the only source for our analysis; we also based our findings on other program management documentation, such as inspector general reports and internal performance management system performance ratings, as discussed in appendix I. OMB also noted that, because we used earned value data to determine investment performance, our ratings were not comparable to the ratings on the Dashboard. Specifically, OMB officials said that the Dashboard requires reporting of all activities under an investment, including government resources or operations and maintenance activities. OMB further said that this is more comprehensive than earned value data, which only account for contractor-led development activities. 
We acknowledge and support the Dashboard’s requirement for a comprehensive accounting of investment performance. Further, we agree that earned value data generally only cover development work associated with the investments (thus excluding other types of work, such as planning and operations and maintenance). For this reason, as part of our methodology, we specifically selected investments for which the majority of the work being performed was development work. We did this because earned value management is a proven technique for providing objective quantitative data on program performance, and alternative approaches do not always provide a comparable substitute for such data. Additionally, as discussed above, we did not base our analysis solely upon earned value data, but evaluated other available program performance documentation to ensure that we captured performance for the entire investment. As such, we maintain that the use of earned value data (among other sources) and the comparison of selected investments’ Dashboard ratings with our analyses resulted in a fair assessment. We are sending copies of this report to interested congressional committees; the Secretaries of the Departments of Homeland Security, Transportation, the Treasury, and Veterans Affairs, as well as the Commissioner of the Social Security Administration; and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
Our objectives were to (1) determine what efforts the Office of Management and Budget (OMB) has under way to improve the Dashboard and the ways in which it is using data from the Dashboard to improve information technology (IT) management and (2) examine the accuracy of the cost and schedule performance ratings on OMB’s Dashboard. To address the first objective, we examined related OMB guidance and documentation to determine the ongoing and planned improvements OMB has made to the Dashboard and discussed these improvements with OMB officials. Additionally, we evaluated OMB documentation of current and planned efforts to oversee and improve the management of IT investments and the Dashboard, such as memos detailing the results of investment management review sessions, and interviewed OMB officials regarding these efforts. To address the second objective, we selected 5 agencies and 10 investments to review. To select these agencies and investments, we first identified the 12 agencies with the largest IT budgets as reported in OMB’s fiscal year 2011 Exhibit 53. This list of agencies was narrowed down to 10 because 2 agencies did not have enough investments that met our criteria (as defined in the following text). We then excluded agencies that were assessed in our previous review of the Dashboard. As a result, we selected the Departments of Homeland Security (DHS), Transportation (DOT), the Treasury, and Veterans Affairs (VA), and the Social Security Administration (SSA). In selecting the specific investments at each agency, we identified the 10 largest investments that, according to the fiscal year 2011 budget, were spending more than half of their budget on IT development, modernization, and enhancement work. To narrow this list, we excluded investments whose four different Dashboard ratings (overall, cost, schedule, and chief information officer) were generally “red” because they were likely already receiving significant scrutiny. 
We then selected 2 investments per agency. As part of this selection process, we considered the following: investments that use earned value management techniques to monitor cost and schedule performance, and investments whose four different Dashboard ratings appeared to be in conflict (e.g., cost and schedule ratings were “green,” yet the overall rating was “red”). The 10 final investments were DHS’s U.S. Citizenship and Immigration Service (USCIS)-Transformation program and U.S. Coast Guard-Command, Control, Communications, Computers, Intelligence, Surveillance & Reconnaissance (C4ISR) program; DOT’s Automatic Dependent Surveillance-Broadcast system and En Route Automation Modernization system; Treasury’s Modernized e-File system and Payment Application Modernization investment; VA’s HealtheVet Core and Medical Legacy investments; and SSA’s Disability Case Processing System and Intelligent Disability program. The 5 agencies account for 22 percent of the planned IT spending for fiscal year 2011. The 10 investments selected for case study represent about $1.27 billion in total planned spending in fiscal year 2011. To assess the accuracy of the cost and schedule performance ratings on the Dashboard, we evaluated earned value data of 7 of the selected investments to determine their current cost and schedule performances and compared them with the performance ratings on the Dashboard. The investment earned value data were contained in contractor earned value management performance reports obtained from the programs. To perform the current performance analysis, we averaged the cost and schedule variances over the last 6 months and compared the averages with the performance ratings on the Dashboard. 
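The variance computation and 6-month averaging described above can be sketched as follows. The formulas are standard earned value management conventions; the function names, thresholds, and sample figures are illustrative assumptions, not GAO's actual analysis code.

```python
def cost_variance_pct(earned_value, actual_cost):
    """CV% = (EV - AC) / EV; negative values indicate cost overruns."""
    return (earned_value - actual_cost) / earned_value * 100

def schedule_variance_pct(earned_value, planned_value):
    """SV% = (EV - PV) / PV; negative values indicate schedule slippage."""
    return (earned_value - planned_value) / planned_value * 100

def six_month_average(monthly_variances):
    """Average the most recent 6 monthly variance percentages."""
    recent = monthly_variances[-6:]
    return sum(recent) / len(recent)

# Illustrative monthly schedule variance percentages for one investment:
monthly_sv = [-2.0, -3.5, -1.0, -4.0, -6.5, -5.0, -7.5, -8.0]
print(round(six_month_average(monthly_sv), 2))  # -5.33 (worsening trend)
```

Averaging only the most recent months, rather than cumulative performance since inception, is what keeps the assessment close to current performance; a cumulative calculation would dilute the recent slippage in this example with the earlier, smaller variances.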
To assess the accuracy of the cost data, we compared them with data from other available supporting program documents, including program management reports and inspector general reports; electronically tested the data to identify obvious problems with completeness or accuracy; and interviewed agency and program officials about the earned value management systems. For the purposes of this report, we determined that the cost data for these 7 investments were sufficiently reliable. For the 3 remaining investments, we did not use earned value data because the investments either did not measure performance using earned value management or the earned value data were determined to be insufficiently reliable. Instead, we used other program documentation, such as inspector general reports and internal performance management system performance ratings, to assess the accuracy of the cost and schedule ratings on the Dashboard. We did not test the adequacy of the agency or contractor cost-accounting systems. Our evaluation of these cost data was based on what we were told by each agency and the information it could provide. We also interviewed officials from OMB and the selected agencies and reviewed OMB guidance to obtain additional information on OMB’s and agencies’ efforts to ensure the accuracy of the data used to rate investment performance on the Dashboard. We used the information provided by OMB and agency officials to identify the factors contributing to inaccurate cost and schedule performance ratings on the Dashboard. We conducted this performance audit from July 2010 to March 2011 at the selected agencies’ offices in the Washington, D.C., metropolitan area. Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Below are descriptions of each of the selected investments that are included in this review. USCIS-Transformation is a bureauwide program to move from a paper-based filing system to a centralized, consolidated, electronic adjudication filing system. The C4ISR Common Operating Picture collects and fuses relevant information for Coast Guard commanders to allow them to efficiently exercise authority, while directing and monitoring all assigned forces and first responders, across the range of Coast Guard operations. The Automatic Dependent Surveillance-Broadcast system is intended to be an underlying technology in the Federal Aviation Administration's plan to transform air traffic control from the current radar-based system to a satellite-based system. The Automatic Dependent Surveillance-Broadcast system is to bring the precision and reliability of satellite-based surveillance to the nation's skies. The En Route Automation Modernization system is to replace the current computer system used at the Federal Aviation Administration's high-altitude en route centers. The current system is considered the backbone of the nation's airspace system and processes flight radar data, provides communications, and generates display data to air traffic controllers. The current Modernized e-File system is a Web-based platform that supports electronic tax returns and annual information returns for large corporations and certain tax-exempt organizations, as well as individual Form 1040 and other schedules and supporting forms. This system is being updated to include the electronic filing of the more than 120 remaining 1040 forms and schedules. Combining these efforts is intended to streamline tax return filing processes and reduce the costs associated with paper tax returns.
The Payment Application Modernization investment is an effort to modernize the current mainframe-based software applications that are used to disburse approximately 1 billion federal payments annually. The existing payment system is a configuration of numerous software applications that generate check, wire transfer, and Automated Clearing House payments for federal program agencies, including the Social Security Administration, Internal Revenue Service, Department of Veterans Affairs, and others. HealtheVet Core was a set of initiatives to improve health care delivery, provide the platform for health information sharing, and update outdated technology. The investment was to support veterans, their beneficiaries, and providers by advancing the use of health care information and leading edge IT to provide a patient-centric, longitudinal, computable health record. According to department officials, the HealtheVet Core investment was “stopped” in August 2010. The Medical Legacy program is an effort to provide software applications necessary to maintain and modify the department’s Veterans Health Information Systems and Technology Architecture. The Disability Case Processing System is intended to provide common functionality and consistency to support the business processes of each state’s Disability Determination Services. Ultimately, it is to provide analysis functionality, integrate health IT, improve case processing, simplify maintenance, and reduce infrastructure growth costs. The Intelligent Disability program is intended to reduce the backlog of disability claims, develop an electronic case processing system, and support efficiencies in the claims process. Table 2 provides additional details for each of the selected investments in our review. 
In addition to the contact named above, the following staff also made key contributions to this report: Carol Cha, Assistant Director; Shannin O’Neill, Assistant Director; Alina Johnson; Emily Longcore; Lee McCracken; and Kevin Walsh.
Each year the federal government spends billions of dollars on information technology (IT) investments. Given the importance of oversight, the Office of Management and Budget (OMB) established a public Web site, referred to as the IT Dashboard, that provides detailed information on about 800 federal IT investments, including assessments of actual performance against cost and schedule targets (referred to as ratings). In the second of a series of Dashboard reviews, GAO was asked to (1) determine OMB's efforts to improve the Dashboard and how it is using data from the Dashboard, and (2) examine the accuracy of the Dashboard's cost and schedule performance ratings. To do so, GAO analyzed documentation on OMB oversight efforts and Dashboard improvement plans, compared the performance of 10 major investments from five agencies with large IT budgets against the ratings on the Dashboard, and interviewed OMB and agency officials. Since GAO's first review, in July 2010, OMB has initiated several efforts to increase the Dashboard's value as an oversight tool, and has used the Dashboard's data to improve federal IT management. These efforts include streamlining key OMB investment reporting tools, eliminating manual monthly submissions, coordinating with agencies to improve data, and improving the Dashboard's user interface. Recent changes provide new views of historical data and rating changes. OMB anticipates that these efforts will increase the reliability of the data on the Dashboard. To improve IT management, OMB analysts use Dashboard data to track investment changes and identify issues with performance. OMB officials stated that they use these data to identify poorly performing IT investments for review sessions by OMB and agency leadership. OMB reported that these sessions and other management reviews have resulted in a $3 billion reduction in life-cycle costs, as of December 2010. 
While the efforts above as well as initial actions taken to address issues GAO identified in its prior review--such as OMB's updated ratings calculations to factor in ongoing milestones to better reflect current status--have contributed to data quality improvements, performance data inaccuracies remain. The ratings of selected IT investments on the Dashboard did not always accurately reflect current performance, which is counter to the Web site's purpose of reporting near real-time performance. Specifically, GAO found that cost ratings were inaccurate for six of the investments that GAO reviewed and schedule ratings were inaccurate for nine. For example, the Dashboard rating for a Department of Homeland Security investment reported significant cost variances for 3 months in 2010; however, GAO's analysis showed lesser variances from cost targets for the same months. Conversely, a Department of Transportation investment was reported as on schedule on the Dashboard, which does not reflect the significant delays GAO has identified in recent work. These inaccuracies can be attributed to weaknesses in how agencies report data to the Dashboard, such as providing erroneous data submissions, as well as limitations in how OMB calculates the ratings. Until the selected agencies and OMB resolve these issues, ratings will continue to often be inaccurate and may not reflect current program performance. GAO is recommending that selected agencies take steps to improve the accuracy and reliability of Dashboard information and OMB improve how it rates investments relative to current performance and schedule variance. Agencies generally concurred with the recommendations; OMB did not concur with the first recommendation but concurred with the second. GAO maintains that until OMB implements both, performance may continue to be inaccurately represented on the Dashboard.
About 90 percent of the costs associated with GWOT fall into two accounts—military personnel and operation and maintenance. Military personnel funds provided to support GWOT cover the pay and allowances of mobilized reservists as well as special payments or allowances for all qualifying military personnel, both active and reserve, such as Imminent Danger Pay and Family Separation Allowance. Operation and maintenance funds provided to support GWOT are used for a variety of purposes, including transportation of personnel, goods, and equipment; unit operating support costs; and intelligence, communications, and logistics support. We have reported on several occasions, including in 1999 and 2003, that estimating the cost of ongoing military operations is difficult. This is because operational requirements can differ substantially during the fiscal year from what was assumed in preparing budget estimates. The result can be that operations can cost more or less than originally estimated. If operations cost more than originally estimated, DOD may use a number of authorities provided to it, including transferring and reprogramming funds and reducing or deferring planned spending for peacetime operations, to meet its needs. DOD uses “transfer authority” to shift funds between appropriation accounts, for example, between military personnel and operation and maintenance. Transfer authority is granted by the Congress to DOD usually pursuant to specific provisions in authorization or appropriation acts. The ability to shift funds within a specific appropriation account, like operation and maintenance, is referred to as “reprogramming.” In general, DOD does not need statutory authority to reprogram funds within an account as long as the funds to be spent would be used for the same general purpose of the appropriation and the reprogramming does not violate any other specific statutory requirements or limitations. 
For example, DOD could reprogram operation and maintenance funds originally appropriated for training to cover increased fuel costs because both uses meet the general purpose of the operation and maintenance account, as long as the shift does not violate any other specific statutory prohibition or limitation. In fiscal years 2004 and 2005, the military services received about $52.4 billion and about $62.1 billion, respectively, in supplemental appropriations for GWOT military personnel and operation and maintenance expenses. The Army, Air Force, and Navy also received funds for GWOT through their annual appropriations. However, DOD and the military services have lost visibility over these funds provided through annual appropriations, including knowing how much, if any, was used to support GWOT in fiscal years 2004 and 2005. As shown in table 1, DOD received funding through supplemental appropriations to support GWOT in both fiscal years 2004 and 2005. To pay for the military personnel and operation and maintenance costs of GWOT in fiscal year 2004, the Congress appropriated about $52.4 billion to DOD. Of the $52.4 billion, the Congress provided the military services about $50.4 billion in the Emergency Supplemental Appropriations Act for Defense and for the Reconstruction of Iraq and Afghanistan, 2004. In addition, the services used $120 million of the funds provided in late fiscal year 2004 through Title IX of the Department of Defense Appropriations Act, 2005. DOD also transferred about $1.9 billion from funds originally appropriated to the Iraqi Freedom Fund. The Iraqi Freedom Fund provides 2-year funds that can be transferred to the services’ accounts for additional expenses for ongoing military operations in Iraq, operations authorized by the Authorization for Use of Military Force, and other operations and related activities in support of GWOT. 
Of the $1.9 billion, about $860 million was provided through the Emergency Wartime Supplemental Appropriations Act, 2003, while about $1.1 billion was provided through the Emergency Supplemental Appropriations Act for Defense and for the Reconstruction of Iraq and Afghanistan, 2004.

For fiscal year 2005, the military services had about $62.1 billion available to pay for the military personnel and operation and maintenance costs of GWOT. Of this, the Congress appropriated about $44.5 billion through the Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Tsunami Relief, 2005. The military services also had the remaining balance—about $17.3 billion—that was provided through Title IX of the Department of Defense Appropriations Act, 2005, and was available for obligation in fiscal year 2005 to help pay for the military personnel and operation and maintenance costs of GWOT. In addition, as of July 2005, DOD had transferred about $348 million from funds originally appropriated to the Iraqi Freedom Fund.

In addition to funds DOD received through supplemental appropriations for GWOT, beginning in fiscal year 2003, the administration increased DOD’s annual appropriation request by more than $10 billion per year. DOD described these funds as being intended to support GWOT. According to a representative from the Office of the Under Secretary of Defense (Comptroller), in December 2001 the President directed that his annual budget submission for DOD be increased by about $10 billion annually to support GWOT. Consequently, Program Budget Decision 736, entitled Continuing the War on Terrorism and dated January 31, 2002, was approved by the Under Secretary of Defense (Comptroller).
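The funding totals above can be cross-checked with simple arithmetic. A minimal sketch, using the rounded dollar amounts quoted in the text; the labels are shorthand for the appropriations described above, not official account names:

```python
# Cross-check of the services' GWOT supplemental funding for military
# personnel and O&M, in billions of dollars, using the rounded "about"
# figures quoted in the text (so totals match only approximately).
fy2004 = {
    "FY2004 emergency supplemental": 50.4,
    "Title IX of FY2005 DOD Appropriations Act (used in FY2004)": 0.120,
    "transfers from the Iraqi Freedom Fund": 1.9,
}
fy2005 = {
    "FY2005 emergency supplemental": 44.5,
    "Title IX balance available in FY2005": 17.3,
    "transfers from the Iraqi Freedom Fund (as of July 2005)": 0.348,
}
for year, parts in (("FY2004", fy2004), ("FY2005", fy2005)):
    print(f"{year}: ${sum(parts.values()):.1f} billion")
```

The fiscal year 2004 components sum to about $52.4 billion and the fiscal year 2005 components to about $62.1 billion, consistent with the totals reported above.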
Program Budget Decision 736 provided for increasing DOD’s annual budget request in the amount of more than $10 billion per year plus inflation in fiscal years 2003 through 2007 to enhance the department’s efforts to respond to, or protect against, acts or threatened acts of terrorism against the United States. According to a DOD representative, unless action is taken to reduce these funds in future budgets, Program Budget Decision 736 provides for a permanent increase of about $10 billion per year plus inflation to DOD’s annual budget request to support military operations in the war on terrorism.

As shown in table 2, in fiscal years 2004 and 2005, the Army, Air Force, and Navy received additional funds in their annual appropriations—a total of about $7.9 billion in fiscal year 2004 and about $7.6 billion in fiscal year 2005—which DOD described as for support of military operations in the war on terrorism. According to DOD representatives, the Marine Corps did not receive an increase to its annual appropriation through Program Budget Decision 736.

Under Program Budget Decision 736, a number of DOD programs were to receive increases in their proposed annual budgets in both fiscal years 2004 and 2005. For example, in fiscal year 2004, Program Budget Decision 736 indicates that about $2.1 billion was for counterterrorism and force protection efforts, about $1.2 billion for combat air patrols over U.S. cities, and about $600 million for such things as depot maintenance and spare parts. Program Budget Decision 736 indicates funds were to be provided to these programs and others in fiscal years 2005 through 2007 as well. According to representatives of the Office of the Under Secretary of Defense (Comptroller), some of the funds in Program Budget Decision 736 were intended to cover costs associated with Operation Noble Eagle while others were intended to cover costs associated with Operation Enduring Freedom.
For fiscal years 2004 and 2005, an Office of the Under Secretary of Defense (Comptroller) representative stated the additional funds provided through Program Budget Decision 736 were in the military services’ various appropriations accounts. However, the Office of the Under Secretary of Defense (Comptroller) has no specific information about which programs or activities actually received the funds or how they were eventually expended, including whether they were used in support of GWOT. Once the services received these additional funds, they allocated them to their appropriations accounts based on their judgment of where the funds were most needed. DOD’s accounting systems do not separately identify which appropriations accounts received these funds, and there are no reporting requirements for DOD to identify to which appropriation accounts the funds were allocated. While the military services also stated they received their share of the Program Budget Decision 736 funds as part of their fiscal year 2004 and fiscal year 2005 annual appropriations and that some of the funds were used for war-related expenses, they too could not identify which programs or activities received the funds and could not document what portion of these funds was used for war-related expenses. As a result, although DOD requested these funds to support GWOT, DOD and the military services cannot be certain that they were actually used to support GWOT-related activities.

In developing the fiscal year 2005 request for supplemental appropriations to support GWOT, DOD took steps to adjust the request to reflect the receipt of funds provided through Program Budget Decision 736.
In a November 2004 memorandum requesting that all DOD components provide their GWOT supplemental appropriations estimates for fiscal year 2005, the Office of the Under Secretary of Defense (Comptroller) stated the following with respect to funds that had already been provided through Program Budget Decision 736:

Funding for GWOT missions previously added to the baseline budget (e.g., Program Budget Decision 736, Continuing the War on Terrorism) should be explicitly identified as a reduction to funding requests in those areas, as appropriate. Component requests must consider that some funding is already in the baseline accounts. Program Budget Decision 736 provided funds for antiterrorism, continental United States combat air patrols, and force protection. The components’ submissions should show the total requirement and note the level of funding already in the baseline for this purpose. The supplemental request will net out the available funding.

In the November 2004 memorandum the Office of the Under Secretary of Defense (Comptroller) further stated that the emergency supplemental appropriations request will address the incremental costs above the baseline funding needed to support specific forces and capabilities required to execute Operation Iraqi Freedom, Operation Enduring Freedom, and portions (to be determined) of Operation Noble Eagle. DOD described Operation Noble Eagle as including defending the United States from airborne attacks and maintaining U.S. air sovereignty. This operation had been included in the supplemental appropriations request for fiscal year 2004.

None of the military services provided the information requested in the November 2004 memorandum; instead, the services requested funds for Operation Noble Eagle. Service budget representatives told us that Program Budget Decision 736 funds were considered as base program (e.g., annual appropriations) issues and not supplemental candidates.
According to service budget representatives, they requested funds for Operation Noble Eagle in fiscal year 2005 that were in addition to the funds provided through Program Budget Decision 736. For example, the Navy requested $53.3 million for incremental requirements above its baseline request. The Army requested more than $1 billion in incremental requirements above its baseline. However, in preparing the fiscal year 2005 supplemental appropriations budget request, the Office of Management and Budget did not include Operation Noble Eagle in the President’s budget request because funds had already been included in DOD’s annual appropriation, as described in Program Budget Decision 736.

In fiscal year 2004, the difference between the supplemental appropriations available to the military services for GWOT military personnel and operation and maintenance expenses and their reported obligations varied by service. For military personnel, the Navy and Marine Corps reported more in obligations than they received in supplemental appropriations, while for operation and maintenance each of the military services reported more in obligations than it received in supplemental appropriations. To cover the differences (gaps), DOD and the military services took several actions, including transferring funds and reducing or deferring planned spending for peacetime operations. In the case of the Army and Air Force, which each received supplemental appropriations that exceeded its reported obligations for military personnel, this included transferring $801 million and $113 million, respectively, to cover their GWOT operation and maintenance expenses. In some instances, these actions reduced DOD’s flexibility to cover potential gaps in fiscal year 2005. DOD did not explicitly take into account the funds provided through its annual appropriation that it had requested for GWOT to help cover the gaps.
If it had taken these funds into account, it could have reduced the Army’s GWOT gap, eliminated the GWOT gaps of the Air Force and Navy, and been able to defer fewer activities.

Within the military personnel accounts, as shown in table 3, the Navy and Marine Corps reported more obligations in support of GWOT than they received in supplemental appropriations. However, these reported gaps were a relatively small portion of the services’ annual military personnel appropriations. For example, the Navy’s reported gap of $40.4 million represents less than 1 percent of its annual military personnel appropriation. In fiscal year 2004, both the Army and Air Force received supplemental appropriations that exceeded their reported obligations for military personnel. The Army and Air Force used these funds to cover operation and maintenance expenses related to GWOT, as discussed below.

Within the operation and maintenance accounts, as shown in table 4, in fiscal year 2004 each of the military services reported more in GWOT obligations than it received in supplemental appropriations. The Army reported the largest gap, about $4.3 billion, while the Air Force and Navy reported gaps of $579 million and about $618 million, respectively. The Marine Corps reported the smallest gap, about $195 million.

To cover the military services’ gaps between reported fiscal year 2004 obligations and supplemental appropriations, the Office of the Under Secretary of Defense (Comptroller) and the military services used a number of authorities provided to them, including transferring funds and reducing or deferring planned spending for peacetime operations. While the gaps involved hundreds of millions or sometimes billions of dollars, in discussing the actions taken to cover them, some service representatives noted that the gaps represented a small percentage of their annual appropriations.
Within the services’ annual operation and maintenance accounts we found that the gaps varied by service, ranging from a low of 1.7 percent of the Air Force’s annual operation and maintenance appropriation to a high of 13.7 percent of the Army’s annual operation and maintenance appropriation. In the services’ annual military personnel accounts, all the gaps were less than 1 percent of their annual military personnel appropriations.

However, DOD did not explicitly take into account the funds provided through its annual appropriations that it intended for support of GWOT. As discussed earlier, since DOD’s accounting systems do not separately identify the portion of the department’s annual appropriations that were described as having been requested to support GWOT and there are no reporting requirements for DOD to identify to which appropriation accounts the funds were allocated, the military services have lost visibility over these funds and do not know the extent to which they are being used to support GWOT. Consequently, despite having asked for the increase, DOD is not explicitly counting these additional funds when considering funding for GWOT and instead took actions that affected its peacetime operations, which may create spending pressures in fiscal year 2005 and later.

Each of the military services projected a gap between reported obligations and supplemental appropriations at its midyear budget review. Service representatives told us these projected gaps were reduced over the course of fiscal year 2004 by reviewing their GWOT requirements and, in some instances, seeking to reduce or defer planned spending.
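The percentage figures above also imply the approximate size of each annual operation and maintenance appropriation. A back-of-envelope sketch; the implied appropriation amounts are our derivation from the quoted gaps and percentages, not figures stated in the report:

```python
# Back-of-envelope: the FY2004 O&M gaps, expressed as shares of the
# services' annual O&M appropriations, imply the rough size of those
# appropriations (gap / share). All dollar values in billions.
gaps_bn = {"Army": 4.3, "Air Force": 0.579}    # FY2004 O&M gaps
share_pct = {"Army": 13.7, "Air Force": 1.7}   # gap as % of annual O&M appropriation
for service in gaps_bn:
    implied = gaps_bn[service] / (share_pct[service] / 100.0)
    print(f"{service}: implied annual O&M appropriation of roughly ${implied:.0f} billion")
```

This yields appropriations on the order of $31 billion for the Army and $34 billion for the Air Force, illustrating why service representatives characterized the gaps as a small percentage of their annual appropriations.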
With respect to the GWOT gaps faced by the services in fiscal year 2004, we were told the following:

For fiscal year 2004, the Army’s reported obligations in its operation and maintenance account exceeded its supplemental appropriations by about $4.3 billion, substantially less than the $10.9 billion it had projected in the account at its midyear budget review. To cover the $4.3 billion, DOD and the Army took a number of actions, including using internal resources and passing the remaining amount on to the Army’s major commands to be absorbed by reducing or deferring planned peacetime spending to meet its GWOT needs. More specifically, to cover the Army’s gap, the Under Secretary of Defense (Comptroller) transferred about $3 billion from the working capital funds of the Army, Air Force, and Navy—including $1.3 billion from the Army, about $1.5 billion from the Air Force, and $200 million from the Navy. In addition, about $801 million was transferred from the Army’s military personnel account to help cover the gap in the Army’s operation and maintenance account, while about $500 million was transferred from other DOD-wide accounts. The major Army commands absorbed the remainder. For example, to cover its portion of the gap, the Army Materiel Command reprioritized or deferred about $184 million in depot maintenance until fiscal year 2005 for such programs as the Patriot and Hellfire missile systems. It also reduced or deferred the number of available training hours for some of its nondeployed units. However, Army Materiel Command representatives told us that in some instances, the training hours they deferred to help cover the fiscal year 2004 gap were deferred until fiscal year 2006.

The Air Force’s gap in its operation and maintenance account of about $579 million was substantially less than the $1.5 billion it had projected in the two accounts at its midyear budget review.
To cover the $579 million gap, the Air Force took a number of actions, including transferring $113 million in funds available in its overall military personnel appropriation account, decreasing peacetime flying hours, reducing depot maintenance, and deferring facility sustainment restoration and modernization projects until fiscal year 2005. The Air Force’s major commands also absorbed a portion of the gap. For example, the Air Combat Command absorbed its share of the GWOT gap, about $92 million, by reducing or deferring its fiscal year 2004 peacetime spending. Approximately $46 million, or half of the Air Combat Command’s $92 million share of the gap, was covered by reducing its peacetime flying hour program by about 6,800 hours. While reducing its peacetime flying hours helped the Air Combat Command cover its portion of the gap, Air Combat Command representatives told us the reduced training opportunities created a training backlog, which could affect pilot readiness for future combat missions.

The Navy’s combined gap for fiscal year 2004 of about $659 million in its military personnel and operation and maintenance accounts was less than its midyear projection of $931 million. To cover the $659 million gap, the Navy canceled some peacetime spending, including various nonreadiness operation and maintenance spending and various infrastructure projects. Of the Navy’s major commands, the Atlantic Fleet and Pacific Fleet absorbed the largest share of the gap for fiscal year 2004. For example, the Atlantic Fleet absorbed about $110 million by reducing air operations and ship depot maintenance activities. Navy budget representatives noted that the gap represented about 1 percent of the total baseline funding available for aircraft operations and ship depot maintenance for the Navy in that fiscal year. In addition, the Navy canceled or deferred procurement actions for the MH-60R Seahawk helicopter, V-22 Osprey, F/A-18 Hornet, and Joint Tactical Radio System.
The Marine Corps’ combined gap in its military personnel and operation and maintenance appropriations accounts of about $225 million for GWOT in fiscal year 2004 was also less than the $446 million projected at its midyear budget review. To cover the $225 million gap, the Marine Corps reduced or deferred spending in noncritical areas, such as facility improvements. The Navy provided the Marine Corps with funds from its base operating support and facilities sustainment restoration and modernization appropriations accounts and with $121 million that was transferred to the Navy from the U.S. Transportation Command’s Working Capital Fund. According to Marine Corps representatives, a portion of the gap was also absorbed by the Marine Corps’ annual military personnel and operation and maintenance appropriations accounts.

The Navy provided us a detailed discussion of the process used in addressing gaps. A Navy budget representative said that the Navy analyzed its entire $116.8 billion in baseline funding (which includes both the original $114 billion baseline and the added $2.8 billion for Program Budget Decision 736 initiatives) as potential financing sources for its GWOT needs. According to the Navy representative, the Navy’s internal analysis first looked at funding flexibility in baseline programs resulting from changes in current year execution. For example, certain baseline program requirements change from year to year as a result of development issues, schedule and implementation delays, manufacturing problems, changes in requirements or inventory levels, and labor disputes. The accumulated value of those changes in a given execution year, such as fiscal year 2004, may have made any financial resources excess to fiscal year 2004 requirements available to fund GWOT needs.
Although the funds would not have been specifically identified as such, Navy representatives stated that previously baselined Program Budget Decision 736 requirements could have been included, by implication, as part of those deliberations. For example, by the end of fiscal year 2004, based on delayed execution, about $136 million was reallocated from base infrastructure support, maintenance, and repair to fund Operation Iraqi Freedom costs. If insufficient funding sources were identified as part of an execution analysis, then it would be necessary to make affirmative decisions about reducing baseline programs to fund the balance of the GWOT needs. Those reductions, for the most part, had subsequent programmatic and financial impacts. Those changes required to support the increased GWOT needs were monitored and approved by the Office of the Under Secretary of Defense (Comptroller) staff during their annual budget and execution reviews. Some of the changes were recoverable (such as specific procurement and depot maintenance items considered deferrable and that could be funded with a subsequent year's money) and some changes were nonrecoverable (items considered nondeferrable current expenses, where the performance period has lapsed, but for which a subsequent year's funding is now available to fully meet that year’s requirements). For example, of the Navy and Marine Corps’ approximately $1.6 billion in absorbed costs in all appropriation accounts for the Department of the Navy ($1.4 billion was for Navy items, $200 million was for Marine Corps items), nearly 40 percent of the fiscal year 2004 requirements were considered recoverable with subsequent year’s funding. This included $200 million for drawing down the Navy Working Capital Fund, which was included in the Navy’s fiscal year 2005 supplemental appropriations request.

As previously discussed, DOD used the military services’ working capital funds as a source of cash to provide funds for GWOT expenditures in fiscal year 2004.
DOD’s working capital funds finance the operations of two fundamentally different types of support organizations: stock fund activities, which provide spare parts and other items to military units and other customers, and industrial activities, which provide depot maintenance, research and development, and other services, such as those provided by the Defense Finance and Accounting Service, Defense Information Systems Agency, Defense Commissary Agency, and U.S. Transportation Command.

In fiscal year 2004, DOD transferred about $3 billion from the military services’ working capital funds to help cover the Army’s gap between reported obligations and supplemental appropriations. While such transfers from the services’ working capital funds helped DOD cover its fiscal year 2004 gap, the transfers have left few working capital funds available to be used in fiscal year 2005. For example, to help cover the Army’s operation and maintenance gap, about $980 million was transferred from the U.S. Transportation Command’s Transportation Working Capital Fund during fiscal year 2004. This transfer was made possible due to a surplus of transportation charges collected from the military services by the U.S. Transportation Command during the year. However, a U.S. Transportation Command representative told us the transfers have left the fund’s balance below the minimum goal of $517 million. Specifically, with the transfer of almost $1 billion in fiscal year 2004 to help cover the Army’s operation and maintenance gap, as of July 2005, there was only $168 million remaining in the fund, well below the minimum goal for the year. Further, the representative stated that the projected fund balance for the end of fiscal year 2005 is about $231 million, still below the minimum goal.
In determining how to cover the gaps between the services’ supplemental appropriations and reported GWOT obligations for military personnel and operation and maintenance expenses, DOD did not explicitly take into account the almost $7.9 billion in funds the Army, Air Force, and Navy received in their annual appropriations through Program Budget Decision 736 to help fund GWOT. This includes $1.3 billion received by the Army, $3.5 billion received by the Air Force, and $3 billion received by the Navy. If counted in fiscal year 2004 and applied to the services’ military personnel and operation and maintenance accounts, these amounts could have reduced the Army’s need to transfer funds from other activities and eliminated the GWOT gaps for the Air Force and the Navy, as shown in table 5. However, the services acknowledge that they have lost visibility over the Program Budget Decision 736 funds after fiscal year 2003 and do not know whether any of the funds were used in support of GWOT.

We discussed our analysis with DOD representatives at each of the services’ budget offices, who disagreed with our depiction of Program Budget Decision 736. These representatives believed that our analysis should take into account the fact that the funds provided through Program Budget Decision 736 were included in DOD’s baseline budget and therefore were already taken into account when considering funds available for GWOT. Service budget representatives made the following observations regarding the Program Budget Decision 736 funds:

Once merged into those baseline budgets, full justification for funding is provided in the annual President’s budget request. For example, increased funding for additional security personnel and physical security equipment was merged with existing program lines and not subsequently separately identified as to how it was initially funded or sustained over the years.
Once the Program Budget Decision 736 funds were in the baseline budget, they were not in support of specific contingency operations, for which the Department of Defense Financial Management Regulation, Volume 12, Chapter 23, Contingency Operations, requires separate documentation and execution tracking, and no such requirement exists for “baselined” funds, other than the annual justification exhibits. That is, Chapter 23 only requires reporting incremental costs (costs not already in the baseline), and not total costs.

Subsequent to Program Budget Decision 736, additional requirements were placed on the services’ fiscal year 2004-2009 spending program without accompanying funds. To meet these requirements, service budget representatives said that they looked in part to the funds provided in Program Budget Decision 736.

We recognize that DOD’s annual budget submissions include justification for all the department’s activities, including those funded through Program Budget Decision 736. However, the funds provided through Program Budget Decision 736 were identified as being in support of GWOT. While service budget representatives noted that the documentation and tracking requirements contained in the Department of Defense Financial Management Regulation, Volume 12, Chapter 23, Contingency Operations, do not apply to the funds provided through Program Budget Decision 736, we believe that DOD should have been tracking these funds in light of their connection to GWOT. While the services’ budget representatives told us that they took the funds provided through Program Budget Decision 736 into account in addressing GWOT funding needs, we note that once these funds were merged into the services’ baseline budgets visibility was lost, so there is no assurance as to how the funds were taken into account or used.
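The table 5 analysis reduces to simple arithmetic. A minimal sketch, using the rounded gap and Program Budget Decision 736 amounts quoted in the text (the Army and Air Force gap figures are operation and maintenance only, since both had military personnel surpluses; the Navy figure is its combined gap):

```python
# Illustrative check of the table 5 analysis (all values in $ billions,
# rounded from the figures quoted in the text): what remains of each
# service's FY2004 GWOT gap if its Program Budget Decision 736 funds
# had been counted against it.
pbd736 = {"Army": 1.3, "Air Force": 3.5, "Navy": 3.0}
gaps = {"Army": 4.3, "Air Force": 0.579, "Navy": 0.659}
for service, gap in gaps.items():
    remaining = gap - pbd736[service]
    if remaining > 0:
        print(f"{service}: gap reduced to about ${remaining:.1f} billion")
    else:
        print(f"{service}: gap eliminated")
```

Consistent with the text, counting these funds would have reduced the Army’s gap (to roughly $3 billion) and eliminated the Air Force’s and Navy’s gaps outright.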
Our analysis of the military services’ reported obligations for the first 8 months of fiscal year 2005 and the military services’ forecasts as of June 2005 of full fiscal year 2005 costs suggest the services’ military personnel and operation and maintenance GWOT obligations could exceed available supplemental appropriations for the war in some accounts. Our projections of reported GWOT obligations through May 2005 suggest the services should have sufficient supplemental appropriations for military personnel expenses in fiscal year 2005 but that there could be gaps for operation and maintenance expenses for the Army and the Marine Corps. The services’ more detailed forecasts suggest a gap for military personnel expenses for the Air Force of about $500 million, and gaps for operation and maintenance expenses for the Army and Air Force of about $2.7 billion and about $1 billion, respectively. The Marine Corps expects its supplemental appropriations will be sufficient to cover its GWOT costs. To cover any gaps and meet its GWOT needs, DOD and the services plan to take a variety of actions, including reprogramming funds from annual appropriations and reducing or deferring planned spending for peacetime operations.

Our assessment of reported obligations in fiscal year 2005 through May 2005 suggests that the military services should have sufficient supplemental appropriations for military personnel expenses in fiscal year 2005. As figure 1 shows, with 8 months, or about 67 percent, of the fiscal year gone, the Marine Corps has obligated 46 percent of its available supplemental appropriations; the Army 54 percent; and the Air Force and Navy 58 percent each.

Our assessment of reported obligations within the military services’ operation and maintenance accounts through May 2005 suggests that the supplemental appropriations provided to the services for GWOT should be sufficient for the Air Force and Navy but not for the Army and Marine Corps.
As shown in figure 2, the percentage of available supplemental appropriations obligated in the services’ operation and maintenance accounts as of May 2005 ranged from 49 percent for the Navy and 52 percent for the Air Force to 71 percent for the Army and the Marine Corps. We recognize that funds are not obligated equally each month throughout the fiscal year. However, we believe that the further into the fiscal year, the closer obligations should be to 100 percent of appropriations if all appropriated funds are likely to be obligated. Consequently, given these obligation rates, we believe that if the Army and Marine Corps continue to obligate funds at the current rate or higher, their reported obligations within the operation and maintenance accounts could exceed available supplemental appropriations in fiscal year 2005, requiring them to use other authorities provided to them to cover the difference. However, as discussed below, the Air Force believes it will have an operation and maintenance gap, while the Marine Corps believes it will have sufficient funds for operation and maintenance.

Each of the military services completed a midyear budget review for the Office of the Under Secretary of Defense (Comptroller), including a forecast of its full fiscal year 2005 GWOT needs. The Army concluded that it would not have sufficient supplemental appropriations to cover its projected GWOT operation and maintenance obligations, while the Air Force indicated its combined military personnel and operation and maintenance obligations would exceed available supplemental appropriations. With respect to the Army’s and Air Force’s midyear budget review projections:

The Army forecast a GWOT gap of about $2.7 billion in its operation and maintenance account, of which a large component—about $1 billion—is attributed to higher fuel costs due to, among other things, the increase in June 2005 of DOD’s composite fuel rate from $56.28 per barrel to $73.08.
Other components of the forecasted gap include support of the Army’s modular force initiative; higher spending in the second half of fiscal year 2005 as compared to the first half, resulting from deferred spending early in the fiscal year; and higher spending on recruiting and retention efforts, primarily for the Army Reserve. According to the Army, the modular force initiative and its reconstitution and reset efforts are being treated as GWOT costs in fiscal year 2005.

The Air Force forecast a GWOT gap of about $500 million in its military personnel account and about $1 billion in its operation and maintenance account, for a total gap of about $1.5 billion. Air Force representatives attributed the gap in its military personnel account primarily to having higher-than-anticipated end-strength levels, and stated that the $1 billion gap in its operation and maintenance account is to replenish the Transportation Working Capital Fund, which was drawn down last year to help cover the Army’s fiscal year 2004 GWOT gap. Regarding the projected military personnel gap, Air Force representatives stated that funds were subsequently transferred to pay for prior obligations at higher-than-anticipated end-strength levels. Since then, the Air Force has corrected the end-strength imbalance and expects to be within end strength for GWOT during the remainder of the fiscal year. As a result of these actions, Air Force representatives no longer project a military personnel gap for GWOT in fiscal year 2005.

The Navy projected a small gap of about $36 million for GWOT at the time of its midyear budget review, which it has since covered with cost savings from shifting the bulk of its transportation of equipment and supplies from air to sea. The Marine Corps indicated that its supplemental appropriations should be sufficient to cover reported GWOT obligations for fiscal year 2005.
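The obligation-pace reasoning applied to the figure 2 percentages amounts to a straight-line projection. This is only a rough screen, not the services' own forecasting method, and, as noted above, funds are not obligated evenly each month:

```python
# Straight-line screen of FY2005 O&M obligation pace (figure 2 values):
# if a service keeps obligating at its average monthly rate through
# May 2005 (8 of 12 months), what share of its supplemental
# appropriations would be obligated by year end?
months_elapsed, months_total = 8, 12
obligated_pct = {"Navy": 49, "Air Force": 52, "Army": 71, "Marine Corps": 71}
for service, pct in obligated_pct.items():
    projected = pct * months_total / months_elapsed
    flag = "potential gap" if projected > 100 else "appears sufficient"
    print(f"{service}: projected year-end obligations of about "
          f"{projected:.0f} percent of supplemental appropriations ({flag})")
```

Consistent with the assessment above, the Army and Marine Corps project above 100 percent while the Navy and Air Force do not. The services' own midyear forecasts, which reflect planned second-half spending rather than average pace, can differ; for example, the Air Force still forecast an operation and maintenance gap.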
In considering the services’ midyear budget reviews, our analysis of the Navy and Marine Corps GWOT obligations indicates substantial underexecution in the Navy’s operation and maintenance account and the Marine Corps’s military personnel account. In response, the Navy stated that it expects its rate of obligating GWOT funds to increase toward the end of fiscal year 2005 due to, among other things, providing additional support in theater and on the ground in Iraq as part of Joint Sourcing. According to a Navy representative, the Navy had about 5,000 personnel stationed on the ground in Kuwait, Iraq, and Afghanistan at the end of fiscal year 2004. By the end of fiscal year 2005, the Navy plans to have about 8,500 personnel in theater, with the additional personnel having begun to deploy in May 2005.

The Marine Corps stated it expects to obligate an additional $220 million in military personnel funds due to the new death gratuity benefit, while another $265 million in military personnel funds will be used to replenish the Marine Corps’s annual appropriation for funds reprogrammed earlier in the fiscal year to buy additional body armor and other equipment to counter the use of improvised explosive devices in Iraq.

To cover the forecasted GWOT needs for fiscal year 2005, DOD, the Army, and the Air Force have identified a number of steps they plan to take. These include exercising a number of authorities provided to them, such as transferring and reprogramming funds from annual appropriations and reducing or deferring planned spending for peacetime operations. The Army, the service with the largest forecasted gap in its operation and maintenance account, plans to take a variety of actions to meet its fiscal year 2005 GWOT funding needs. Some actions include taking steps to transfer or reprogram funds.
For example, DOD reprogrammed more than $800 million in funds in May 2005 from the military personnel accounts of the Air Force, Navy, Marine Corps, and Army National Guard, and $250 million from the Army’s Working Capital Fund, to the Army to meet urgent GWOT needs. Other actions the Army plans to take to help fund GWOT in fiscal year 2005 involve reducing or deferring current costs. For example, the Army reports that it has been able to reduce its fiscal year 2005 Logistics Civil Augmentation Program (LOGCAP) contract costs by about $890 million by reviewing and reducing current LOGCAP requirements. In addition, the Army plans to use any surplus funds in its working capital fund to help cover any fiscal year 2005 GWOT gaps. However, due to the transfers from the services’ working capital funds to cover the fiscal year 2004 gaps, as discussed above, few assets remain elsewhere to cover the Army’s fiscal year 2005 GWOT gap. Should the Army’s GWOT gap be larger than forecasted, the Army may have to absorb the difference in its annual appropriation.

The Air Force also plans to take a variety of actions to address the gap between its supplemental appropriations and reported operation and maintenance obligations for GWOT. These include decreasing peacetime flying hours by $700 million, reducing or deferring depot maintenance activities by $400 million, and freezing activities involving facility sustainment, restoration, and modernization projects. Other areas that could be targeted for cost reductions or deferments include noncritical travel and other supplies and equipment. To meet its GWOT needs in fiscal year 2005, DOD is again not explicitly considering the Program Budget Decision 736 funds to support GWOT that were provided to the military services through their annual appropriations.
However, as discussed earlier, unlike in fiscal year 2004, in fiscal year 2005 some of the funds provided in Program Budget Decision 736 are being used to fund Operation Noble Eagle, which had previously been funded as part of GWOT through supplemental appropriations. In fiscal year 2004 DOD had included $2.2 billion in its budget request for Operation Noble Eagle. Adjusting for Operation Noble Eagle at the fiscal year 2004 funding level would result in more than $5.4 billion in funds included in Program Budget Decision 736 in support of GWOT for the military services remaining available in fiscal year 2005. If counted in fiscal year 2005, the amounts potentially could reduce the need for reprogrammings from other activities and could reduce the Army’s and eliminate the Air Force’s GWOT gaps. Instead, as in fiscal year 2004, the Office of the Under Secretary of Defense (Comptroller) and the military services will again meet those needs by taking actions that may affect DOD’s peacetime operations, such as reducing or deferring planned spending. In some instances, these funding reductions and deferments could add to future spending pressures in fiscal year 2006 or potentially in later years and run the risk of producing a large “bow wave” of requirements. This can have both short-term and long-term impacts. In the short term, deferring spending can lead to higher costs than expected later in the current fiscal year, which may need to be covered by additional transfers and reprogrammings. In the long term, continued deferments can accumulate into a backlog of requirements that ultimately costs more to address.

The extent to which one considers that GWOT funding has been sufficient depends on whether one counts both funding provided through supplemental appropriations and funding included in DOD’s annual appropriation, which DOD requested for GWOT.
The administration increased DOD’s annual appropriation request by more than $10 billion annually beginning in fiscal year 2003 to support GWOT, with the military services receiving about $7.9 billion of that amount in fiscal year 2004 and about $7.6 billion in fiscal year 2005. The military services absorbed the increase into their annual appropriations and allocated it based on their judgment of where the funds were most needed. Since DOD’s accounting systems do not separately identify these additional appropriations and there are no reporting requirements for DOD to identify to which appropriation accounts the funds were allocated, the military services have lost visibility over these funds and do not know the extent to which they are being used to support GWOT. Consequently, despite having asked for the increase, DOD is not explicitly counting the more than $10 billion when considering funding for GWOT.

In fiscal year 2004, the military services reported obligations in support of GWOT that were above the supplemental funds appropriated by the Congress. In response, DOD used authorities granted to it, including transferring funds and reducing or deferring planned spending for peacetime operations, to cover the gaps. However, if the additional funds that were provided in DOD’s annual appropriation to help fund the war are included in the analysis, those funds potentially could have reduced the Army’s gap and eliminated the gap for the Air Force and Navy in fiscal year 2004. In fiscal year 2005, the Army and the Air Force are again projecting obligations for the war above their supplemental appropriations, and DOD is taking steps to cover the gaps. As was the case in fiscal year 2004, the additional funds that were included in DOD’s annual appropriation to help fund the war potentially could reduce or eliminate the projected gaps for the Army and Air Force.
With military operations in Iraq and Afghanistan ongoing, and the likely need for DOD to request additional funds to support GWOT, it is important that DOD fulfill its role as a steward of taxpayer funds by taking steps to account for all the funds it receives for the war. To improve the visibility and accountability of DOD’s use of funds for GWOT, we recommend that the Secretary of Defense, in future requests for supplemental appropriations, adjust such requests to reflect the additional funds DOD requested and received in its annual appropriations to support GWOT and provide the Congress with an explanation of these adjustments. We further recommend that in addressing any future GWOT funding needs the Secretary consider the additional GWOT funds provided through the department’s annual appropriation when assessing how to cover expenses for the war and document its decisions. Because DOD did not concur with our recommendation to adjust its future supplemental appropriations requests to reflect the additional funds the department requested and received in its annual appropriations to support GWOT and explain these adjustments to the Congress, we are concerned that the Congress will not receive the information that we believe it needs to properly assess DOD’s requests for supplemental appropriations to support the war. Further, because the amount of funds DOD is receiving to support GWOT through its annual appropriations is substantial—more than $10 billion annually—the Congress should consider directing DOD, when it submits future supplemental appropriations requests, to provide an explanation of how such requests reflect the funds DOD requested and already received in its annual appropriations to support GWOT.

DOD provided written comments on a draft of this report. Its comments are discussed below and are reprinted in appendix II. DOD did not concur with our recommendations.
DOD further commented that the report confuses a Program Budget Decision, which is an internal document, with the President’s budget, which is the official explanation of DOD’s budget request, and that funds are not appropriated in accordance with a Program Budget Decision. In addition, DOD commented that the report’s focus on the Program Budget Decision results in the inaccurate conclusion that if DOD had considered these funds it could have reduced the Army’s GWOT gap and eliminated the GWOT gaps of the Air Force and Navy. In that regard, DOD stated that the only resources available to the department are those appropriated by the Congress and that these funds were considered when determining the needs and expenses of the war.

We recognize that a Program Budget Decision is an internal document, that the President’s budget is the official explanation of DOD’s budget request, and that the funds appropriated are determined by the Congress—not by either a Program Budget Decision or the President’s budget. In our report, we refer to Program Budget Decision 736 and the President’s budget not to establish how much money the Congress appropriated to support GWOT, but to establish how much money DOD intended for GWOT. As stated in our report, according to a representative from the Office of the Under Secretary of Defense (Comptroller), in December 2001 the President directed that his annual budget submission for DOD be increased by about $10 billion annually to support GWOT. Consequently, Program Budget Decision 736, entitled Continuing the War on Terrorism and dated January 31, 2002, was approved by the Under Secretary of Defense (Comptroller). Program Budget Decision 736 provided for increasing DOD’s annual budget request in the amount of more than $10 billion per year plus inflation in fiscal years 2003 through 2007 to enhance the department’s efforts to respond to, or protect against, acts or threatened acts of terrorism against the United States.
We therefore believe that since the funds referenced in Program Budget Decision 736 were specifically identified as being requested in support of GWOT, DOD should maintain visibility over how these funds were used to support GWOT. We believe that if DOD asks for a significant increase in appropriations and explains that the increase is needed to support GWOT, DOD should be able to show that it actually used those funds for GWOT. DOD did not concur with our recommendations that the Secretary of Defense (1) adjust future supplemental appropriations requests to reflect the additional funds DOD received in its annual appropriations to support GWOT and explain these adjustments to the Congress and (2) also consider the additional GWOT funds provided through DOD’s annual appropriations in addressing any future GWOT funding needs. In commenting on our first recommendation, DOD stated that the department’s supplemental appropriations request accounts for all relevant adjustments to the annual appropriation bill. DOD also commented that it builds and submits supplemental appropriations requests based on the incremental cost of the operation, which it described as those additional costs to the DOD component conducting the operation that are not covered in its existing budget and would not have been incurred had it not been supporting the contingency. It is not apparent, however, that DOD’s request for supplemental appropriations for fiscal year 2004 in fact reflected amounts already appropriated. The President’s fiscal year 2005 supplemental appropriations request did reflect amounts already enacted, but only because the Office of Management and Budget, not DOD, made the adjustments. As we discuss in this report, DOD included a $10 billion increase in its fiscal year 2004 annual appropriations in order to support GWOT. In its Program Budget Decision 736, DOD stated that $1.2 billion of that amount would be used for combat air patrols over U.S.
cities, which is part of Operation Noble Eagle. At the same time, in its fiscal year 2004 supplemental appropriations request for GWOT, DOD included funding for Operation Noble Eagle, but without explaining why it needed amounts in addition to those that the Congress already provided. In addition, although DOD stated that the department’s supplemental appropriations request accounts for all relevant adjustments to the annual appropriation bill, as stated in our report, in a November 2004 memorandum the Office of the Under Secretary of Defense (Comptroller) sought to adjust DOD’s supplemental appropriations request for fiscal year 2005 to reflect funds already provided. In that memorandum, the Office of the Under Secretary of Defense (Comptroller) stated that funding in fiscal year 2005 for GWOT missions previously added to the baseline budget (e.g., Program Budget Decision 736, Continuing the War on Terrorism) should be explicitly identified as a reduction to funding requests in those areas, as appropriate. The memorandum further requested that the components’ submissions show the total requirement and note the level of funding already in the baseline for this purpose. The memorandum directed that the services’ supplemental appropriations requests net out the available funding and address the incremental costs above the baseline funding needed to support specific forces and capabilities required to execute Operation Iraqi Freedom, Operation Enduring Freedom, and portions (to be determined) of Operation Noble Eagle. However, as stated in our report, none of the military services provided the information requested in the November 2004 memorandum; instead, the military services requested supplemental appropriations for Operation Noble Eagle.
Nevertheless, in preparing the fiscal year 2005 supplemental appropriations request, the Office of Management and Budget did not include Operation Noble Eagle in the President’s budget request because funds had already been included in DOD’s annual appropriation, pursuant to DOD’s request, as described in Program Budget Decision 736. We believe that our recommendation has merit and have retained it. In addition, since DOD does not agree with the recommendation and the amount of funds at issue is substantial—more than $10 billion annually—we have added a matter for congressional consideration. Specifically, the Congress should direct DOD, when it submits future supplemental appropriations requests, to provide an explanation of how such requests reflect the additional funds that were addressed in Program Budget Decision 736 and which DOD requested and received in its annual appropriations to support GWOT.

With respect to our second recommendation, DOD commented that it considers all funds provided through the department’s annual appropriation when addressing how to cover expenses for the war. We recognize that DOD reviews all funds when determining how to cover its GWOT needs. However, DOD, as it explained in Program Budget Decision 736, intended the increased annual appropriations to support GWOT, but then lost visibility over the funds requested. There is therefore no documentation of how the department took the funds that it requested into account or whether it was applying the entire amount to cover its GWOT needs. We believe that because DOD stated that the additional annual funds were needed to support GWOT, and because DOD continues to include this funding in its request for annual appropriations, DOD should, to fulfill its role as a steward of taxpayer funds, explicitly maintain visibility over how these funds are used to support GWOT and consider the entire amount to be available for GWOT.
We therefore continue to believe our recommendation has merit and have retained it, including expanding it to recommend that DOD also document its decisions.

We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); and the Director, Office of Management and Budget. Copies of this report will also be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Steve Sternlieb, Assistant Director; Richard K. Geiger; Wesley A. Johnson; James Nelson; and David Mayfield.

To identify funding for the Global War on Terrorism (GWOT), we reviewed applicable annual and supplemental Department of Defense (DOD) appropriations in fiscal years 2004 and 2005. We also reviewed DOD reports on the transfer of funds from the Iraqi Freedom Fund to support GWOT activities, and DOD reports on the transfer or reprogramming of funds among various appropriation accounts or budget activities to support GWOT. In addition, we reviewed material related to the decision to add funds to DOD’s annual appropriation to support GWOT, specifically Program Budget Decision 736, entitled Continuing the War on Terrorism, dated January 31, 2002, and approved by the Under Secretary of Defense (Comptroller). To assess the extent of differences between supplemental appropriations and reported obligations for GWOT, we compared supplemental appropriations provided to the military services to reported obligations for fiscal year 2004 and to reported obligations through May 2005 for fiscal year 2005.
Specifically, we identified applicable supplemental appropriations in fiscal years 2004 and 2005 and compared them to the reported amounts obligated by each service in DOD’s Supplemental and Cost of War Execution Reports. We limited our review to the obligation of funds appropriated for military personnel and operation and maintenance for the Army, Air Force, Navy, and Marine Corps, for both active and reserve forces, because they represented the majority of the funds obligated in fiscal years 2004 and 2005, about 90 percent in each year. We excluded classified programs from our review, because obligations for those programs are not reported in DOD’s Supplemental and Cost of War Execution Reports. We did not review the obligation of funds for investment, which are used for procurement; military construction; and research, development, test, and evaluation. In addition, for fiscal year 2005, we reviewed the latest available obligation data and held discussions with the military services on the results of their midyear budget reviews. We compared the services’ reported military personnel and operation and maintenance obligations through May 2005, the latest available obligation data at the time of our review, to the supplemental appropriations provided to calculate the proportion of funds obligated through May. We then compared those proportions to the proportion of the fiscal year that has elapsed through May—which represents 67 percent of the fiscal year—to assess whether, based on obligations through May, funding is likely to be adequate. We recognize that funds are not obligated equally each month throughout the fiscal year. However, we believe that the further into the fiscal year, the closer obligations should be to 100 percent of appropriations if all appropriated funds are likely to be obligated. GWOT obligations provided in this report are DOD’s claimed obligations as reported in the Supplemental and Cost of War Execution Reports.
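The proportion comparison described above can be sketched in a few lines of code. The dollar figures below are illustrative assumptions for a single account, not the services’ actual data, and the function name is our own.

```python
from datetime import date

def elapsed_fiscal_year_fraction(as_of: date) -> float:
    """Fraction of the federal fiscal year (Oct. 1 through Sep. 30) elapsed as of a date."""
    fy_start = date(as_of.year if as_of.month >= 10 else as_of.year - 1, 10, 1)
    fy_end = date(fy_start.year + 1, 9, 30)
    return (as_of - fy_start).days / (fy_end - fy_start).days

# Hypothetical figures, in billions of dollars, for one service's account.
appropriated = 10.0
obligated_through_may = 5.5

elapsed = elapsed_fiscal_year_fraction(date(2005, 5, 31))  # roughly two-thirds of the year
rate = obligated_through_may / appropriated  # 0.55

# An obligation rate well below the elapsed fraction of the fiscal year
# suggests underexecution of the appropriated funds to date.
print(f"elapsed: {elapsed:.0%}, obligated: {rate:.0%}")
```

As the report notes, monthly obligations are uneven, so a rate below the elapsed fraction is only an indicator, not proof, that funding will be adequate.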
In related work, we have reported these data to be of questionable reliability. For example, we found financial management systems with acknowledged weaknesses, a lack of systematic processes to ensure accurate data entry, failure to use actual data when it was available, and improperly categorized costs. Therefore, we are unable to ensure that DOD’s reported obligations for GWOT are complete, reliable, and accurate. Consequently, the gaps we identify between supplemental appropriations and DOD’s reported obligations may not reliably reflect true differences between supplemental appropriations and obligations and therefore should be considered approximations. Despite the uncertainty about the obligation data, we are reporting the information because it is the only data available on overall GWOT costs and the only way to approach an estimate of the costs of the war. Also, despite the uncertainty surrounding the true dollar figure for obligations, these data are used to advise the Congress on the cost of the war. As such, obligation data provided in this report reflect DOD reported obligations, however unreliable those reports may be.

To determine actions taken by DOD and the services to cover any identified gaps between reported obligations and supplemental appropriations for GWOT, we held discussions with DOD representatives from the Office of the Under Secretary of Defense (Comptroller) and the Army, Air Force, Navy, and Marine Corps. At the major command level, we discussed with service representatives any actions taken to cover gaps and the impacts of actions taken to cover those gaps on their budgeted peacetime operations. We interviewed DOD representatives regarding GWOT obligations and funding for fiscal years 2004 and 2005 in the following locations:

Office of the Under Secretary of Defense (Comptroller), Washington, D.C.
Department of the Army, Headquarters, Washington, D.C.
Army Forces Command and Headquarters, Third Army, Fort McPherson, Georgia.
Army Installation Management Agency, Arlington, Virginia.
Army Materiel Command, Fort Belvoir, Virginia.
Army Pacific Command, Fort Shafter, Hawaii.
Department of the Air Force, Headquarters, Washington, D.C.
Air Force Air Combat Command, Langley Air Force Base, Virginia.
Air Force Air Mobility Command, and Headquarters, U.S. Transportation Command, Scott Air Force Base, Illinois.
Department of the Navy, Headquarters, Washington, D.C.
Navy Atlantic Fleet Command, Norfolk Naval Base, Virginia.
Navy Pacific Fleet Command, Pearl Harbor, Hawaii.
Marine Corps, Headquarters, Washington, D.C.
Marine Corps Forces, Pacific, Camp Smith, Hawaii.

We performed our work from November 2004 through August 2005 in accordance with generally accepted government auditing standards.
To assist the Congress in its oversight role, GAO is undertaking a series of reviews on the costs of operations in support of the Global War on Terrorism (GWOT). In related work, GAO is raising concerns about the reliability of the Department of Defense’s (DOD) reported cost data and therefore is unable to ensure that DOD’s reported obligations for GWOT are complete, reliable, and accurate. In this report, GAO (1) identified funding for GWOT in fiscal years 2004 and 2005, (2) compared supplemental appropriations for GWOT in fiscal year 2004 to the military services’ reported obligations, and (3) compared supplemental appropriations for GWOT in fiscal year 2005 to the military services’ projected obligations.

In fiscal years 2004 and 2005, DOD received funding for GWOT through both funds included in its annual appropriation and supplemental appropriations. The military services received about $52.4 billion and $62.1 billion, respectively, in supplemental appropriations for GWOT-related (1) military personnel and (2) operation and maintenance expenses. The Army, Air Force, and Navy also received in their annual appropriations a combined $7.9 billion in fiscal year 2004 and a combined $7.6 billion in fiscal year 2005, which DOD described as being intended to support GWOT. The military services absorbed the increase into their annual appropriations and allocated it based on their judgment of where the funds were most needed. DOD’s accounting systems, however, do not separately identify these additional appropriations, and there are no reporting requirements for DOD to identify to which appropriation accounts the funds were allocated; consequently, the military services have lost visibility over these funds and do not know the extent to which they are being used to support GWOT.
Despite having asked for the increase to support GWOT, DOD is not explicitly counting these additional funds when considering the amount of funding available to cover GWOT expenses.

For fiscal year 2004, regarding supplemental appropriations for GWOT military personnel expenses, the Navy and Marine Corps reported more in obligations than they received in supplemental appropriations, while the Army and Air Force received more in supplemental appropriations than their reported obligations. Each of the services reported more in GWOT operation and maintenance obligations than it received in supplemental appropriations. To cover the differences (gaps), DOD and the services exercised a number of authorities provided them, including transferring funds and reducing or deferring planned spending for peacetime operations. However, in considering the amount of funding available to cover the gaps, DOD did not explicitly take into account the funds provided through its annual appropriation that, as previously noted, it described as for the support of GWOT. If DOD had considered these funds, it could have reduced the Army’s GWOT gap and eliminated the GWOT gaps of the Air Force and Navy.

For fiscal year 2005, the services’ forecasts of GWOT obligations for the full fiscal year as of June 2005 suggest a potential gap of $500 million for military personnel for the Air Force and potential gaps of about $2.7 billion and about $1 billion, respectively, for operation and maintenance for the Army and Air Force. To cover expenses, DOD and the services again plan to take a variety of actions, including reprogramming funds and reducing or deferring planned spending. However, DOD is again not explicitly considering the funds provided through its annual appropriation, which it described as for the support of GWOT.
If counted in fiscal year 2005, the amounts potentially could reduce the Army's and eliminate the Air Force's GWOT gaps, as well as eliminate the need to reprogram funds and to reduce or defer planned spending.
The Corps maintains navigation on over 25,000 miles of inland and intracoastal waterways and channels and at more than 900 ports and harbors across the United States. The accumulation of sediment in these waterways—known as shoaling—reduces navigable depth and width and, without dredging, may result in restrictions on vessels passing through the waterways. These restrictions often apply to the vessels’ draft—the distance between the surface of the water and the bottom of the hull—which determines, in part, the minimum depth of water in which a vessel can safely navigate. Draft restrictions may result in delays and added costs, as ships may need to off-load some of their cargo to reduce their draft, wait until high tide or until waterways are dredged, or sail into another port. These restrictions are imposed at times on various waterways throughout the United States due to shoaled conditions, which could disrupt the shipment or delivery of millions of dollars’ worth of cargo, according to Corps documents and officials. Maintenance dredging needs across these waterways vary significantly, with the majority of dredging occurring along the Atlantic and Gulf Coasts, according to Corps officials.

A variety of dredge vessels and other supporting equipment are used for dredging, with variation in their sizes and capabilities, and the conditions under which they best perform. For example, mechanical dredges excavate and remove material by applying mechanical force to the material by means of an implement such as a bucket on the end of a cable suspended from a crane, and deposit the material on a barge for transportation to a placement site. Dustpan and cutterhead dredges, in contrast, are hydraulic dredges that use a pump and either a cutterhead or high-pressure water jets to erode material and remove it from the bottom of a waterway, and then transport the dredged material through a pipeline to a placement site.
One of the largest dredge types, the hopper dredge, is a self-propelled ocean-going vessel that hydraulically dredges material and places it into the hold or “hopper” of the ship, where the material is stored while being transported; the material may then be released from the dredge into open water or pumped to a placement site. Dustpan and cutterhead dredges may work in shallower waterways and have the ability to maneuver in river traffic, whereas hopper dredges perform much of the dredging work in ports, harbors, and other coastal channels and waterways exposed to the ocean.

Corps headquarters and its 8 regional division offices generally provide guidance and policy oversight to 38 district offices located throughout the United States (see fig. 1). District offices are generally responsible for managing dredging projects located within their district boundaries, including planning, awarding, and administering maintenance dredging contracts with industry. The Corps owns and operates a small fleet of dredge vessels, but it relies mostly on contracts with industry for its maintenance dredging work. According to Corps officials, the Corps typically solicits fixed-price competitive bids from contractors. To help evaluate contractor bids, Corps district offices are to develop an independent government cost estimate for each contract solicitation. The estimates are to be developed using information on the costs of owning and operating dredges—such as acquisition, fuel, labor, and shipyard costs—along with information on the project for which the dredging is needed—including the amount and type of material to be removed, the distance from the dredging site to a placement site, and other factors that affect productivity such as environmental requirements.
In soliciting bids from contractors, Corps districts have most commonly used a sealed-bid process, resulting in a fixed-price contract between the Corps and the contractor, with the contract generally awarded to the lowest responsible bidder with a responsive bid that is no more than 25 percent above the government cost estimate. Corps officials noted that if the Corps uses a solicitation type other than sealed bidding, Corps districts generally have flexibility in determining the specific contract type to employ for their projects, and may choose other types, such as an indefinite delivery, indefinite quantity contract. An indefinite delivery, indefinite quantity contract is a type of delivery contract that provides for an indefinite quantity of supplies or services within stated limits, during a fixed period. The basic cost components of a maintenance dredging contract generally include (1) mobilization of the dredge and related equipment to the dredging site; (2) utilization of the dredge and related equipment to conduct the dredging, as well as other project-specific activities required under the contract, such as environmental monitoring; (3) transport of the material to a placement site, which can include among others, open water placement sites, confined placement facilities, or beneficial use sites, such as for building a wetland or renourishing a beach; and (4) demobilization of the dredge and related equipment. Each dredging project is unique and a number of factors influence the cost of these components across projects, including the type and quantity of material to be dredged, allowable locations for placement of material, timing, environmental requirements, and the location and weather conditions where dredging occurs. Much of the maintenance dredging the Corps undertakes is cyclical in nature, with dredging needed annually or every few years, according to Corps officials. 
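The sealed-bid award rule described above (the contract generally goes to the lowest responsible bidder with a responsive bid no more than 25 percent above the government cost estimate) can be illustrated with a short sketch; the bidder names and dollar amounts below are hypothetical.

```python
def select_winning_bid(bids, government_estimate, threshold=1.25):
    """Sketch of the sealed-bid award rule: return the lowest bid that is
    no more than 25 percent above the government cost estimate, or None
    if no bid qualifies. `bids` is a list of (bidder, amount) tuples, and
    each bid is assumed to be responsive and from a responsible bidder."""
    ceiling = government_estimate * threshold
    eligible = [(bidder, amount) for bidder, amount in bids if amount <= ceiling]
    if not eligible:
        return None  # all bids exceed the ceiling
    return min(eligible, key=lambda bid: bid[1])

# Hypothetical example: a $4.0 million government cost estimate.
bids = [("Dredger A", 5.2e6), ("Dredger B", 4.6e6), ("Dredger C", 3.9e6)]
winner = select_winning_bid(bids, 4.0e6)
print(winner)  # Dredger A's bid exceeds 125% of the estimate and is screened out
```

In this illustration the ceiling is $5.0 million, so the $5.2 million bid is ineligible and the lowest remaining bid wins.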
A limited number of companies have conducted the majority of maintenance dredging contracted by the Corps. Industry data provided by the Dredging Contractors of America indicate that nationwide, during fiscal years 2004 through 2013, an average of about 50 companies were awarded one or more dredging contracts by the Corps annually, though over 50 percent of the contracts, on average, were awarded to 8 companies. According to Corps and industry information, the ownership and operating costs of dredges often require large capital outlays to cover fixed costs such as equipment, insurance, and depreciation, as well as variable costs such as payroll for crews, fuel, and equipment repairs and upgrades—and therefore it may be difficult for companies to quickly enter the dredging market. Through its dredging database, the Corps maintains data on its dredging projects, including all maintenance contracts. Information in the database is used for a variety of purposes, including tracking anticipated and actual project scheduling information, and tracking information across contracts on anticipated and actual contract costs and quantities of material dredged. For each contract, the dredging database includes data elements to capture information on the project name, status, dredging location, government cost estimate, type of contract, type of dredge used, number of bidders, winning bidder, bid amounts, estimated quantities of material dredged, and final contract costs and actual quantities of material dredged after the contract is complete. The database also contains data elements for specific cost components, such as mobilization and demobilization costs, as well as data on the location and types of placement sites used. District offices are responsible for entering data into the database for the contracts they manage, and the database is overseen by Corps headquarters. 
Cost data in the Corps’ dredging database are unreliable and, therefore, the total costs of maintenance dredging contracts during fiscal years 2004 through 2013 are unclear, but Corps officials report that multiple factors likely contributed to cost changes during this period. The Corps relies on data from its dredging database for assessing trends in maintenance dredging contract costs over time, among other things, but we found that many of the records in the database did not contain information on final costs or actual quantities of material dredged. Corps headquarters officials said they review some data in the dredging database monthly and generally notify district offices when they identify errors or omissions, but corrections may not always be made by the districts. We found that Corps districts do not have systematic quality control measures in place to ensure the data are complete and accurate; rather, the district offices have taken various approaches to entering cost and cost-related data into the database. Through our interviews with Corps officials and review of a sample of projects, we found that multiple factors, such as the level of competition for contracts and the need to comply with environmental requirements, likely contributed to changes in maintenance dredging contract costs during the period of our review. The total costs of maintenance dredging contracts during fiscal years 2004 through 2013 are unclear because data in the dredging database are unreliable. Specifically, of the 1,405 contract records in the database that were marked as “complete,” we found that about 19 percent (264 out of 1,405) did not contain information on the final contract costs or the actual quantity of material dredged. 
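A completeness check of the kind described above can be sketched as follows. The record layout and field names are assumptions made for illustration, not the actual Dredging Information System schema.

```python
# Flag "complete" contract records that lack final cost or actual
# quantity data, and compute the share that are incomplete.
# Field names and values are illustrative, not Corps data.
records = [
    {"id": 1, "status": "complete", "final_cost": 1_000_000, "actual_cy": 80_000},
    {"id": 2, "status": "complete", "final_cost": None,      "actual_cy": 50_000},
    {"id": 3, "status": "complete", "final_cost": 750_000,   "actual_cy": None},
    {"id": 4, "status": "active",   "final_cost": None,      "actual_cy": None},
]

complete = [r for r in records if r["status"] == "complete"]
missing = [r["id"] for r in complete
           if r["final_cost"] is None or r["actual_cy"] is None]
pct_missing = 100 * len(missing) / len(complete)

print(missing)                # [2, 3]
print(f"{pct_missing:.0f}%")  # 67%
```

Applied to the 1,405 records marked "complete" in the Corps' database, a check like this is what surfaced the roughly 19 percent of records lacking final cost or quantity information.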
In addition, for those 1,141 contract records marked complete that had final contract cost and actual quantity information entered, we found instances where other related contract information was incomplete, including the following: About 20 percent (224 out of 1,141) of the records did not contain a contract number, contractor identification number, or contract award date, raising questions about the validity of these records overall. About 7 percent (75 out of 1,141) of the records did not have costs for mobilization and demobilization specified, and it was not clear whether these cost components had been entered into the database. We also identified anomalies that raised questions about the accuracy of some of the cost and cost-related information in the database. Specifically, in analyzing the data to determine the cost per cubic yard of dredging during fiscal years 2004 through 2013, we found wide variation: the cost per cubic yard ranged from $0.03 to $1,736, with an average of $16.08 across the 1,141 records marked as complete and containing final contract cost and actual quantity information. In comparison, through its analysis of dredging costs, the Corps has reported that, over this same period, the cost of maintenance dredging, which included work conducted both by Corps-owned dredges and through contracts, was an average of $4.12 per cubic yard. In further examining the cost data in the dredging database, we identified several contract records that could contain incorrect information, potentially explaining the wide variation in the cost per cubic yard across the 10-year period and potentially skewing the average cost per cubic yard, including the following: One contract record showed the Corps paying a final contract cost of almost $1.1 million for about 3,900 cubic yards of material dredged, at a cost per cubic yard of $282. 
Upon further review of notes contained within the database for the record, however, we found that the quantities listed in the record likely represented the number of hours the dredge operated, rather than cubic yards of material dredged. Another contract record indicated that the final contract cost was $875,104 for 504 cubic yards of material dredged, or $1,736 per cubic yard dredged, more than 400 times the average cost per cubic yard for other complete records in the database. One contract record indicated that a contractor bid $1.1 million to dredge 2,258 cubic yards, at a cost of $487 per cubic yard. The final contract cost entered, however, indicated that $2,484 was paid for dredging 2,258 cubic yards of material, or about $1.10 per cubic yard, calling into question the accuracy of the cost amounts entered for this record. Corps headquarters officials said they have taken several steps to encourage the district offices to enter complete and accurate information into the dredging database, but they acknowledged that updates or corrections may not always be made by the district offices. The Corps’ dredging database user guide provides detailed instructions for what information should be entered for each data element in the database at the different points along the contracts’ development and execution. Corps headquarters officials told us that they run monthly database queries designed to test for errors and omissions across various data elements and that they may notify individual district offices via e-mail regarding incomplete information. Headquarters and division officials said that they also emphasize the importance of the data to districts before national and regional dredging meetings and send out e-mail reminders or contact district offices by phone asking them to ensure dredging data are updated before these meetings take place. 
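Error-testing queries of the kind headquarters officials describe could screen automatically for unit-cost anomalies like those above. The sketch below flags records whose implied cost per cubic yard falls outside a plausible range; the bounds, field names, and record structure are illustrative assumptions, not the Corps' actual queries.

```python
# Flag contract records with an implausible implied cost per cubic yard.
# LOW/HIGH bounds are hypothetical screening thresholds, not Corps policy.
LOW, HIGH = 1.0, 100.0  # assumed plausible $/cubic yard range

records = [
    {"id": "A", "final_cost": 1_100_000, "actual_cy": 3_900},    # ~$282/cy
    {"id": "B", "final_cost": 875_104,   "actual_cy": 504},      # ~$1,736/cy
    {"id": "C", "final_cost": 800_000,   "actual_cy": 100_000},  # $8/cy
]

flagged = []
for r in records:
    unit_cost = r["final_cost"] / r["actual_cy"]
    if not LOW <= unit_cost <= HIGH:
        flagged.append((r["id"], round(unit_cost, 2)))

print(flagged)  # [('A', 282.05), ('B', 1736.32)]
```

Records flagged this way would still need manual review, as with the record above whose "quantity" turned out to be dredge operating hours rather than cubic yards.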
Headquarters and division officials further explained, however, that it is the district offices that are responsible for entering and maintaining data in the dredging database for their respective contracts. Headquarters officials said they generally check to see if updates they request are made, but the officials emphasized that the responsibility for making updates resides with the district offices, and that updates may not always be made by the district offices. In discussing the dredging database with Corps division and district officials, we found that the district offices have taken various approaches to entering cost and cost-related data into the database. The dredging database user guide specifies how contract-related information is to be entered, but the Corps does not have agency-wide guidance specifying steps the districts should take to verify and ensure the completeness and accuracy of the data. Officials from most of the 12 district offices we spoke with said that they assign one person to enter data into the dredging database and that having a single person enter all the data is an important quality control step and helps ensure that data are entered in a consistent manner. On the other hand, officials from 4 district offices said they have the data reviewed by someone else to verify the data’s completeness and accuracy. In addition, officials from 5 of the 12 district offices we spoke with said that entering cost data into the database has not been a high priority because they use other systems or methods to maintain cost data for the contracts they manage. For example, officials from 4 district offices told us they maintain spreadsheets to track cost and other related information for the projects they manage in their district; according to these officials, these spreadsheets allow them to maintain detailed information in a more accessible and user-friendly manner than the information in the dredging database. 
Moreover, officials from 7 district offices told us that they primarily use the database for planning and scheduling upcoming dredging work, and thus entering scheduling information when preparing a solicitation for a contract may be a higher priority than entering final costs and quantities when the contract is complete. Corps headquarters officials said that, based on their observations of dredging database records, district offices have made improvements in entering information into the database over the last several years, but they acknowledged that some of the data may be of limited quality. Officials told us that having complete and accurate data in the database, including data on final contract costs and actual quantities of material dredged, is important for managing contract costs over time, and that they rely on data in the database to assess various trends. For example, officials stated that they use data from the dredging database to assess how the numbers of bids may be influencing the prices bid by contractors, how government cost estimates compare with bid prices, how final contract costs compare with government cost estimates or bid prices, and the extent to which there may be patterns or unexplained variations in the cost of dredging on a per cubic yard basis over time. One headquarters official further said that the Corps continuously looks for ways to increase competition for its maintenance dredging contracts and therefore seeks data to help understand factors affecting competition. For example, headquarters officials said they review scheduling data in the database on a weekly basis to try to help increase the number of contractors available to bid on upcoming work, which could in turn encourage lower contract bid prices. 
Federal internal control standards indicate that managers should maintain quality information, including accurate and complete operational and financial data, for the effective and efficient management of their operations. The Department of Defense’s Financial Management Regulation also requires that relevant and reliable information related to program costs be provided to program managers so that management can use the information for decision making. Without systematic quality controls at the district-office level to regularly verify the completeness and accuracy of their maintenance dredging contract data, the Corps risks undertaking analyses on incomplete information, and may be drawing conclusions about cost trends based on unreliable information. Furthermore, without complete information, the Corps may be missing opportunities to identify cost elements contributing to contract costs, changes in costs over time, or other factors important to the management of maintenance dredging contracts. Through our interviews with Corps officials and review of a sample of dredging projects, we found that multiple factors likely contributed to changes in contract costs during fiscal years 2004 through 2013. Corps officials across many of the headquarters, division, and district offices we spoke with, as well as representatives from the dredging industry, said that during this period they believed the cost of dredging had increased for many maintenance projects. Factors that Corps officials we interviewed commonly cited as likely contributing to changes in contract costs over the 10-year period of our review included the following: Weather conditions and other natural events, such as hurricanes, greatly influence the location, type, and volume of material that may need to be dredged from one dredging cycle to the next, which may affect the size and scope of the work and in turn the total cost of the contract. 
Federal funding available may affect the amount of dredging to be performed for particular projects, and reducing the scope of maintenance projects may contribute to higher costs on a per cubic yard basis for some contracts because dredging smaller volumes of material may result in less efficient use of dredge equipment, given the fixed costs associated with maintaining and operating dredge equipment. Labor, fuel, and steel prices may represent a large portion of the cost to a contractor in conducting dredging, and fluctuations in the market prices for these inputs may influence contractors’ bids for contracts. Competition (the number of contractors available to bid on and conduct the work) may also affect bid prices, and during times of high demand for dredging, the number of contractors available to bid on work may be limited, which could in turn lead to higher bid prices. Material placement costs, which are influenced by the nature of the material, the type of placement method used, and the location where the material is placed, may affect contract costs, with farther placement sites generally being more costly because of the additional time, fuel, and equipment needed to transport the material. Environmental requirements and dredging windows (requirements that specify the time of year when dredging may occur at a particular location) may affect contract costs, such as by requiring the use of enhanced dredging equipment or other equipment, such as trawlers to monitor for sea turtles or other threatened or endangered species; restricting dredging to certain times of the year, when contractor availability may be limited; or requiring contractors to conduct work during times of the year when conditions may be more severe, potentially making dredging operations more dangerous and less efficient. In general, Corps officials we interviewed said it is difficult to discern which of these various factors may have led to specific cost increases for a particular contract. 
For example, officials from several districts we spoke with said that dredging windows have limited their ability to schedule work to maximize contractor availability, resulting in fewer bids and higher bid prices for some contracts. Additionally, some district officials told us that dredging windows have also led to dredging during times of the year when weather conditions have made dredging more dangerous or more difficult, increasing the risk to contractors, which in turn may have contributed to higher bid prices. Officials further explained, however, that though these factors likely influenced changes in contract costs, they could not determine by how much. However, in one instance, Corps officials identified how certain factors led to cost increases for a particular contract. Specifically, for one project we reviewed, contract costs rose when the traditional placement site reached capacity in 2011, and the new state-run placement site that the Corps began using levied a fee, on a per cubic yard basis, for material placed there. This fee added an average of about $8 per cubic yard of material to the annual dredging contract starting in fiscal year 2012, resulting in an increase of more than $2 million to the total cost of the contract that year. Officials from Corps district offices we spoke with reported undertaking various approaches to manage maintenance dredging contract costs, largely on a project-by-project basis. Corps officials explained that, because each dredging project is unique, a one-size-fits-all approach for developing and executing contracts cannot be taken. Rather, district offices have the flexibility to manage their dredging contracts, including taking various approaches to manage costs. Several Corps officials noted that identifying approaches for managing their maintenance contracts has been especially important over the last several years because of increases in costs, as well as flat or reduced funding for some projects. 
We found that the district offices commonly cited approaches relating to combining contracts, using alternative contract types, and changing the specifications of the contract. Corps officials from 11 of the 12 district offices we interviewed said that they have combined work under one or more projects that had historically had separate contracts into a single contract in an effort to manage costs. Combining contracts can result in reduced administrative, mobilization, and demobilization costs and, in some instances, a lower unit price per cubic yard, according to Corps officials. The officials explained that, in general, the larger the quantity of material included in a contract, the lower the price may be on a per cubic yard basis because contractors are able to spread out their fixed costs. For example, since fiscal year 2012, Corps districts on the West Coast have combined some of their hopper dredging work into one regional contract. Contractors with hopper dredges primarily work on the East and Gulf Coasts and mobilizing a hopper dredge from those areas for dredging on the West Coast can be costly given the distance the dredge must travel, according to Corps officials. Officials estimated that combining the hopper dredge work across projects from several West Coast districts saved up to $7 million annually by having a single hopper dredge mobilize and demobilize once instead of multiple dredges for individual contracts. In another district on the East Coast, in fiscal year 2013, the district combined into one contract the dredging for a coastal storm damage reduction project with a nearby maintenance dredging project, with officials estimating that the cost per cubic yard and mobilization costs—about $1.5 to $2 million—were less than what they may have been had the work been completed under two separate contracts. 
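The officials' reasoning that combining contracts lowers the per-cubic-yard price follows directly from spreading fixed costs over a larger quantity. The sketch below illustrates that arithmetic; all dollar figures are hypothetical assumptions, not figures from the Corps' contracts.

```python
# Why combining contracts can lower the unit price: fixed costs such as
# mobilization and demobilization are spread over more cubic yards.
# All figures are hypothetical.
def unit_price(fixed_costs: float, variable_per_cy: float,
               quantity_cy: float) -> float:
    """Total cost per cubic yard for a contract of the given size."""
    return variable_per_cy + fixed_costs / quantity_cy


FIXED = 400_000.0  # assumed mobilization/demobilization and overhead
VAR = 5.0          # assumed variable cost per cubic yard

small = unit_price(FIXED, VAR, 50_000)      # separate small contract
combined = unit_price(FIXED, VAR, 200_000)  # work combined into one contract

print(small, combined)  # 13.0 7.0
```

The same logic explains the West Coast hopper dredge example: paying one mobilization and demobilization instead of several yields savings even before any per-yard efficiency gains.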
Before combining contracts, Corps district officials said they consider a variety of factors (such as contracting regulations and requirements, the nature of the project, dredging windows and other timing needs, opportunities for small businesses to bid on the work, and availability of funding) and that combining contracts may be feasible only in limited instances. For example, because additional planning may be needed, it may not be feasible to combine contracts for projects with time-sensitive needs, according to the officials. Some Corps district officials noted that 2013 revisions to Department of Defense contracting regulations have affected the process for combining some contracts. Under the revisions, if the total combined value of the contract is $2 million or above, the Corps districts must, among other things, have an acquisition strategy that includes market research and identifies alternative contracting approaches, and must obtain approval for the contract from a division-level senior procurement executive. Previously, approval for combining contracts was not required at the division level unless the contract value was at least $6 million. Some district officials told us that these additional steps can add to the contract preparation time and review process and, as a result, may preclude districts from combining contracts for projects with time-sensitive dredging needs. In conjunction with combining contracts, some Corps district officials said that they have shifted from using fixed-price contracts to employing alternative contract types to help manage contract costs. For example, officials from a Gulf Coast district told us that, since fiscal year 2012, they have employed an indefinite delivery, indefinite quantity contract to help manage the costs of maintenance work in their district, instead of multiple fixed-price contracts. 
According to the officials, this contracting type provided flexibility related to the amount of material that could be dredged under the contract, as well as the timing of when dredging could occur. The district officials explained that given the dynamic nature of some of their projects, it was challenging to identify specific quantities and locations of material to be dredged, information that is required in advance of planning and executing a fixed-price contract. District officials said that using an indefinite delivery, indefinite quantity contract allowed the district to issue task orders for dredging needs as they arose across areas specified in the contract because, under the terms of the contract, a contractor would be available to conduct dredging as needed during the period outlined in the contract. By combining the district’s work into one indefinite delivery, indefinite quantity contract, district officials estimated saving approximately $670,000 in mobilization and demobilization costs annually because of the need to pay for these costs under one contract, instead of for three individual contracts. Other district officials told us they have begun using multiple award task order contracts, in part, to help manage contract costs. Under multiple award task order contracts, officials said they can have a contractor undertake needed maintenance dredging quickly because, under this contracting type, contractors are preapproved and, once approved, can bid on maintenance work in a more streamlined manner than the solicitation process generally followed under a typical fixed-price contract. Officials in a Corps district on the East Coast said that, after the 2004 and 2005 hurricane seasons, working under a fixed-price contract, which generally takes about 45 days to solicit bids and identify a winning bidder, did not allow them to quickly respond to the substantial time-sensitive dredging needs that the hurricanes had caused. 
The district then decided to begin combining dredging for some of its projects into multiple award task order contracts, which provided them flexibility in scheduling the work and, according to the officials, reduced the time needed to award a contract by about 30 days. District officials estimated that by combining dredging from 17 projects into 7 multiple award task order contracts over the 3-year period covering fiscal years 2010 through 2012, they reduced the mobilization and demobilization costs for the work by approximately $18.8 million. Some Corps officials and industry representatives we spoke with, on the other hand, said there are trade-offs in using multiple award task order contract types. They explained that, from a contractor’s perspective, multiple award task order contracts may be perceived as more risky than the typical sealed-bid process followed by a fixed-price contract because, among other things, less information may be available to contractors, including information on other bidders and their bid prices. According to Corps officials and industry representatives, higher risk may be reflected in higher bids. Additionally, they said that, under multiple award task order contracts, notification of the winning bidder is not made immediately, as it typically is under a sealed-bid solicitation process, and, therefore, contractors must wait before bidding on other contracts, potentially affecting their ability to bid on other dredging work. Several Corps district officials also said that they alter the specifications or extend the time frames of maintenance dredging contracts, where feasible, to manage costs. For instance, Corps officials from a few districts said that, in specifying the dredging requirements of a project, they may emphasize performance requirements and not necessarily the type of equipment needed to achieve those requirements. 
Officials in a Gulf Coast district said that, for one maintenance contract in fiscal year 2013, they did not specify a required dredge type in the solicitation. The officials explained that because of the lower amount of material to be dredged that year compared with past years, there was flexibility related to the type of dredge that could be used, and by opening up bid solicitations to contractors with multiple dredge types, a lower bid price could result from the potentially higher number of bidders. A contractor with a pipeline dredge had been used over the preceding 10 years but, in fiscal year 2013, a contractor with a hopper dredge—a dredge type that district officials said could operate at a lower cost than a pipeline dredge for that project—was awarded the contract for about $2 million less than past contracts. In other instances, Corps district officials said they have used multiyear contracting to conduct dredging work over more than one dredging cycle. Officials in a Pacific Northwest district told us that in past years, they awarded single-year maintenance dredging contracts for one project that needs annual dredging. Since fiscal year 2008, district officials said they employed a 1-year contract, but with the option to extend it up to 2 additional years. Structuring the contract in this way provided the district the ability to change contractors if the current contractor was performing poorly, by not exercising the next year’s option. District officials were not able to estimate specific savings from this approach, but they said that extending the contract to 3 years stabilized the mobilization and demobilization costs because the contractor kept the dredge equipment in the area to carry out the entire contract, though keeping the equipment in the area was not a contract requirement. 
Officials from this district also noted, however, that multiyear contracts carry more risk for contractors because the contractors have to forecast fuel prices and other costs for the duration of the contract, which can in turn lead to higher bid prices than if the contract was for a single year. In addition, Corps officials across all the offices we spoke with said they share lessons learned and seek opportunities to learn about approaches that might help them better manage contract costs through a variety of formal and informal coordination efforts. Several Corps district officials said they participate in regional dredging teams that meet on a weekly, monthly, or quarterly basis where they discuss dredging schedules, contracting approaches, and dredging techniques and technologies, among other things. Districts that dredge the Mississippi River, for example, participate in a regional dredging team where they meet weekly to discuss the scheduling of some of their respective projects and to combine work where feasible. Corps headquarters also holds annual national dredging meetings, both internally and with industry, and a number of Corps district offices we spoke with said these meetings present regular opportunities to share or learn about cost-effective approaches others may be taking. Additionally, officials from several Corps districts said that for some projects—especially those that may be more complex or less routine in nature—they invite industry contractors to meet with them to discuss upcoming dredging needs. For example, officials from one East Coast district office said the district has held “industry days” since 2012 in advance of soliciting contracts for annual maintenance dredging in a harbor that includes multiple inner channels, to obtain industry input on structuring the order of dredging and material placement so as to efficiently complete dredging needs across these channels, among other things. 
Dredging is a vital part of keeping the nation’s ports, harbors, and other waterways open for safe and efficient navigation and for the passage of import and export cargo crucial to commerce. The Corps removes millions of cubic yards of material from these waterways annually, relying mainly on contractors to do this work. Over the past decade, the Corps has reported that the cost of dredging activities has risen while the amount of material dredged has fallen. Recognizing the need to dredge efficiently, the Corps has reported taking some approaches, such as combining contracts, to manage the costs associated with maintenance dredging contracts. The Corps uses data from its dredging database to assess trends in costs and quantities dredged for its maintenance contracts. The Corps has measures in place at headquarters to review data in the database, but these measures alone have not been effective in ensuring that the Corps has reliable data. Because Corps district offices are not consistently populating the database, and because the district offices do not have systematic quality controls to regularly verify the completeness and accuracy of their dredging data, the Corps may have an incomplete picture of the costs of its maintenance dredging contracts. As a result, the Corps risks undertaking analyses and drawing conclusions based on unreliable information, and may be missing opportunities to identify factors important to the management of maintenance dredging, such as cost elements contributing to changes in costs over time, or additional areas where it could take further actions to manage costs. To help ensure the completeness and accuracy of cost and cost-related data for maintenance dredging contracts in the Corps’ Dredging Information System database, we recommend that the Secretary of Defense direct the Director of Civil Works of the U.S. 
Army Corps of Engineers to require that its district offices establish systematic quality controls to regularly verify the completeness and accuracy of their maintenance dredging contract data, including processes for ensuring that corrections are made when errors or omissions may be identified, such as through headquarters reviews. We provided a draft of this report to the Department of Defense for review and comment. In its written comments, reproduced in appendix II, the Department of Defense concurred with our recommendation. It stated that the Corps’ dredging database is not unique among database systems in facing challenges with data quality and completeness. The Department said that the Corps’ Director of Civil Works will direct district offices to establish systematic quality controls to regularly verify the completeness and accuracy of their maintenance dredging contract data, including processes for ensuring that corrections are made when errors or omissions may be identified through major subordinate commands (i.e., division offices) and headquarters reviews. The Department of Defense also provided technical comments that we incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Defense, the Director of Civil Works of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) agency data available about the U.S. Army Corps of Engineers (Corps) total costs of maintenance dredging contracts, and factors that contributed to any changes, during fiscal years 2004 through 2013, and (2) approaches the Corps reports it has undertaken to manage maintenance dredging contract costs. For both objectives, we reviewed relevant laws, regulations, and Corps policy and guidance related to maintenance dredging and the development and execution of maintenance contracts. We conducted interviews with, and obtained documentation from, officials from Corps headquarters, 7 division offices, and 12 district offices (out of a total of 8 division and 38 district offices, respectively). We selected this nongeneralizable sample of Corps offices to represent various geographic regions and a range of maintenance dredging work carried out by the districts (relating to estimated numbers of contracts employed and estimated contract costs and quantities of material dredged). We conducted interviews with navigation managers, contracting officials, project managers, engineers, and other officials from the following Corps division and district offices: Division offices: Great Lakes and Ohio River, Mississippi Valley, North Atlantic, Northwestern, South Atlantic, South Pacific, and Southwestern. District offices: Baltimore, Buffalo, Galveston, Jacksonville, Mobile, New England, New Orleans, New York, Norfolk, San Francisco, Seattle, and Wilmington. 
We also interviewed officials from the Dredging Contractors of America, a national association that represents the dredging industry, as well as industry representatives from five dredging companies that participated in our interviews, about their views on factors that contributed to any changes in maintenance dredging contract costs and on contracting approaches the Corps has undertaken to manage maintenance dredging contract costs. To examine agency data available about the total costs of maintenance dredging contracts, and factors that contributed to any changes, during fiscal years 2004 through 2013, we reviewed dredging data collected for those fiscal years by the Corps through its dredging database, the Dredging Information System, and Corps documentation related to the database, including a database user’s guide and data dictionary. Our analysis included 2,227 contract records labeled in the dredging database as maintenance dredging contracts having a “bid open” date (the date when a bid for a solicitation is opened and the Corps determines whether it can award a contract for a given project based on the bids received) during fiscal years 2004 through 2013. These contract records included maintenance dredging (about 99 percent) and maintenance and construction work combined (about 1 percent). According to the data, 1,405 of these maintenance contracts were completed during fiscal years 2004 through 2013, with an average of approximately 140 contracts completed annually. To assess the reliability of the data elements needed to conduct our review—including final contract costs, actual quantity of dredged material, and other related contract information—we performed electronic testing of the data elements (such as looking for missing values or outliers), reviewed related documentation, and interviewed agency officials knowledgeable about the data. 
Specifically, we interviewed officials from the Corps headquarters Navigation Data Center who oversee the dredging database, and we interviewed officials from the 12 selected Corps district offices about their offices' processes for entering and updating data for their respective maintenance dredging contracts. We concluded that the data were not sufficiently reliable for the purposes of reporting information on total costs and quantities of maintenance dredging contracts. We also explored using other data to determine Corps maintenance dredging contract costs, but we were unable to use other data sources because complete information for all contracts was not available from these sources. Specifically, we sought information from the Federal Procurement Data System-Next Generation, the Corps of Engineers Financial Management System, and the Corps Resident Management System (a system to manage construction contracts). With regard to the Federal Procurement Data System-Next Generation, we obtained data on Corps contracts from fiscal years 2004 through 2013 that were coded as "dredging" and attempted to separate out maintenance-related dredging contracts. However, we were unable to identify a subset of maintenance contracts given the number of dredging contract codes, as well as the varying contract descriptions. In addition, the Corps of Engineers Financial Management System and the Corps Resident Management System did not contain data in such a way that costs for all maintenance contracts could be broken out from other cost information. Additionally, to examine factors that contributed to any changes in contract costs during fiscal years 2004 through 2013, we interviewed the selected Corps division and district offices and reviewed a nongeneralizable sample of four recurring maintenance dredging projects.
We selected the following projects to reflect geographic variation and a range of contract sizes, based on data from the dredging database on the total estimated cost of the contract and the total estimated quantity of material dredged: Atchafalaya River Basin, Gulf Intracoastal Waterways, and Miscellaneous Project, located in Southern Louisiana; Baltimore Harbor Project, located in Baltimore, Maryland; Lorain Harbor Project, located in Lorain, Ohio; and Palm Beach Harbor Project, located in West Palm Beach, Florida. For each of the projects, we reviewed contract information and other supporting documentation to identify key cost components for the projects and determine to the extent possible how, if at all, various cost components contributed to any changes in maintenance costs for contracts executed across the time period of our review. Specifically, we examined estimated and final contract costs, estimated and final quantities of material dredged, and various cost components in the contracts across different years, such as mobilization, demobilization, and material placement costs. To examine approaches the Corps reports it has undertaken to manage maintenance dredging contract costs, we interviewed officials from Corps headquarters and the selected division and district offices and reviewed related documentation. Specifically, during our interviews across Corps offices, we asked Corps officials to identify approaches they have undertaken to manage maintenance dredging contract costs. We then requested and reviewed supporting documentation when officials identified specific approaches they indicated were cost-effective, including reports, studies, memorandums, or other documentation developed to estimate potential cost savings achieved as a result of a particular approach.
Information obtained from our interviews with Corps officials and industry representatives and from the projects we reviewed cannot be generalized to those officials, representatives, or maintenance projects we did not interview or review. However, we believe our interviews and review of a sample of projects provided important insights into factors that may have contributed to changes in contract costs over the 10-year period, as well as approaches the Corps has undertaken to manage maintenance dredging contract costs. We conducted this performance audit from June 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual listed above, Alyssa M. Hundrup, Assistant Director; Hiwotte Amare; Arkelga Braxton; Stephanie Gaines; Cindy Gilbert; Richard P. Johnson; Julia Kennon; Michael Krafve; Gerald Leverich; Kirk D. Menard; Mehrzad Nadji; Cynthia Norris; and Tatiana Winger made key contributions to this report.
The Corps maintains navigation for thousands of miles of waterways and hundreds of ports and harbors. The Corps conducts maintenance dredging primarily under contract with private industry to remove sediment from waterways. Maintenance dredging is often cyclical in nature, with dredging needed annually or every few years. GAO was asked to review the Corps' maintenance dredging contract costs. This report examines (1) agency data available about the total costs of maintenance dredging contracts, and factors that contributed to any changes, during fiscal years 2004 through 2013, and (2) approaches the Corps reports it has undertaken to manage maintenance dredging contract costs. GAO reviewed laws, regulations, and Corps guidance; analyzed cost data from the Corps' dredging database for fiscal years 2004-2013 and assessed the reliability of these data; reviewed a nongeneralizable sample of four projects selected to reflect geographic variation and a range of contract sizes; reviewed documentation on approaches to manage costs; and interviewed Corps officials from headquarters, divisions, and districts (selected for geographic variation and range of dredging work) and dredging industry stakeholders. Cost data in the U.S. Army Corps of Engineers' (Corps) dredging database are unreliable and, therefore, the total costs of maintenance dredging contracts during fiscal years 2004 through 2013 are unclear. In particular, about 19 percent (264 out of 1,405) of the contract records marked as "complete" did not contain information on the final contract costs or the actual quantity of material dredged. The Corps relies on cost data from its dredging database to assess trends in maintenance dredging contract costs over time, among other things, but its district offices do not have systematic quality control measures in place to ensure these data are complete and accurate.
Federal internal control standards indicate that managers should maintain quality information, including accurate and complete operational and financial data, for the effective and efficient management of their operations. Without systematic quality controls at the district-office level to regularly verify the completeness and accuracy of their maintenance dredging contract data, the Corps risks undertaking analyses on incomplete information, and drawing conclusions about cost trends based on unreliable information. Multiple factors likely contributed to changes in contract costs during fiscal years 2004 through 2013, according to Corps officials. Corps officials, as well as representatives from the dredging industry, told GAO that during this period they believed the cost of dredging had increased for many maintenance projects. However, Corps officials said that it is difficult to discern which factors may have led to specific cost increases for a particular contract given the many factors that influence the cost of a contract. Factors that Corps officials commonly cited as likely contributing to changes in contract costs over the 10-year period included the number of contractors available to bid on the work; fluctuations in the market prices for labor, fuel, and steel; and the costs for transporting dredged material to a placement site, with farther placement sites generally being more costly because of additional time, fuel, and equipment needed to transport the material. Corps districts reported undertaking various approaches to manage maintenance dredging contract costs, largely on a project-by-project basis because of the unique nature of each project. For example, officials from 11 of 12 Corps district offices interviewed said they have combined work under one or more projects that had historically had separate contracts into a single contract to help manage costs. 
In combining contracts, Corps district officials estimated reducing total mobilization costs—the costs to transport dredge equipment—based on the need to mobilize dredge equipment once under a combined contract, instead of multiple times for individual contracts. For example, Corps officials estimated that combining dredge work across projects from several West Coast districts saved up to $7 million annually in mobilization costs. Corps officials pointed out, however, that combining contracts may not always be feasible, such as when projects have time-sensitive dredging needs. Additionally, officials from a few district offices said that, in specifying the dredging requirements for a project, they may emphasize performance requirements and not necessarily the type of equipment needed to achieve those requirements, which may result in an increase in the number of contractors available to bid on the work and, therefore, more competitive bids. GAO recommends that the Corps require that its district offices establish systematic quality controls to regularly verify the completeness and accuracy of maintenance dredging contract data. The Department of Defense concurred with the recommendation.
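The mobilization savings that officials described follow from paying to transport dredge equipment once rather than once per contract. A simplified sketch of that arithmetic, using hypothetical figures rather than Corps estimates:

```python
# Simplified illustration of mobilization savings from combining contracts.
# The per-mobilization cost and number of projects are hypothetical assumptions.

mobilization_cost = 1_500_000  # assumed cost to transport dredge equipment once
num_projects = 4               # projects historically dredged under separate contracts

separate_total = num_projects * mobilization_cost  # one mobilization per contract
combined_total = mobilization_cost                 # one mobilization for the combined contract
savings = separate_total - combined_total

print(savings)  # → 4500000
```

Under these assumed figures, savings grow linearly with the number of contracts combined, which is consistent with the officials' observation that the approach pays off most when several nearby projects can share a single mobilization.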
In 1995, we reported on management and technical weaknesses with IRS’ tax systems modernization that jeopardized its successful completion and made over a dozen recommendations to correct the weaknesses. Because of the seriousness of the weaknesses, we placed the modernization on our 1995 list of high-risk federal programs. In June 1996, we reported that IRS had made progress in implementing our recommendations. However, to minimize the risk of IRS investing in systems before the recommendations were fully implemented, we suggested that the Congress limit IRS’ information technology (IT) spending to certain cost-effective categories. These spending categories were those that (1) support ongoing operations and maintenance, (2) correct pervasive management and technical weaknesses, such as a lack of requisite systems life cycle discipline, (3) are small, represent low technical risk, and can be delivered in a relatively short time frame, or (4) involve deploying already developed systems that have been fully tested, are not premature given the lack of a complete systems architecture, and produce a proven, verifiable business value. The act providing IRS’ fiscal year 1997 appropriations limited IRS’ IT spending to efforts consistent with these categories. In 1997, we again included the modernization on our high-risk list because IRS had not yet implemented our recommendations. However, we also reported that IRS had made progress on the recommendations. For example, in May 1997, IRS issued its modernization blueprint. This blueprint consisted of four principal components: (1) a systems life cycle, (2) business requirements, (3) functional and technical architectures, and (4) a sequencing plan. We briefed IRS appropriations and authorizing committees on the results of our assessment of IRS’ Modernization Blueprint in September 1997. 
In those briefings and in a subsequent report, we concluded that the Modernization Blueprint was a good first step that provided a solid foundation from which to define the level of detail and precision needed to effectively and efficiently build a modernized system of interrelated systems. However, we also noted that the blueprint was not yet complete and did not provide enough detail for building and acquiring new systems. As a result, IRS’ fiscal year 1998 appropriations act again limited IRS’ fiscal year spending to efforts that were consistent with the aforementioned spending categories. The act providing IRS’ fiscal year 1999 appropriations continued these spending limitations. In its fiscal year 1998 and 1999 budget requests, IRS requested over $1 billion for its ITIA account, and the Congress provided $506 million for the account. Specifically, it appropriated $325 million in fiscal year 1998, $30 million of which was rescinded in May 1998 for urgent Year 2000 requirements. The Congress also provided $211 million in fiscal year 1999. In providing these sums, the Congress limited IRS’ ability to obligate them until IRS and the Treasury submitted to the Congress for approval an expenditure plan that, as stated in the law, (1) implements the IRS Modernization Blueprint, (2) meets OMB investment guidelines, (3) is reviewed and approved by IRS’ Investment Review Board, OMB, and Treasury’s IRS Management Board and is reviewed by GAO, (4) meets requirements of IRS’ life cycle program, and (5) is in compliance with acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. IRS is not requesting any ITIA funds for fiscal year 2000 but is asking for $325 million for fiscal year 2001. In our April 1999 testimony, we reported this request was not adequately justified and suggested that the Congress not provide the funds until IRS provided the support. 
In December 1998, IRS awarded its Prime Systems Integration Services (PRIME) contract for systems modernization. According to IRS, it planned to “partner” with the PRIME contractor, among other things, to (1) complete the modernization blueprint, as we recommended, and (2) account for changes in systems requirements and priorities caused by IRS’ organizational restructuring, new technology, and IRS Restructuring and Reform Act of 1998 requirements. In addition, IRS stated that it planned to establish disciplined life cycle management processes and structures and mature software development and acquisition capabilities before it begins building modernized systems. Because of the modernization’s high cost and importance, we continued in 1999 to categorize it as a high-risk federal program. To comply with its statutory mandate to submit an expenditure plan to the Congress before obligating ITIA funds, IRS has developed a strategy where, in lieu of a single plan, it intends to develop and provide to the Congress a series of expenditure plans over the life of the modernization. This expenditure plan strategy is a by-product of the Commissioner’s overall approach to the modernization, which is to incrementally invest in modernized systems in accordance with (1) rigorous systems and software life cycle management processes and (2) a revised sequencing plan for migrating from IRS’ legacy systems and master file environment to the target systems and relational database environment specified in the blueprint. The initial plan requests $35 million for IRS modernization initiatives to be delivered by October 31, 1999. This plan proposes three categories of modernization investments that IRS calls (1) supporting business goals, (2) building management capability, and (3) planning a modern infrastructure, and is requesting for each category $17 million, $11.6 million, and $6.5 million, respectively. 
The supporting business goals initiatives include the early phases of selected systems development efforts that are intended to improve taxpayer service by the year 2001 tax filing season. The building management capability initiatives provide for defining and beginning the institutionalization of mature modernization management and systems engineering processes that are to permit effective blueprint implementation. The planning modern infrastructure initiatives refer to the first steps in establishing the technology foundation (e.g., networks, operating platforms, system security, etc.) upon which to build, interconnect, and operate modernized system applications. IRS’ stated intention is to submit to the Congress a series of expenditure plans in the future, the next being in October 1999. According to IRS, the October 1999 plan will define follow-on modernization initiatives, deliverables, and funding requirements into the year 2000. Leading public and private sector organizations use an incremental approach to investing in systems modernization efforts. In addition, the Clinger-Cohen Act and OMB policy endorse this approach to funding large system development investments. Using this approach, organizations take large, complex modernization efforts and break them into projects that are narrow in scope and brief in duration. This enables organizations to determine whether a project delivers promised benefits within cost and risk limitations and allows them to correct problems before significant dollars are expended, which in turn mitigates the risk of program failure. IRS’ initial expenditure plan is an appropriate first step to successful systems modernization and, with regard to the $35 million being requested for this increment, satisfies the conditions that the Congress placed on the use of ITIA funds. 
The key to IRS’ success is now to effectively implement the initiatives described in its initial expenditure plan and fulfill its commitment to incrementally request and expend future modernization funds. IRS’ initial expenditure plan lays the foundation for blueprint implementation on an incremental basis and begins the implementation process for selected modernization initiatives. For example, the expenditure plan only requests funds to establish and selectively implement an Enterprise Life Cycle (ELC). This ELC is to provide IRS with a disciplined and institutional approach for managing its IT investments throughout their life cycle--from conception, development, and deployment through maintenance and operation. This ELC is to be an adaptation of the PRIME contractor’s commercially available and proven systems life cycle management approach and associated automated tools, incorporating IRS- unique needs such as key investment decision points. Once in place at IRS, the service plans to begin implementing the ELC on its ongoing modernization initiatives. According to IRS, future expenditure plans will provide for ELC implementation on all future project initiatives. As another example, the initial expenditure plan requests funds to add missing system architecture precision and detail to selected system initiatives. In our February 1998 report, we concluded that while the architecture in IRS’ May 15, 1997, blueprint provided a solid foundation from which to build a complete architecture, it did not provide sufficient detail and precision for building or acquiring new systems. For example, the architecture did not allocate business requirements to specific configuration items (i.e., actual hardware and software components). As part of its initial expenditure plan, however, IRS plans to validate existing business requirements and develop preliminary hardware and software design specifications for IRS’ ongoing projects. 
Additionally, IRS intends for future expenditure plans to incrementally provide for architectural specificity for future system initiatives. The initial expenditure plan also requests funds for IRS to perform business system planning, which is to result in a revised modernization sequencing plan by October 31, 1999. This initiative is necessary because the May 15, 1997, blueprint sequencing plan does not recognize, for example, the need to introduce electronic tax administration technologies and capabilities early in the modernization to respond to the electronic filing requirements in the IRS Restructuring and Reform Act of 1998. This revised sequencing plan is to define the general timing, costs, and benefits of future modernization projects, and is to be incrementally updated in future expenditure plans with more specific cost and benefit information as projects are initiated and business case justifications are developed. If properly implemented, the ELC that IRS’ initial expenditure plan is to establish and selectively implement, should meet OMB information system investment guidelines. These guidelines call for agencies to adopt a data- driven, analytically based approach to selecting, controlling, and evaluating investments in information technology. The overriding objective is to ensure that investment decisions are made in a disciplined and rigorous manner on the basis of established criteria, such as return-on-investment and architectural compliance, and that system investments be broken into a series of increments. Consistent with these guidelines, IRS’ ELC is to include processes for identifying alternative solutions, calculating their projected returns-on-investment, and requiring that selected solutions be architecturally compliant. Through its ELC, IRS also plans to require that systems be acquired and implemented in phased segments that are narrow in scope and brief in duration. 
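The return-on-investment comparison that the investment guidelines call for can be illustrated with a small sketch. The alternatives and dollar amounts below are hypothetical, not IRS figures, and the simple ROI formula (net benefits divided by cost) is just one of several measures such guidelines permit.

```python
# Illustrative sketch of ranking alternative system investments by projected
# return-on-investment. Alternatives and amounts (in $ millions) are hypothetical.

def projected_roi(benefits, costs):
    """Simple ROI: projected net benefits as a fraction of cost."""
    return (benefits - costs) / costs

alternatives = {
    "upgrade legacy telephone routing": {"benefits": 18.0, "costs": 12.0},
    "build new routing system": {"benefits": 30.0, "costs": 25.0},
}

# Rank alternatives from highest to lowest projected ROI.
ranked = sorted(
    alternatives.items(),
    key=lambda kv: projected_roi(kv[1]["benefits"], kv[1]["costs"]),
    reverse=True,
)
print(ranked[0][0])  # → upgrade legacy telephone routing
```

Note that the higher-cost alternative has larger absolute net benefits but a lower ROI, which is why guidelines of this kind typically require the comparison to be made explicitly rather than assumed.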
According to IRS, system initiatives in future expenditure plans will be conducted in accordance with the ELC. IRS’ blueprint included a high-level system life cycle framework that could be used to define a disciplined set of processes for managing modernization investments. In lieu of using the system life cycle overview contained in the blueprint as the framework for developing life cycle management processes, IRS’ initial expenditure plan provides for establishing the aforementioned ELC. IRS decided to do this because it concluded that adapting the PRIME contractor’s commercially available methodology to meet its needs would be less costly and faster than completing its own unique system life cycle contained in its May 15, 1997, blueprint. IRS officials also stated that the PRIME contractor’s methodology offered more capability than the blueprint system life cycle overview, such as processes for managing business process reengineering. We reviewed the PRIME contractor’s commercially available methodology, and found that it both meets the requirements specified in the blueprint’s system life cycle overview and is consistent with the approaches that successful private and public sector organizations use to manage large IT investments. If implemented correctly, it should provide IRS with effective processes and tools for, among other things, planning, controlling, developing, and deploying information systems based on defined activities, events, milestones, reviews, and products. As described above, the initial expenditure plan provides for implementing the ELC on ongoing projects, and, according to IRS officials, future expenditure plans will provide for implementing it on follow-on projects. IRS’ Core Business Systems Executive Steering Committee, which replaced IRS’ Investment Review Board, approved the $35 million expenditure plan on April 20, 1999. Treasury’s IRS Management Board and OMB approved the plan on June 9, 1999, and June 10, 1999, respectively. 
On May 13, 1999, IRS provided us with a copy of its initial expenditure plan it submitted to the Congress, and the results of our review are contained in this report. As described in its expenditure plan, IRS plans to establish, through its ELC, the life cycle management processes and practices for acquiring modernized systems. If implemented effectively, these processes should meet federal acquisition rules and management practices. According to federal acquisition laws, rules, and regulations, agencies should, among other things, use disciplined, decision-making processes for planning, managing, and controlling the acquisition of IT. By doing so, agencies mitigate the risks of acquiring systems that are not delivered on time and on budget and do not work as intended. IRS’ expenditure plan requests funds to continue IRS’ efforts to strengthen its capability to effectively manage its contractors. For example, as part of its building management capability initiatives, IRS plans to implement mature software/systems acquisition management practices within the IRS organization responsible for managing the PRIME contractor and other modernization contractors. IRS intends to build the capability in accordance with the Software Engineering Institute’s (SEI) software/system acquisition capability maturity model requirements, and plans to have this capability in place by October 31, 1999. Among these maturity models’ requirements are disciplined and rigorous processes and approaches for measuring and tracking progress of contracts and acting to correct problems quickly, which will be a key to IRS’ ability to effectively manage the PRIME contractor and successfully modernize. In 1995, we first made recommendations to correct serious and pervasive modernization management and technical weaknesses. Since then, IRS has taken actions to address our recommendations. 
We have monitored these actions and have made follow-up recommendations that recognize IRS’ progress and define the residual steps that IRS needs to take to ensure that it is ready and capable to effectively modernize its systems. Currently, our open recommendations fall into three categories: (1) completing the modernization blueprint, (2) developing the management and engineering capability to effectively modernize systems, and (3) until the first two recommendations are implemented, limiting modernization spending to certain small, cost-effective, low-risk efforts. IRS’ initial expenditure plan is consistent with these recommendations. Specifically, of the $35.1 million being requested, IRS plans to use approximately $14.6 million for initiatives relating to completing the blueprint. For example, IRS plans to develop a 5-year “core business systems” modernization strategy that leverages new IT and recognizes IRS’ recent organizational restructuring and business process reengineering efforts prompted by the IRS Restructuring and Reform Act of 1998. The result is intended to be a revised, business risk-based sequencing plan that defines the general timing, cost, and benefits of new modernization projects over the next 3 to 5 years. In addition, IRS plans to spend about $11.6 million to develop the management and engineering capability to build and implement modernized systems. Specifically, IRS has designated about $2.2 million for PRIME and other contractor support to help IRS implement mature program management practices that are to (1) strengthen IRS’ ability to manage and control modernization initiatives and (2) ready IRS for an evaluation by SEI against relevant software/system acquisition capability maturity model requirements. IRS has earmarked $9.4 million for defining, documenting, and implementing its ELC, including training staff in its use, on ongoing modernization projects. 
Last, IRS plans to spend the remaining $8.9 million on selected relatively small, low-risk efforts. For example, IRS is seeking $5.1 million to, among other things, validate system requirements and update cost-effectiveness (i.e., business case) justifications for two ongoing projects intended to provide near-term customer service improvements via better routing of taxpayers' telephone inquiries. In addition, IRS seeks to spend $3.2 million on defining the network and platform technology infrastructure needed to support the above two customer service initiatives and to provide the foundation for secure future electronic commerce among employees, tax practitioners, and taxpayers. Our review disclosed several additional relevant items concerning IRS' management of the modernization. First, IRS has established a modernization "governance" structure that provides for extensive involvement by IRS' top executives, including the Commissioner. This structure is an effective way to mitigate the risks associated with the various modernization initiatives that IRS has underway and planned. Second, IRS has yet to adequately define respective systems modernization roles and responsibilities for itself, the PRIME contractor, and other support contractors, although it plans to do so by July 1999. Given that IRS' modernization approach provides for an unprecedented "partnership" with its contractors, ensuring that these roles and responsibilities are defined, understood, and enforced is of particular importance. Last, IRS can strengthen its incremental approach to investing in modernized systems by regularly disclosing to the Congress in its planned future expenditure plans IRS' progress against the modernization expectations that it defined in the preceding expenditure plan.
IRS has established a governance structure for managing its modernization initiatives and providing its top executives, including the Commissioner, direct and frequent visibility into and control over all initiatives/projects. This organizational structure is headed by the Core Business Systems Executive Steering Committee, which is chaired by IRS’ Commissioner and includes Treasury’s Assistant Secretary for Management and Chief Financial Officer, IRS’ Chief Information Officer, the Chief Operating Officer, key operating division heads, the PRIME contractor, and other key business officials. The Executive Steering Committee meets at least monthly to review modernization progress and direct future work. Under this process, projects are not initiated and do not progress to the next phase without the Steering Committee’s approval, thus mitigating the risk of modernization missteps and failures. Effective program/project and contract management requires a clear delineation of the respective roles and responsibilities of the agency management team and the contractors supporting the agency. In the case of IRS and its tax systems modernization program, this is particularly important because IRS’ stated intention in its solicitation and award documentation is to “partner” with the PRIME contractor and the supporting contractors. However, the nature of such a “partnership” is not defined in federal acquisition regulations, and thus is an ambiguous concept to implement and requires clear definition by IRS. In its efforts to date, however, IRS has yet to adequately define the respective roles of the service and its contractors. In January 1999, IRS tasked the PRIME contractor with (1) defining the roles and responsibilities of IRS, itself, and the other contractors and (2) explaining the structure and processes for managing the “partnership” between the service and itself. This task was to be completed by April 30, 1999. 
According to IRS officials, this task was not adequately completed for several reasons. First, the PRIME contractor's tasking was not adequately defined and thus resulted in a deliverable that was too narrow in scope. Second, IRS subsequently became concerned that the PRIME contractor was not sufficiently independent to be defining roles and responsibilities for itself and IRS. Last, funding for the PRIME contractor began to run low. Consequently, IRS recently tasked one of its other support contractors to develop, by July 1999, a "Concept of Operations" document that defines the roles, responsibilities, authorities, structure, and rules of engagement for the PRIME contractor, IRS, and other IRS support contractors. When employing an incremental approach to investing in systems modernization efforts, leading public and private sector organizations track and monitor whether each increment is producing promised benefits and meeting cost and schedule baselines, and report this information to executive decisionmakers. By doing so, these organizations can address variances from expectations incrementally, before significant dollars are expended. This is a proven way to effectively manage investment risks. To effectively employ incremental investment management on its modernization, IRS recognizes that it needs to incrementally measure and track progress and results. Accordingly, its governance structure and its ELC provide for doing so. In particular, its ELC is to incorporate SEI process maturity model requirements that, among other things, define key processes and approaches for measurement, analysis, and verification of activities. However, IRS has yet to define whether its planned future expenditure plans will provide for disclosure of this information. Such disclosure would provide the Congress with the kind of regular and valuable information that is needed to effectively oversee IRS' modernization efforts.
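The incremental tracking described above amounts to comparing each increment's actuals against its cost and schedule baselines and escalating any variance. A minimal sketch, with all figures hypothetical:

```python
# Minimal sketch of checking one modernization increment against its cost and
# schedule baselines, as incremental investment management calls for.
# All figures are hypothetical; negative variances indicate overruns.

def variances(increment):
    """Return (cost variance, schedule variance in days)."""
    cost_var = increment["baseline_cost"] - increment["actual_cost"]
    sched_var = increment["baseline_days"] - increment["actual_days"]
    return cost_var, sched_var

increment = {
    "baseline_cost": 5.0, "actual_cost": 5.6,   # $ millions
    "baseline_days": 180, "actual_days": 210,
}
cost_var, sched_var = variances(increment)
if cost_var < 0 or sched_var < 0:
    # In practice, this is the point at which the variance would be reported
    # to executive decisionmakers before the next increment is funded.
    print("increment exceeds baseline; escalate for review")
```

Reporting results of this comparison in each expenditure plan is essentially the disclosure to the Congress that the report goes on to recommend.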
IRS’ initial expenditure plan lays the foundation for successful systems modernization; satisfies, for this $35 million increment, the conditions that the Congress placed on the use of ITIA funds; and is consistent with our past recommendations. IRS’ stated intention is to fully implement this expenditure plan and to submit to the Congress for approval future expenditure plans that incrementally build on this modernization foundation. Such an incremental approach to investing in modernized systems is an effective way to minimize the inherent risk in large, complex, multiyear modernization programs. The next step for IRS is to effectively implement the plan and fulfill its commitment to incrementally request and expend future modernization funds. A key factor in implementing its plans will be IRS’ success in establishing mature and disciplined measurement and tracking capabilities so that it can effectively analyze progress against incremental goals, deliverables, and benefit expectations and reliably report this information to congressional decisionmakers. By including this information in future expenditure plans submitted to the Congress, IRS can strengthen modernization management and oversight. Accordingly, we recommend that the Commissioner of Internal Revenue ensure that future expenditure plans fully disclose IRS’ progress against incremental goals, deliverables, and benefit expectations and that the expenditure plan that IRS plans to submit in October 1999 fully explain the nature and functioning of IRS’ “partnership” with its contractors, including the respective roles and responsibilities of IRS and its contractors. In commenting on a draft of this report, IRS agreed with our findings and recommendations and stated that it would ensure that future expenditure plans would address progress against expectations established in previous requests. 
IRS also commented on the effectiveness of our evaluation efforts and stated that our timely observations and comments have allowed IRS to move quickly to implement our recommendations. We are sending copies of this report to Senator Ted Stevens, Senator Robert C. Byrd, Senator William V. Roth, Jr., Senator Daniel Patrick Moynihan, Senator Orrin G. Hatch, Senator Max Baucus, Senator Fred Thompson, Senator Joseph I. Lieberman, Representative C.W. Bill Young, Representative David R. Obey, Representative Bill Archer, Representative Charles B. Rangel, Representative Amo Houghton, Representative William J. Coyne, Representative Dan Burton, Representative Henry A. Waxman, Representative Stephen Horn, and Representative Jim Turner in their capacities as Chairmen or Ranking Minority Members of Senate and House Committees and Subcommittees. We are also sending copies to the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Robert E. Rubin, Secretary of the Treasury; the Honorable Lawrence H. Summers, Deputy Secretary of the Treasury; and the Honorable Jacob J. Lew, Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-6240 or by e-mail at hiter.aimd@gao.gov. Other key contributors to this report are listed in appendix III.
Pursuant to the Department of the Treasury’s fiscal year 1998 and 1999 appropriations acts, the Congress limited IRS’ ability to obligate ITIA funds until the service and Treasury submitted to the Congress for approval an expenditure plan that, per the acts, (1) implements the IRS Modernization Blueprint, (2) meets OMB’s investment guidelines for information systems, (3) is reviewed and approved by IRS’ Investment Review Board, OMB, and Treasury’s IRS Management Board and is reviewed by GAO, (4) meets the requirements of IRS’ system life cycle management program, and (5) is in compliance with acquisition rules, requirements, guidelines, and system acquisition management practices of the federal government. Accordingly, IRS provided us with the expenditure plan that it submitted to the Congress (i.e., the Senate on May 25, 1999, and the House on June 2, 1999). We reviewed the plan to determine whether (1) the plan satisfied the conditions specified in the acts, (2) the plan was consistent with our past modernization recommendations, and (3) we had any other observations on IRS’ systems modernization efforts. To determine whether IRS’ expenditure plan satisfied the conditions specified in appropriations acts, we first identified and reviewed the relevant IRS and federal documents referenced in the statutory conditions, such as the Modernization Blueprint, OMB information systems investment guidelines (e.g., Raines Rules), and the Federal Acquisition Regulation. We then documented IRS’ completed, ongoing, and planned modernization initiatives. To do this, we reviewed IRS’ ITIA Expenditure Plan; Initial Request for Funds; and other supporting documentation, such as the individual initiatives’ project plans and descriptions, briefing presentations (e.g., expenditure plan briefing to IRS Management Board), the PRIME contract and associated task orders, and Executive Steering Committee agendas and decision papers proposing courses of action.
We also interviewed IRS’ Chief Information Officer and other service officials working on the modernization program to gain an understanding of what IRS is doing to satisfy the legislative conditions. This included receiving weekly briefings and reports on how IRS and contractor teams were progressing on ongoing initiatives, such as efforts to improve customer service, build capability to effectively acquire systems, establish a new system development life cycle methodology (i.e., ELC), and define IRS and contractor roles and responsibilities. We also reviewed the business and systems development life cycle methodology that IRS is modifying to develop its ELC and were briefed by IRS and its contractors involved in this effort. We also attended IRS’ Executive Steering Committee meetings to observe how IRS top management was directing and controlling the modernization program and to understand IRS’ strategic modernization approach and progress. Last, we analyzed each of IRS’ modernization initiatives vis-à-vis the statutory conditions to identify any variances or inconsistencies. To determine whether IRS’ expenditure plan is consistent with our past recommendations on the tax systems modernization, we extracted from our inventory of open recommendations those pertaining to IRS’ modernization and grouped them into the following three categories: (1) completing the Modernization Blueprint, (2) developing the management and engineering capability to effectively modernize systems, and (3) limiting modernization spending to certain small, cost-effective, low-risk efforts until the first two recommendations are implemented. We then compared IRS’ efforts on its completed, ongoing, and planned initiatives with the intent of our open recommendations to identify any variances or inconsistencies. 
To develop other observations on IRS’ systems modernization efforts, we analyzed IRS’ overall modernization governance structure to determine whether it provided for top management involvement and analyzed contractor deliverables against task order requirements and the December 9, 1998, contract awarded to the PRIME contractor. We also attended Executive Steering Committee meetings to observe how the Commissioner and committee members functioned with respect to established structures and processes, and to understand IRS’ plans for submitting future expenditure plans. In addition, we met with and interviewed the Chief Information Officer and IRS officials responsible for the day-to-day management and control of the program and the PRIME contractor, for development of the expenditure plan, and for definition of IRS and contractor roles and responsibilities. We performed our work at IRS headquarters in Washington, D.C., and its facility in Lanham, Maryland, from January 1999 through May 1999 in accordance with generally accepted government auditing standards. In addition to the above contact, Keith Rhodes, Agnes Spruill, Karen Richey, Lorne Dold, Sherrie Russ, Charles Roney, and Frank Maguire made key contributions to this report.
Pursuant to a legislative requirement, GAO reviewed the Internal Revenue Service's (IRS) initial Information Technology Investments Account (ITIA) expenditure plan, focusing on: (1) whether the plan satisfies the conditions specified in IRS' fiscal year 1998 and 1999 appropriations acts; (2) whether the plan is consistent with GAO's past recommendations on IRS' systems modernization; and (3) GAO's observations on the modernization efforts. GAO noted that: (1) IRS' initial expenditure plan is the first in a series of incremental expenditure plans that IRS plans to prepare over the life of the modernization; (2) the initial plan specifies IRS' modernization initiatives through October 31, 1999, and it seeks approval to obligate about $35 million to complete these initiatives; (3) such an incremental approach to investing in systems modernization efforts is a recognized best practice that leading public and private sector organizations use to mitigate the risk of program failure on large, complex, multiyear modernization programs; (4) IRS' initial expenditure plan is an appropriate first step toward successful systems modernization and, with regard to the $35 million being requested for this increment, satisfies the conditions that Congress placed on the use of ITIA funds; (5) the plan is consistent with GAO's past recommendations; (6) the initial expenditure plan provides for additional blueprint precision and specificity; (7) it provides for definition of system infrastructure specifications and a revised plan for sequencing the introduction of the new technology needed to achieve the target systems architecture over the next 3 to 5 years; (8) these initiatives are consistent with GAO's past recommendations for completing the blueprint and collectively they represent the first steps needed to satisfy the legislative condition to implement the blueprint; (9) the initial expenditure plan provides for definition and targeted implementation of an Enterprise Life Cycle, which is
consistent with GAO's past recommendations for instituting project management rigor, software process maturity, and investment management discipline; (10) if implemented properly, this effort should satisfy the legislative condition for an IRS system life cycle and investment management program that meets the Office of Management and Budget guidelines; (11) building on its initial expenditure plan, IRS plans to define in subsequent expenditure plans the follow-on efforts and funding requirements needed to incrementally: (a) add needed architectural precision and project-specific management discipline; and (b) implement its Enterprise Life Cycle, and its target systems architecture; and (12) if IRS effectively implements the initiatives described in its initial expenditure plan and fulfills its commitment to incrementally request and expend future modernization funds, IRS would be acting in a manner that is consistent with the legislative conditions and GAO's past recommendations.
A complete and accurate address list is the cornerstone of a successful census, because it both identifies all households that are to receive a census questionnaire and serves as the control mechanism for following up with households that fail to respond. If the address list is inaccurate, people can be missed, counted more than once, or included in the wrong location. MAF is intended to be a complete and current list of all addresses and locations where people live or could live. The TIGER database is a mapping system that identifies all visible geographic features, such as type and location of streets, housing units, rivers, and railroads. To link these two separate databases, the Bureau assigns every housing unit in the MAF to a specific location in the TIGER, a process called “geocoding.” As shown in figure 1, for the 2000 Census the Bureau’s approach to building complete and accurate address lists and maps consisted of a number of labor- and data-intensive operations that sometimes overlapped and were conducted over several years. This effort included partnerships with the U.S. Postal Service and other federal agencies; state, local, and tribal governments; local planning organizations; the private sector; and nongovernmental entities. The Bureau employed thousands of temporary census workers to walk every street in the country to locate and verify places where people could live. Determining this was no simple task as people can reside in cars, sheds, illegally converted basements and garages, and similar nontraditional and often hidden living arrangements. For the 2000 Census, the Bureau found that the MAF/TIGER databases were less than complete and accurate. Although the number of errors was small in proportion to the total number of housing units at the national level, the errors could be problematic at lower levels of geography for certain purposes for which census data are used, such as allocating federal assistance to state and local governments. 
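The geocoding step described above is, conceptually, a keyed match between the two databases: each MAF address is assigned to a TIGER street segment, which carries a census-block identifier. The following sketch illustrates the idea only; the record layouts, street data, and block identifiers are invented and are not the Bureau's actual schemas:

```python
# Hypothetical geocoding sketch: assign each MAF address to a census block
# by matching its street name and house number against a TIGER-like index
# of street segments. All field names and data are invented for illustration.

# TIGER-like street segments: (street, low house no., high house no.) -> block ID
tiger_segments = {
    ("MAIN ST", 100, 198): "block-0417",
    ("MAIN ST", 200, 298): "block-0418",
    ("OAK AVE", 1, 99): "block-0901",
}

def geocode(house_number, street):
    """Return the census block for an address, or None if it cannot be geocoded."""
    for (seg_street, low, high), block in tiger_segments.items():
        if street == seg_street and low <= house_number <= high:
            return block
    return None

maf = [(123, "MAIN ST"), (250, "MAIN ST"), (7, "OAK AVE"), (500, "ELM ST")]
geocoded = {addr: geocode(*addr) for addr in maf}
# (500, "ELM ST") matches no segment and stays ungeocoded (None), analogous
# to the geocoding gaps and errors discussed in this report.
```

An address that cannot be matched to any segment, or is matched to the wrong one, produces exactly the kind of geocoding error the Bureau's evaluations identified.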
According to Bureau evaluations conducted after the 2000 Census, the final census count contained approximately 116 million housing units. However, the address file used to conduct the 2000 Census also contained a number of errors. Bureau evaluations estimate that there were 0.7 million duplicate addresses, 1.6 million vacant housing units misclassified as occupied, 1.4 million housing units not included, 1.3 million housing units improperly deleted, and 5.6 million housing units incorrectly located on census maps. In light of these and other problems, the Bureau made enhancing the MAF/TIGER one of three critical components to support the 2010 Census. The other two components are replacing the long form questionnaire with the American Community Survey and conducting a short-form-only decennial census that is supported by early research and testing. For the 2010 Census, the Bureau is making extensive use of contractors to provide a number of mission-critical functions and technologies. One of the technologies to be provided by a contractor is the MCD. Under a contract awarded on March 30, 2006, a new MCD will be developed for the 2008 Dress Rehearsal. To date, the Bureau has tested two models of the MCD—one during the 2004 Census Test and another during the 2006 Census Test. In January 2005, we reported that the MCD used during the 2004 Census Test to collect nonresponse follow-up data experienced problems transmitting, and the mapping feature was slow. Consistent with our recommendations, the Bureau took steps to improve the dependability of transmissions and correct the speed of the mapping feature. Due to the critical role of contractors to help carry out the 2010 Census, we conducted a review of major acquisitions for the 2010 Census. 
In that report issued in May 2006, we highlighted the tight time frames the FDCA contractor has for developing and implementing systems to support the upcoming 2008 Dress Rehearsal and recommended that the Bureau ensure that all systems are fully functional and ready to be assessed in time for the Dress Rehearsal. In addition, on March 1, 2006, we testified on the status of the FDCA project. In that testimony, we discussed the need for the Bureau to validate and approve a baseline set of operational requirements for the FDCA contract (otherwise the project would be at risk of requirements changes that could affect its ambitious development and implementation schedule); implement an effective risk management process that identifies, prioritizes, and tracks project risks; and select detailed performance measures for tracking the contractor’s work. In response to our work, the Bureau stated that it plans to complete these activities as soon as possible. While the Bureau’s MAF/TIGER modernization efforts have progressed in a number of areas, uncertainties and risks remain in dealing with address-related problems that affected the 2000 Census. Currently it is not known whether ongoing research to resolve those problems will be completed in sufficient time to allow the Bureau to develop new methodologies and procedures for improving the MAF by June 2007—the Bureau’s announced deadline for baselining all program requirements. One significant cause for this uncertainty is that some deadlines for completing research do not have firm dates, while other deadlines that have been set continue to slip. In addition, one major research effort using software to identify duplicate addresses (an estimated 1.4 million duplicate addresses were removed during the 2000 Census) did not work any better at identifying true duplicates than what the Bureau already had in place and will not be used in 2010.
As a result, duplicate addresses may still be a problem for the 2010 MAF, and to the extent they are not detected, can result in reduced accuracy and increased cost. During the 2000 Census, the Bureau encountered a number of problems with the MAF including (1) missed addresses, where the Bureau failed to include addresses in the MAF; (2) improperly deleted addresses, where the Bureau removed otherwise valid addresses from the MAF; (3) duplicate addresses, with two or more addresses for the same housing unit; and (4) geocoding errors, where addresses were improperly located on a census map. All of the errors affect the quality of census data. When detected, the errors can increase the cost of the census to the extent they result in rework. Moreover, these errors are associated with a variety of living arrangements and addresses, including small, multi-unit dwellings; dormitories, prisons, and other group living facilities, known collectively as “group quarters,” as well as hidden housing units, such as converted basement apartments. As shown in table 1, to address those problems the Bureau has been conducting research and making some operational changes. Although research to find hidden housing units holds promise for a more accurate census, whether the results will be delivered in time to be useful for the 2010 Census is uncertain. While Bureau officials do not have a firm date for completing this research, they do estimate it will be completed by the end of 2006. According to Bureau evaluations, approximately 1.4 million housing units were missed in the 2000 Census. Missed addresses often result when temporary census workers do not recognize that particular structures, such as tool sheds, are being used as residences. Addresses can also be missed when census workers fail to detect hidden housing units, such as basement apartments, within what appear to be single housing units. 
This is especially true for urban areas, where row houses have been converted into several different apartments. If an address is not in the MAF, its residents are less likely to be included in the census. In May 2003, Bureau staff met with the New York City Planning Department to discuss and observe the address problems associated with small multi-unit structures in Queens, New York. After the visit, the Bureau concluded that delivering questionnaires to small multi-unit structures was a problem that needed to be addressed. In response, the Bureau is using the MAF to identify urban areas, including Baltimore, an area west of Chicago, and counties in New Jersey, where small multi-unit dwellings exist, fitting the description of those that were missed. According to Bureau officials, to accurately identify and count these missed housing units, the Bureau would use update/enumerate procedures—where census workers update the address list and conduct interviews to collect census data—instead of using mailout/mailback procedures, where census forms are mailed to the housing units. Update/enumerate procedures are more labor-intensive and costly than mailout/mailback procedures. In reviewing the research plan on small multi-unit structures, we found no milestones for completing this research. Bureau officials could not provide a firm completion date, but estimated that the research would be completed by the end of 2006. Without clear milestones for completing this research and action plans based on research results, it is uncertain whether the Bureau will have sufficient time to develop a methodology for identifying all the problematic locations across the country where update/enumerate methodology would be necessary and to inform decision makers on the cost of converting these areas from mailout/mailback procedures to update/enumerate procedures.
The Bureau has tested new procedures to validate whether an address initially marked “delete” should be removed from the address file. However, the results from that testing, due in January 2006, were delayed until April 2006, and were not available at the time of this review. For the 2000 Census, the Bureau found that it had mistakenly deleted 1.3 million existing housing units from the address file used to conduct the census. In some instances, this occurred when the Bureau deleted an address that the U.S. Postal Service had coded as a business address, although people were living at that address. According to a Bureau evaluation, when this happens, the Bureau relies on census workers to find and add back those units. Bureau officials stated that identifying residential housing units is difficult for some structures, such as apartments in businesses. The Bureau would also delete an address if no census form was returned from the unit and if two other census operations determined that the address should be deleted. A Bureau evaluation found that this process identified and removed 8.3 million nonexistent addresses; however, about 653,000 of those addresses were valid and should not have been deleted. The evaluation does not provide an explanation for why these valid addresses were deleted or what could be done in the future to prevent valid addresses from being removed. Concerned that valid addresses were deleted, the Bureau, for the 2006 Census Test of address canvassing, tested a new follow-up quality check procedure designed to verify the status of all addresses that were identified as “delete” during the address canvassing operation. The 2000 Census did have a follow-up operation, but not one specifically for all deleted addresses during the canvassing operation. By building this quality control operation into the address canvassing operation, the Bureau hopes to prevent valid addresses from getting inadvertently deleted.
An assessment report of address procedures that were tested in 2005 as part of the 2006 address canvassing operation was to be completed by January 2006. However, the deadline for this assessment slipped until the end of April 2006, and was not available at the time of this review. The Bureau has taken actions to prevent duplicate addresses. However, one research effort to identify duplicates using software was found to be ineffective because approximately 10 percent of the time the software would incorrectly identify a valid address as a duplicate address, and as a result, this software will not be used in 2010. According to Commerce officials, it is their philosophy to favor the inclusion of addresses in the census process over the exclusion of addresses. Nevertheless, preventing duplicate addresses in the MAF saves the Bureau from having to make unnecessary and expensive follow-up visits to households already surveyed. Furthermore, preventing duplicate responses also enhances the accuracy of the data. Bureau studies initially estimated that during the 2000 Census, about 2.4 million duplicate addresses existed in the MAF. The problem was so significant that in the summer of 2000, the Bureau initiated a special follow-up operation to identify and remove duplicate addresses. Research from this special operation confirmed that 1.4 million addresses were duplicates, and the Bureau removed those addresses from the census. However, the operation was not able to determine with certainty whether the remaining 1 million addresses were duplicates. As a result, according to Commerce officials, the 1 million addresses were not removed from the census because those addresses were believed to be a combination of apartment mix-ups and misdelivery of questionnaires, and not duplicates. 
Had the Bureau identified these 1.4 million housing units before nonresponse follow-up occurred, it could have saved $39.7 million (based on our estimate that a 1 percentage point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the price tag of nonresponse follow-up). Even after the special operation to remove duplicates was completed, the Bureau still estimated that approximately 0.7 million duplicates remained in the MAF in error. According to Bureau officials, duplicate addresses resulted from the multiple operations used to build the MAF. While the redundancy of having multiple address-building operations helps produce a more complete and accurate address list because more opportunities exist for an address to be added to the MAF, any variations in city-style addresses, which are addresses with house numbers and street names, could produce a duplicate entry. For example, the Postal Service, which is the source of many addresses in the MAF, might refer to an address in its database as 123 Waterway Point. A census worker in another address operation might record that address as 123 South Waterway Point. If not detected, two addresses would remain in the MAF for this single residence. To help resolve this problem, in 2004, the Bureau tested whether it could detect duplicate addresses in the MAF by using computerized matching software to link variations in street addresses. In test results, the Bureau found that 90 percent of the potential duplicates identified by the process of “probabilistic matching” were actual duplicates, while 10 percent were valid addresses. Because the number of false duplicates was so high, the Bureau decided against incorporating this approach into its plans for 2010 and planned no further testing of the software.
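The matching idea described above can be illustrated with simple string-similarity scoring. This sketch uses Python's standard difflib rather than the Bureau's actual software; the normalization rules and the 0.8 threshold are invented choices for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical sketch of duplicate-address detection via fuzzy matching.
# The normalization rules and the 0.8 threshold are illustrative only,
# not the Bureau's actual algorithm.

DIRECTIONALS = {"NORTH", "SOUTH", "EAST", "WEST", "N", "S", "E", "W"}

def normalize(addr):
    """Upper-case the address and drop directional words that often vary."""
    words = addr.upper().split()
    return " ".join(w for w in words if w not in DIRECTIONALS)

def likely_duplicate(a, b, threshold=0.8):
    """Flag two address strings as probable duplicates of one housing unit."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(likely_duplicate("123 Waterway Point", "123 South Waterway Point"))  # True
print(likely_duplicate("123 Waterway Point", "456 Harbor Rd"))             # False
```

The trade-off the Bureau observed follows directly from this design: a threshold loose enough to catch address variants will also flag some genuinely distinct addresses, which is how valid addresses ended up among the identified "duplicates."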
As a result of not being able to use this software, duplicate addresses may still be a problem for the 2010 MAF, and duplicate addresses that are not detected can reduce accuracy and increase costs. At the same time, the Bureau has made some progress toward preventing duplicates. The Bureau is testing new methods to resolve difficulties in distinguishing group quarters (which include dormitories, prisons, group homes, and nursing homes) from housing units, such as single-family homes and apartments. In the 2000 Census, the Bureau used different operations and compiled separate address lists for group quarters and housing units. Group quarters are sometimes difficult for census workers to identify because they often look the same as conventional housing units (see fig. 2). As a result, these homes were sometimes counted twice during the 2000 Census—once as a group quarter and once as a housing unit. One approach to help prevent duplicates that the Bureau tested during the 2004 and 2006 Census Tests is integrating the two address lists and then verifying potential group quarters on that list. Evaluation results from the 2004 testing showed progress was being made for integrating the address lists. The operational assessment report on the group quarters validation/advance visit operation that occurred in 2005 as part of the address canvassing operation for the 2006 Census Test was expected by May 30, 2006, and was not available at the time of this review. The Bureau is using a contractor to update its TIGER maps and intends to use GPS technology to locate every housing unit across the country precisely. Collectively, these two efforts are designed to avoid the geocoding errors of the 2000 Census, when residences were sometimes counted in the wrong census block. However, progress can be hindered if technical problems associated with the GPS continue.
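Conceptually, once TIGER boundaries are aligned to their true locations and a GPS coordinate is captured for each structure, assigning a housing unit to the correct census block reduces to a point-in-polygon test. A minimal ray-casting sketch follows; the block polygon and coordinates are invented, and real TIGER block boundaries are far more complex:

```python
# Hypothetical point-in-polygon (ray casting) sketch for assigning a GPS
# coordinate to a census block. The block boundary below is an invented
# rectangle in lon/lat-like coordinates, not real TIGER data.

def point_in_polygon(x, y, polygon):
    """Return True if point (x, y) falls inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    j = n - 1
    for i in range(n):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle on each polygon edge crossed by a horizontal ray from the point.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Invented census-block boundary.
block_boundary = [(-77.01, 38.90), (-77.00, 38.90), (-77.00, 38.91), (-77.01, 38.91)]
print(point_in_polygon(-77.005, 38.905, block_boundary))  # inside -> True
print(point_in_polygon(-77.02, 38.905, block_boundary))   # outside -> False
```

This also shows why accurate boundaries matter: if a TIGER block polygon is misaligned, even a correct GPS coordinate will be assigned to the wrong block, producing exactly the geocoding errors described above.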
Bureau evaluations estimated that in 2000, of the nation’s approximately 116 million housing units, 5.6 million (about 4.8 percent) housing units in the country were counted in the wrong locations. Resolving geocoding errors will be important, as census data are used to redraw congressional lines and allocate federal assistance and state funding. For example, in June 2005, we reported that Soledad, California, lost more than $140,000 in state revenue when a geocoding error caused over 11,000 Soledad residents to be miscounted in two nearby cities. Geocoding errors are partly attributable to inaccuracies in the TIGER maps that census workers use to verify the locations of residences. As shown in figure 3, roads and other features on TIGER maps did not always reflect their true geographic locations. To help improve TIGER maps, in June 2002, the Bureau awarded an 8-year, $200 million contract to correct in TIGER the location of every street, boundary, and other map feature so that they are aligned with their true geographic locations, among other contractual tasks. This work is to be completed on a county-by-county schedule. According to Bureau officials, as of March 2006, nearly 1,700 county maps have been completed, with about another 1,600 to be completed by April 2008. In conjunction with updating TIGER, the Bureau, as part of its 2010 address canvassing operations, plans to have census workers capture the exact location of every structure on the address list by using GPS receivers. This approach has the potential to resolve the cause of many geocoding errors; however, as we discuss later in this report, when this operation was tested as part of the 2006 Census Test, the GPS receiver did not always operate properly, leaving some housing units without a GPS coordinate to determine their locations. As part of the address canvassing operational assessment report, the Bureau will provide the number and type of map spots collected (GPS, manual, or attached multi-unit).
This report, initially due in January 2006, has been delayed and was not available at the time of our review. Testing GPS coordinates was a part of the 2004 Census Test, and evaluations showed that workers only used the GPS receiver to capture the location of housing units 55 percent of the time. The evaluation, however, did not address why census workers did not use the GPS receiver. As the Bureau has planned for the 2010 Census, issues surrounding the schedule of address activities have emerged and have not been fully addressed. One key challenge in conducting the 2010 Census is the Bureau’s ability to keep the myriad of census activities on track amid tight and overlapping schedules for updating addresses and maps. For example, in planning the various 2010 address list activities, Bureau officials estimate that TIGER maps for 600 to 700 counties (out of 3,232 counties in the United States) will not be updated in time to be part of the local update of census addresses (LUCA)—a program through which the Bureau gives local, state, and tribal government officials the opportunity to review and suggest corrections to the address lists and maps for their jurisdictions. LUCA is to begin in August 2007, yet according to the current schedule, the Bureau will still have 368 counties left to update in 2008. Because all updates will not have been completed, some counties will not have the most current maps to review, but instead will be given the most recent maps the Bureau has available. According to Bureau officials, some maps have been updated for the American Community Survey, but others have not been updated since the 2000 Census. LUCA participation is important because local knowledge contributes to a more complete and accurate address file. Not having the most current TIGER maps could affect the quality of a local government’s review.
The Bureau is aware of the overlapping schedules, but officials stated that they need to start LUCA in 2007 in order to complete the operation in time for address canvassing—an operation in which census workers walk every street in the country to verify addresses and update maps. Further, Commerce officials stated that the primary focus of the LUCA program is to review and update the address list, not to review and update maps; therefore, not having the improved maps should not affect the ability of LUCA participants to add or make corrections to the census address list. We, however, believe that improved maps would help LUCA participants provide more accurate address data. The census schedule will be a challenge for address canvassing in 2010. The Bureau has allotted 6 weeks for census workers to verify the nation's inventory of approximately 116 million housing units. This translates into a completion rate of over 2.75 million housing units every day. The challenge in maintaining this schedule can be seen in the fact that for the 2000 Census, the Bureau took 18 weeks just to canvass "city-style" address areas, which are localities where the U.S. Postal Service uses house-number and street-name addresses for most mail delivery. However, a Bureau official could not explain why the schedule had been shortened by 12 weeks compared to the 2000 Census. Although Bureau officials agreed that more time will be needed to conduct the address canvassing operation, especially in the northern sections of the country where bad weather can hinder those operations, they have not reevaluated the schedule. A Bureau official stated that the Bureau would need to assess staffing levels to ensure it will be able to meet workload demands. 
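The schedule compression described above is easy to quantify. The following back-of-the-envelope sketch uses only figures cited in this report (approximately 116 million housing units and the 6-week window allotted for 2010) to work out the implied daily canvassing rate:

```python
# Back-of-the-envelope check of the 2010 address canvassing workload,
# using only figures cited in this report.

HOUSING_UNITS = 116_000_000   # approximate national housing inventory
DAYS_2010 = 6 * 7             # the 6-week window allotted for 2010
DAYS_2000 = 18 * 7            # 18 weeks taken in 2000, for city-style areas alone

rate_2010 = HOUSING_UNITS / DAYS_2010
print(f"Implied 2010 rate: {rate_2010:,.0f} housing units per day")
# Roughly 2.76 million housing units per day, consistent with the
# "over 2.75 million" completion rate described above.
```

Because the 18-week figure for 2000 covered only city-style address areas while the 6-week 2010 window covers the entire country, the true per-day workload increase is even larger than the simple ratio of the two schedules suggests.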
Meeting the demands of the shortened time frame for completing address canvassing is a concern because the workload for address canvassing has significantly expanded, from including only urban areas in 2000 to including the entire country for 2010. Furthermore, in the summer of 2005, when address canvassing was conducted for the 2006 test, the Bureau was unable to finish in 6 weeks because of problems with the new MCD and GPS technology. In its comments on a draft of this report, Commerce officials said the Bureau would work to expand the address canvassing schedule to ensure that it can be done without having a negative impact on other critical decennial activities. The Bureau's ability to collect and transmit address and mapping data using the MCD is not known. The performance of these devices is crucial to the accurate, timely, and cost-effective completion of address listing, nonresponse follow-up, and coverage measurement activities. During 2006 testing, the MCD used to collect address and map data was slow and locked up frequently. As a result, the Bureau was unable to complete address canvassing, even with a 10-day extension. Also, some census workers were not always able to get GPS signals for collecting coordinates for housing units. Bureau officials have acknowledged that the MCD's performance is an issue but believe that a new version of the MCD, to be developed under the Field Data Collection Automation (FDCA) contract awarded on March 30, 2006, will be reliable and functional. However, because the 2008 Dress Rehearsal will be the first time this new MCD will be tested under census-like conditions, it is uncertain how effective that MCD will be, and if problems do emerge, little time will be left for the contractor to develop, test, and incorporate any refinements. 
Moreover, if after the Dress Rehearsal the MCD is found to be unreliable, the Bureau could be faced with the remote but daunting possibility of having to revert to the costly paper-based census used in 2000. During the address canvassing operation, the technical problems with the MCDs were so significant that the operation did not finish as scheduled. The 6-week operation was expected to run through September 2, 2005, but had to be extended by 10 days (through Sept. 12, 2005). Even so, the Bureau was still unable to finish the operation, leaving six assignment areas in Travis County, Texas, and four assignment areas at the Cheyenne River Reservation, South Dakota, not canvassed. To conduct address canvassing, each MCD was loaded with address information and maps and was also equipped with GPS. Census workers were trained to locate every structure in their assignment area, as well as to compare the locations of housing units to address and map data on the MCD and update the data accordingly. They also were instructed to capture each housing unit's GPS coordinates. However, workers we observed and interviewed had problems updating address and map data, as well as collecting GPS coordinates, largely because the device's software and GPS receiver were unstable. For example, we observed census workers who were unable to complete their planned assignments for the day because it took too long to complete address and map updates, as the device was slow to pull up and exit address registers, accept the data entered by the worker, and link a map spot to addresses for multi-unit structures. Furthermore, the devices would often lock up, requiring workers to reboot them. Census workers also experienced problems with the GPS receiver acquired by the Bureau. Some workers had problems getting a signal, but even when a signal was available, the GPS receiver was slow to locate assignment areas and provide coordinates for map spots. 
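This report does not describe the MCD's internal logic, but the kind of fix-quality screening a GPS-enabled field device typically performs, and its fallback to a manually placed map spot when no usable fix exists, can be sketched as follows. The 4-satellite minimum (the usual requirement for a 3-D position fix) and the HDOP threshold are illustrative assumptions, not the Bureau's actual rules:

```python
# Hypothetical sketch of GPS fix-quality screening on a field device.
# Thresholds are illustrative assumptions, not the Bureau's actual logic.

def fix_usable(num_satellites: int, hdop: float) -> bool:
    """A 3-D position fix generally requires at least 4 satellites;
    a high horizontal dilution of precision (HDOP) indicates poor
    satellite geometry and thus an unreliable coordinate."""
    return num_satellites >= 4 and hdop <= 5.0

def record_map_spot(num_satellites: int, hdop: float) -> str:
    """Record a GPS coordinate when the fix is usable; otherwise fall
    back to a manual map spot, mirroring the GPS/manual map-spot
    categories the Bureau plans to report on."""
    return "GPS" if fix_usable(num_satellites, hdop) else "manual"

print(record_map_spot(6, 1.2))   # good signal and geometry -> GPS spot
print(record_map_spot(3, 8.5))   # too few satellites -> manual spot
```

Under this kind of scheme, the signal-acquisition problems workers reported would show up directly as a higher share of manual map spots, which is one reason the map-spot counts promised in the address canvassing operational assessment report matter.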
Bureau officials were not certain why the Bureau’s equipment was unreliable, but provided several possible explanations: (1) the software, hardware, or both did not function properly, (2) GPS units were not correctly inserted into the device, and (3) too few satellites were available for capturing coordinates. Given the importance of GPS to collecting precise coordinates for housing units, it will be important for the Bureau to understand and correct the source of the problems that affected the reliability of the GPS. Going into address canvassing, the Bureau was aware that the MCDs had software problems and delayed the address canvassing operation by a month to try to resolve them. The Bureau was unable to resolve the problems, but wanted to test the feasibility of the MCD and decided to go forward with the operation with the goal of learning as much as possible. For the 2008 Dress Rehearsal, the Bureau plans to test a new MCD that is being developed under the FDCA contract. However, less than a year remains for the contractor to develop the MCD that will be used in April 2007 for the canvassing operation of the 2008 Dress Rehearsal. In a May 2006 report, we reported on the tight time frames to develop the MCD and recommended that systems being developed or provided by contractors for the 2010 Census—including the MCD—be fully functional and ready to be assessed as part of the 2008 Dress Rehearsal. In commenting on a draft of this report, Commerce noted that the Bureau designed the FDCA acquisitions strategy to reduce risks related to cost, schedule and performance, stating that the Bureau required offerors to develop and demonstrate a working prototype for address canvassing. Nevertheless, because the previous two MCD models had performance problems, the introduction of a new MCD adds another level of risk to the success of the 2010 Census. The Bureau does not have a plan to update the MAF/TIGER for areas affected by hurricanes Katrina and Rita. 
On August 29, 2005, Hurricane Katrina devastated the coastal communities of Louisiana, Mississippi, and Alabama. A few weeks later, Hurricane Rita plowed through the border areas of Texas and Louisiana. Damage was widespread. In the wake of Katrina, for example, the Red Cross estimated that nearly 525,000 people were displaced. Their homes were declared uninhabitable, and streets, bridges, and other landmarks were destroyed. Approximately 90,000 square miles were affected overall and, as shown in figure 4, entire communities were obliterated. The task of updating MAF/TIGER for 2010 to reflect these changes will be a formidable one, as much has changed since the 2000 Census. For the 2010 Census, locating housing units and the people who reside in them will be critical to counting the population of places hit by the hurricanes, especially since it is estimated that hundreds of thousands of people have—either temporarily or permanently—migrated to other areas of the country. To ensure an accurate count, it will be important for the Bureau to have accurate maps and an updated address file for the 2010 Census in those areas affected by hurricanes Katrina and Rita. Bureau officials do not believe a specific plan is needed to update the address and map files for those areas affected by hurricanes Katrina and Rita. Although Census Day is still several years away, preliminary activities, such as operations for building the MAF, have to occur sooner. Consequently, a key question is whether the Bureau’s existing operations are adequate for capturing the dramatic changes to roads and other geographic features caused by the hurricanes, or whether the Bureau needs to develop enhanced or additional procedures before August 2007 when LUCA is scheduled to begin. For example, new housing and street construction in the areas affected by the hurricanes could require more frequent updates of the Bureau’s address file and maps. 
Also, local governments’ participation in LUCA might be affected because of the loss of key personnel, information systems, or records needed to verify the Bureau’s address lists and maps. Further, the Bureau has not identified local partners with whom it can monitor this situation. The Bureau’s short-term strategy for dealing with the effect of the hurricanes on MAF/TIGER is to see who returns and whether communities decide to rebuild. Bureau officials stated that by 2009, as census workers prepare to go out in the field for address canvassing for the 2010 Census, residents will have decided whether to return to the region. The Bureau believes that by then it will be in a better position to add or delete addresses for areas in the Gulf region affected by the hurricanes. However, Bureau officials could not provide us with information on the basis of their conclusion that by 2009, most affected persons will have made final decisions about whether they are returning to the region. This approach may not be adequate, given the magnitude of the area, population, and infrastructure affected. Therefore, it would be prudent for the Bureau to begin assessing whether new procedures will be necessary, determining whether additional resources may be needed, and identifying whether local partners will be available to assist the Bureau in its effort to update address and map data, as well as in other census-taking activities. In its comments on a draft of this report, Commerce officials stated that there was a team working on how to reflect the impact of the hurricanes in the MAF and that they were aware of the sensitive nature of working with local officials on using data that had not been updated since the catastrophe. 
Securing a complete count, a difficult task under normal circumstances, could face additional hurdles along the Gulf Coast, in large part because the baseline the Bureau will be working with—streets, housing, and the population itself—will be in flux for some time to come. According to Bureau officials, different parts of the agency work on hurricane-related issues at different times, but no formal body has been created to deal with the hurricanes' impact on the 2010 Census. The success of the 2010 Census relies on an accurate and complete MAF, and the Bureau has taken steps to improve the MAF. For example, many of the problems identified in the 2000 Census are being addressed through sequential address list building, the collection of GPS coordinates, and the verification of deleted addresses. However, several key challenges and sources of uncertainty remain. The management of some of the Bureau's research efforts to resolve problems from the 2000 Census is negatively affected by a lack of specific end dates for that research or because those end dates have slipped. Also, one research effort to prevent duplicate addresses was found to be ineffective and was abandoned altogether. Time to complete research and take the appropriate resulting action is of the essence, as the Bureau has announced that all design features should be baselined by June 2007. If long-standing problems are not resolved, the address file could experience the same problems with missed and incorrectly included housing units as it did in the 2000 Census. The Bureau must also manage the planning and development of the census amid tight and overlapping schedules. 
In our view, changing milestones to complete MAF research, the Bureau's tight development schedule for the MCD, and the interdependence of the various address activities could affect the Bureau's ability to develop a fully functional set of address-building operations that can be tested along with other census operations during the 2008 Dress Rehearsal—the Bureau's last opportunity to assess MAF/TIGER under near census-like conditions. If the MCDs do not function as planned in the Dress Rehearsal, little time will remain for the Bureau to develop, test, and incorporate any refinements. This uncertainty places the accuracy and completeness of data collected using the MCD at risk. Because the MCD has not yet been developed, it will be important for the Bureau to closely monitor the contractor's progress in developing the MCD. In May 2006, we reported on the tight time frames to develop the MCD and recommended that systems being developed or provided by contractors for the 2010 Census—including the MCD—be fully functional and ready to be assessed as part of the 2008 Dress Rehearsal. However, if after the Dress Rehearsal the MCD is found to be unreliable, the Bureau could be faced with the remote but daunting possibility of having to revert to the costly paper-based census used in 2000. Finally, the destruction and chaos caused by hurricanes Katrina and Rita underscore the nation's vulnerability to all types of hazards and highlight how important it is for government agencies to consider emergency preparedness and continuity of operations as part of their planning. However, the immediate concern for the 2010 Census is that the Bureau has no plan for how it will successfully update MAF/TIGER in the affected hurricane zone. If problems updating the address file and maps do occur, the census count in those areas affected by hurricanes Katrina and Rita could be inaccurate or incomplete. 
In conversations with Bureau officials, it became apparent to us that they are keenly aware of the existing time constraints and challenges detailed above. However, the Bureau had not developed risk mitigation plans to address these challenges. Our recommendations, therefore, are intended to make transparent for Bureau managers and congressional decision makers how those challenges can and should be addressed. At a minimum, the Bureau should have a risk-based mitigation plan in place that includes specific dates for completing research on the address file and an approach for exploring the difficulties that the Bureau may face updating MAF/TIGER along the Gulf Coast. Because time is running short, it is imperative that the Bureau continue to stay focused on identifying and resolving problems to ensure that the most accurate and complete address file and maps are produced for the 2010 Census. To mitigate potential risks facing the Bureau as it plans for 2010 and to ensure a more complete and accurate address file for the 2010 Census, we recommend that the Secretary of Commerce direct the U.S. Census Bureau to take the following three actions: Establish firm deadlines to complete research, testing, and evaluations of the MAF to prevent missed, deleted, and duplicate addresses, as well as map errors, and develop an action plan that will allow sufficient time for the Bureau to revise or establish methodologies and procedures for building the 2010 MAF. Reevaluate the 2010 address canvassing schedule in areas affected by bad weather, as well as staffing levels, to ensure that the status of all housing units is accurately verified throughout the entire country. Develop a plan, prior to the start of LUCA in August 2007, that will assess whether new procedures, additional resources, or local partnerships may be required to update the MAF/TIGER databases for areas along the Gulf Coast affected by hurricanes Katrina and Rita. 
On June 2, 2006, the Department of Commerce forwarded written comments from the Bureau on a draft of this report. The Bureau agreed with each of our three recommendations and also noted actions it was taking to address the recommendations. The Bureau's comments also included some technical corrections and suggestions where additional context was needed, and we revised the report to reflect these comments as appropriate. The comments are reprinted in their entirety in appendix II. In responding to the first recommendation to develop an action plan that will allow sufficient time to revise or establish methodologies or procedures for building the 2010 MAF, the Bureau stated that it would revise its action plan to reflect final milestones for research to be completed in time for the 2010 Census. Regarding the second recommendation to reevaluate the 2010 address canvassing schedule, as well as its staffing, the Bureau stated that this will be a challenge but that it is committed to developing a new schedule. Finally, in addressing our third recommendation to develop a plan to assess whether new procedures, additional resources, or local partnerships may be required to update the MAF/TIGER databases for areas affected by hurricanes Katrina and Rita, the Bureau stated that it was working on a proposal for additional work in the areas affected by hurricanes Katrina and Rita. The Bureau also noted that conducting additional work will be subject to obtaining funding. We are sending copies of this report to other interested congressional committees, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others upon request. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-6806 or farrellb@gao.gov if you have any questions about this report. 
Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To determine the extent to which the Bureau's MAF/TIGER modernization efforts are addressing problems experienced during the 2000 Census, we reviewed pertinent documents, including evaluations of the 2000 Census conducted by GAO, the Bureau, the National Academy of Sciences, and the Department of Commerce's Office of Inspector General. To determine the status of those efforts, we also interviewed cognizant Bureau officials in the Geography Division and Decennial Management Division responsible for implementing the modernization efforts. To assess the extent to which past problems were being addressed, we compared the Bureau's current efforts—including, but not limited to, the 2010 LUCA draft plan, 2004 and 2006 test plans, other research efforts, and TIGER improvement documents—to problems identified in evaluations of the 2000 Census conducted by GAO, the Bureau, the National Academy of Sciences, and the Department of Commerce's Office of Inspector General. We reviewed the MAF/TIGER contract that was awarded in June 2002 to update the street and geographic features for the TIGER maps, as well as monthly earned-value management system (EVMS) cost and performance reports, to determine whether the deliverable schedule for the contract was on time and on budget. We did not independently verify the accuracy of the data contained in the EVMS cost and performance reports, but we did obtain a certification from the Defense Logistics Agency that the contractor's EVMS was adequate to provide timely and accurate data. To determine the extent to which the Bureau is managing emerging MAF/TIGER issues, we focused on planning documents that described proposed 2010 plans. 
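EVMS cost and performance reports of the kind reviewed here are built on a few standard earned-value metrics that compare the budgeted value of work scheduled, the value of work actually accomplished, and the cost actually incurred. A minimal sketch follows; the dollar figures are hypothetical illustrations, not data from the MAF/TIGER contract:

```python
# Standard earned-value management (EVM) metrics that underlie EVMS cost
# and performance reports. All dollar figures below are hypothetical.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Return the basic EVM variances and indices (inputs in dollars).

    planned_value: budgeted cost of work scheduled to date
    earned_value:  budgeted cost of work actually performed to date
    actual_cost:   actual cost incurred to date
    """
    return {
        "cost_variance": earned_value - actual_cost,        # > 0: under budget
        "schedule_variance": earned_value - planned_value,  # > 0: ahead of schedule
        "cpi": earned_value / actual_cost,                  # cost performance index
        "spi": earned_value / planned_value,                # schedule performance index
    }

# Hypothetical monthly snapshot: $10M of work planned, $9M earned, $9.5M spent.
m = evm_metrics(10_000_000, 9_000_000, 9_500_000)
print(m)  # cpi and spi both below 1.0: over cost and behind schedule
```

An index below 1.0 on either measure is the signal reviewers look for in such reports: CPI below 1.0 means each dollar spent is earning less than a dollar of planned work, and SPI below 1.0 means the work is falling behind the deliverable schedule.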
Specific documents we reviewed included the 2010 LUCA draft proposal, 2010 Census decision memorandums, and Bureau papers from National Academy of Sciences and Census Advisory Committee meetings. We also reviewed and compared the timeline for conducting 2000 Census address operations to the proposed plan for conducting 2010 Census address operations. We interviewed officials from the Bureau’s Geography Division and the Decennial Management Division on the 2010 plans, 2010 time lines, current status of work, and areas of concern. To assess the extent to which the Bureau is able to collect and transmit address data using new, GPS-enabled mobile computing devices, we made site visits to census offices on the Cheyenne River Reservation, South Dakota, and in Travis County, Texas, where we observed the address canvassing operation conducted during the summer of 2005 as part of the 2006 Census Test. During these site visits, we also interviewed local and regional census managers and staff, observed address data collection activities using the MCD, and attended census worker training sessions. We observed and interviewed a total of 38 census workers (16 in South Dakota and 22 in Texas) about the address canvassing operation and the use of the MCD to collect address data. However, the results of these observations are not necessarily representative of the larger universe of census workers. After our visits, we discussed our observations with the Bureau’s Technology Management Office, Field Division, Geography Division, and Decennial Management Division. Finally, to determine the extent to which the Bureau has a plan to update the address file and maps in areas impacted by hurricanes Katrina and Rita, we interviewed Bureau top management officials. Specifically, we discussed whether the Bureau had taken any steps to assess the difficulties it may encounter as it attempts to update the address file and maps and count persons affected by hurricanes Katrina and Rita. 
We conducted our work from June 2005 through April 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Carlos Hazera, Assistant Director; Sheranda Smith Campbell; Betty Clark; Tim DiNapoli; Robert Goldenkoff; Shirley Hwang; Sonya Phillips; Lisa Pearson; Ilona Pesti; and Brendan St. Amant made key contributions to this report.
To conduct a successful census, it is important that the U.S. Census Bureau (Bureau) produce the most complete and accurate address file and maps for 2010. For this review, GAO's specific objectives were to determine the extent to which (1) the Bureau's efforts to modernize the address file and maps are addressing problems experienced during the 2000 Census, (2) the Bureau is managing emerging address file and map issues, (3) the Bureau is able to collect and transmit address and mapping data using mobile computing devices (MCD) equipped with global positioning system (GPS) technology, and (4) the Bureau has a plan to update the address file and maps in areas affected by hurricanes Katrina and Rita. GAO reviewed the Bureau's progress in modernizing both the address file and maps. The Bureau's address and map modernization efforts have progressed in some areas. The Bureau is researching how to correct addresses that were duplicated, missed, deleted, and incorrectly located on maps. However, some deadlines for completing research are not firm, while other deadlines that had been set continue to slip. Thus, whether research will be completed in enough time to allow the Bureau to develop new procedures to improve the 2010 address file is unknown. Also, the Bureau has not fully addressed emerging issues. For one such issue, the Bureau has acknowledged the compressed time frame for completing address canvassing--an operation where census workers walk every street in the country to verify addresses and maps--but has not reevaluated the associated schedule or staffing workloads. Also, the Bureau has allotted only 6 weeks to conduct address canvassing that took 18 weeks in 2000, even though the operation has expanded from urban areas in 2000 to the entire country for 2010. Whether the Bureau can collect and transmit address and mapping data using the MCD is unknown. The MCD, tested during 2006 address canvassing, was slow and locked up frequently. 
Bureau officials said the MCD's performance is an issue, but a new MCD to be developed through a contract awarded in March 2006 will be reliable. However, the MCD will not be tested until the 2008 Dress Rehearsal, and if problems emerge, little time will remain to develop, test, and incorporate refinements. If after the Dress Rehearsal the MCD is found unreliable, the Bureau could face the remote but daunting possibility of reverting to the costly paper-based census of 2000. Bureau officials do not believe a specific plan is needed to update the addresses and maps for areas affected by the hurricanes. Securing a count is difficult under normal conditions, and existing procedures may be insufficient to update addresses and maps after the hurricanes' destruction--made even more difficult as streets, housing, and population will be in flux.
In serving as the federal government's human capital agency, OPM sees its role as being the President's strategic advisor on human capital issues, developing tools and providing support to agencies in their human capital transformation efforts, and assisting in making the federal government a high-performing workplace. As such, OPM, in conjunction with the Office of Management and Budget (OMB), is charged with leading the federal government's strategic management of human capital initiative, one of five governmentwide initiatives of the President's Management Agenda. In carrying out this effort, OPM's strategy is to provide human resources management leadership and services to all agencies in a manner that blends and balances flexibility and consistency. As we noted in our recent report on OPM's management challenges, OPM carries out its leadership role in a decentralized environment where both it and the agencies share responsibility for addressing the human capital and related challenges facing the federal government. OPM's role in aiding federal agencies represents a considerable challenge because federal managers have complained for years about the rigid and elaborate procedures required for federal personnel administration and have often expressed the need for more flexibility within a system that has traditionally been based on uniform rules. Reformers have long sought to decentralize the personnel system and simplify the rules, arguing that however well the system may have operated in the past, it is no longer suited to meet the needs of a changing and competitive world. In 1983, for example, NAPA published a report critical of excessive restrictions on federal managers, including constraints on their human resources decisions. 
In response to these criticisms, OPM has, over time, decentralized and delegated many personnel decisions to the agencies and has encouraged agencies to use human capital flexibilities to help tailor their personnel approaches to accomplish their unique missions. Our strategic human capital management model also advocates that agencies craft a tailored approach to their use of available flexibilities by drawing on those flexibilities that are appropriate for their particular organizations and their mission accomplishment. Because of this tailoring, the federal personnel system is becoming more varied, notwithstanding its often-cited characterization as a “single employer.” The overall trend toward increased flexibility has revealed itself in a number of ways, including the efforts of some agencies to seek congressional approval to deviate from the personnel provisions of Title 5 of the U.S. Code that have traditionally governed much of the federal government’s civil service system. As observed in a 1998 OPM report, federal agencies’ status relative to these Title 5 personnel requirements can be better understood by thinking of them on a continuum. On one end of the continuum are federal agencies that generally must follow Title 5 personnel requirements. These agencies do not have the authority, for example, to establish their own pay systems. On the other end of the continuum are federal agencies that have more flexibility in that they are exempt from many Title 5 personnel requirements. For example, the Congress provided the Tennessee Valley Authority and the Federal Reserve Board with broad authority to set up their own personnel systems and procedures. This trend toward greater flexibility, in fact, has gained momentum to the extent that about half of federal civilian employees are now exempt from at least some of the personnel-related requirements of Title 5. 
For example, the Federal Aviation Administration, the Internal Revenue Service, and the new Department of Homeland Security are exempt from key Title 5 requirements. In addition to receiving congressional authorizations for exemptions from the personnel-related requirements of Title 5, other mechanisms are available to initiate human capital innovations and flexibilities within federal agencies. OPM has the authority to reassess and make changes to its existing regulations and guidance to supply agencies with additional flexibilities. Additionally, a federal agency can obtain authority from OPM to waive some existing federal human resources laws or regulations through an OPM-sponsored personnel demonstration project. The aim of these demonstration projects is to encourage experimentation in human resources management by allowing federal agencies to propose, develop, test, and evaluate changes to their own personnel systems. In some cases, Congress has allowed some agencies to implement alternatives that have been tested and deemed successful. For example, more flexible pay approaches that were tested within the Department of the Navy's China Lake (California) demonstration project in the early 1980s were eventually adopted by other federal agencies, such as the Department of Commerce's National Institute of Standards and Technology. In December 2002, we reported on agency officials' and union representatives' views regarding various issues related to flexibilities. 
According to the agency officials and union representatives we interviewed, existing flexibilities that are most effective in managing the workforce are work-life policies and programs, such as alternative and flexible work schedules, transit subsidies, and child care assistance; monetary recruitment and retention incentives, such as recruitment bonuses and retention allowances; special hiring authorities, such as student employment and outstanding scholar programs; and incentive awards for notable job performance and contributions, such as cash and time-off awards. Agency and union officials also identified five categories of additional human capital flexibilities as most helpful if authorized for their agencies: (1) more flexible pay approaches, (2) greater flexibility to streamline and improve the federal hiring process, (3) increased flexibility in addressing employees’ poor job performance, (4) additional workforce restructuring options, and (5) expanded flexibility in acquiring and retaining temporary employees. Furthermore, we reported that the agency managers and supervisors and human resources officials we interviewed generally agreed that additional human capital flexibilities could be authorized and implemented in their agencies while also ensuring protection of employees’ rights. Union representatives, however, expressed mixed views on the ability of agencies to protect employee rights with the authorization and implementation of additional flexibilities. Specifically, several union representatives said that managers could more easily abuse their authority when implementing additional flexibilities, and that agency leaders often do not take appropriate actions in dealing with abusive managers. Based on our interviews with human resources directors from across the federal government and our previous human capital work, we also reported on six key practices that agencies should implement to use human capital flexibilities effectively. 
Figure 1 identifies these key practices. Lastly, in our December 2002 report, we noted that agency and union officials identified several significant reasons why agencies have not made greater use of the human capital flexibilities that are available to them. These reported barriers, which have hampered agencies in maximizing their use of available flexibilities, included: agencies' weak strategic human capital planning and inadequate funding for using these flexibilities given competing priorities; managers' and supervisors' lack of awareness and knowledge of the flexibilities; managers' and supervisors' belief that approval processes to use specific flexibilities are often burdensome and time-consuming; and managers' and supervisors' concerns that employees will view the use of various flexibilities as inherently unfair, particularly given the common belief that all employees must be treated essentially the same regardless of job performance and agency needs. As noted in our report, the recently enacted Homeland Security Act of 2002 provided agencies with a number of additional flexibilities relating to governmentwide human capital management. For example, agencies will now be permitted to offer buyouts to their employees without the requirement to reduce their overall number of employees. The legislation also permits agencies to use a more flexible approach in the rating and ranking of job candidates (categorical rating) during the hiring and staffing process. The Act also created chief human capital officer (CHCO) positions for the largest federal departments and agencies, an interagency CHCO Council, and a requirement that agencies discuss their human capital approaches in their annual performance plans and reports under the Government Performance and Results Act. OPM deems that its role related to human capital flexibilities is broader than merely articulating policies that federal agencies use in managing their workforces. 
OPM sees that it has an important leadership role in identifying, developing, and applying human capital flexibilities across the federal government. As such, OPM has several initiatives underway with the goal of assisting federal agencies in using available flexibilities and identifying additional flexibilities that might be beneficial for agencies. One of OPM’s primary functions related to assisting agencies in the use of human capital flexibilities is to serve as a clearinghouse for information through a variety of sources, including its Web site. For example, OPM prepared and posted on its Web site a handbook on personnel flexibilities generally available to federal agencies. This handbook, Human Resources Flexibilities and Authorities in the Federal Government, describes the flexibilities that agencies can use to manage their human capital challenges and provides information about the statutory and regulatory authorities for the specific flexibilities. OPM has also established Web-based clearinghouses of information on best practices in two areas of human resources management: employee performance management and accountability. OPM said that it has received positive feedback on these two Web-based clearinghouses and that many of OPM’s customers have said that the information has been useful to them in researching information and when redesigning human resources-related programs. OPM is also developing a Preferred Practices Guide that it said would highlight efficient and effective hiring practices using existing hiring flexibilities. To assist in developing this guide, OPM in July 2002 asked federal human resources directors to share information with OPM about their improved results in areas related to hiring by using newly developed practices, strategies, and methods that could assist other agencies in addressing similar challenges. 
According to OPM, the contents of this Web-based document will likely parallel the steps of the federal hiring process and encompass areas such as workforce planning, recruitment, assessment, and retention. The guide is also expected to include actual examples of agency hiring practices, such as the Emerging Leaders Program, a 2-year career development intern program created by the Department of Health and Human Services, and the Recruitment "Timely Feedback" Executive Tool, a monthly reporting and accountability system for gauging progress on recruiting initiatives that was established at the Social Security Administration. This Preferred Practices Guide, which OPM plans to post on its Web site in early 2003, would complement other ongoing OPM hiring-related efforts to encourage agencies to (1) provide interested persons with timely and informed responses to questions about the federal recruiting process, (2) develop clear and understandable job announcements, and (3) provide job applicants with regular updates on the status of their applications as significant decisions are reached. OPM has also issued a report entitled Demonstration Projects and Alternative Personnel Systems: HR Flexibilities and Lessons Learned, which contains lessons learned about implementing change to improve federal human capital management. According to OPM, these lessons learned are based on the testing of several personnel flexibilities in a wide variety of demonstration projects and alternative personnel systems at federal agencies over the past 20 years. OPM said that agency officials from the various projects collaborated with OPM staff in developing the report. The lessons learned in OPM's report are similar to the key practices that we recently reported on for effectively using human capital flexibilities. 
OPM has also committed the assistance of its various experts to help agencies with human capital issues and challenges, including use of the various flexibilities available to agencies. OPM has established a human capital team of desk officers who serve as liaisons with agencies and who are to work closely with the agencies to help them in responding to the President's Management Agenda. For agencies that have made less progress in strategic human capital planning and action, these desk officers provide coaching and assistance and establish contacts with OPM's program office experts. OPM said that when working with their assigned agency representatives, the desk officers take full advantage of all available OPM resources, including clearinghouse information, to help agencies identify available flexibilities. For example, OPM said that its desk officer for the Department of Education fielded an inquiry that led to on-site assistance in the planning and implementation of a demonstration project for that department. OPM has also formed "strike force teams," created on an ad hoc basis, to provide expedited service to agencies with critical, time-sensitive human capital needs. These strike force teams are to serve as a single focal point through which agencies can get assistance and advice on a wide range of topics and issues, including the implementation of human capital flexibilities. OPM has created strike force teams for several agencies, including the Department of Housing and Urban Development, the Department of Justice (DOJ), and the Transportation Security Administration. For example, at the request of the Assistant Attorney General, a strike force team worked with DOJ human resources staff to develop and present a briefing on human resource flexibilities for DOJ political appointees. OPM is also working jointly with the new Department of Homeland Security to prescribe regulations for the department's human resources management system. 
OPM also holds conferences, training sessions, and other meetings to share information with agency officials, including material on the availability of flexibilities. For example, OPM conducts an annual conference to provide federal managers and human resources practitioners with updates and other information about the federal compensation environment, including topics such as pay and leave administration, performance management, position classification, and efforts to improve the compensation tools available to support agency missions. As an example of its training function, OPM, in collaboration with OMB, presented a half day of training on personnel authorities available to agencies as part of transition training for new political appointees. OPM said that it also held one-on-one meetings with more than 30 agencies to discuss telework, learn about agency initiatives in this area, and find out how OPM can assist agencies in expanding telework opportunities. In addition, OPM has realigned its own organizational structure and workforce. OPM's goal was to create a new, flexible structure that will "de-stovepipe" the agency; enable it to be more responsive to its primary customers, federal departments and agencies; and allow it to focus on the agency's core mission. For example, OPM has decided to put its various program development offices under the control of one associate director and its product and services functions under another associate director to ensure that it appropriately and efficiently responds to its customers. Effective implementation of OPM's latest organizational and workforce realignment will be crucial to maximizing its performance as the federal government's human capital leader, assuring its own and other agencies' accountability, and ultimately achieving its goals. OPM has furthermore initiated some efforts to assist agencies in identifying additional flexibilities that might be effective in helping the agencies manage their workforces. 
For example, OPM said that it has actively supported passage of proposed legislation that would enhance human capital flexibilities and provide more latitude for flexible implementing regulations. OPM told us, for example, that it developed and drafted a significant portion of the proposed Managerial Flexibility Act of 2001, a bill intended to give federal managers tools and flexibility in areas such as personnel, budgeting, and property management and disposal. This proposed legislation did not pass the 107th Congress, although several related provisions were included in the recently enacted Homeland Security Act of 2002. OPM officials told us that these legislative efforts should serve as evidence that OPM can and does identify areas where changes to statute would provide more flexibility to agencies. Moreover, one component of the proposed legislation that was not enacted included streamlining the process for implementing demonstration projects and creating a mechanism to export tested innovations to other federal organizations. OPM believes that to get a better return on investment from years of demonstration project evaluations, a method short of separate legislation should exist for converting successfully tested alternative systems and flexibilities to permanent programs and for making them available to other agencies. OPM has taken other actions to assist agencies in identifying additional flexibilities that they could use to manage their workforces. For example, in its HR Flexibilities and Lessons Learned report, OPM identified personnel flexibilities that have been tested and evaluated through demonstration projects or alternative personnel systems over the last 20 years. OPM said that during the development of the Managerial Flexibility Act, the President's Management Council requested information on existing flexibilities and that OPM created its report in response to that request in an effort to catalogue these flexibilities in one document. 
OPM said that some of the flexibilities catalogued in its report have been thoroughly tested over time in a variety of environments, while others have more limited agency applicability and thus have more limited data to show their success. Some of these flexibilities outlined in the report correspond to the types of flexibilities that agency and union officials told us could be beneficial for their agencies, such as broadbanded pay systems, categorical rating for hiring, and expanded probationary periods for new employees. OPM recognizes that additional efforts are needed to address key personnel challenges within the federal workforce, particularly in the areas of pay and hiring. In April 2002, OPM released a report that presents the case for the need for reform of the white-collar federal pay system under which 1.2 million General Schedule federal employees are paid. Without recommending a specific solution, OPM's report stresses the importance of developing a contemporary pay system that is more flexible, market-sensitive, and performance-oriented as well as a better tool for improving strategic human capital management. Also, OPM said that in the coming months it will identify additional projects and proposals that will address systemic problems associated with the hiring process. These additional initiatives will include deploying competency-based qualifications, improving entry-level hiring, and updating and modernizing exam scoring policy. According to OPM officials, as it moves forward on these pay and hiring initiatives, OPM will assess what additional flexibilities and tools might be needed for agencies as they look for ways to better manage their workforces. Although federal agencies have the primary responsibility to maximize their use of human capital flexibilities, OPM also plays a key role in facilitating agencies' use of existing flexibilities as well as identifying new personnel authorities that agencies might need in managing their workforces. 
The views of agencies’ human resources directors can help to provide indications of the progress that OPM has made in its important role related to human capital flexibilities. We therefore surveyed the human resources directors for the 24 largest departments and agencies in the federal government to obtain their views on OPM’s role related to flexibilities. In the surveys we conducted in the fall of 2001 and again in the fall of 2002, the human resources directors for the largest departments and agencies gave mixed views on their satisfaction with OPM’s role in assisting their agencies in using available human capital flexibilities. Figure 2 depicts the directors’ responses on this issue for both 2001 and 2002. In 2002, 7 of the 24 responding directors said that they were satisfied to “little or no” or “some” extent regarding OPM’s role in assisting their agencies in using available flexibilities. Conversely, 7 of the 24 responding directors in 2002 said that they were satisfied to a “great” or “very great” extent with OPM’s role in assisting their agencies with available flexibilities. Overall for 2002 on this issue, the average satisfaction level of the human resource directors was unchanged between 2001 and 2002. Specifically, for 2002 our survey showed that for five agencies, the director’s level of satisfaction was greater than the level of satisfaction for that agency’s human resources director from the previous year; for five agencies, the directors’ level of satisfaction was less than the level of satisfaction for that agency’s human resources director from the previous year. In our interviews with the human resources directors regarding the issue of OPM’s role in assisting agencies in the use of available flexibilities, several of the directors said that OPM communicates well with agencies through e- mails, meetings, workgroups, and its Web site and has taken some action to disseminate information about existing flexibilities. 
One director, for example, commended OPM for effectively using its Web site to share information about what flexibilities are generally available to agencies. Another director praised OPM for the positive actions it had taken with respect to facilitating work-life programs for federal employees. However, directors frequently commented that OPM often puts its own restrictive interpretation on the use of flexibilities, surrounding them with too many regulations that can make their use unduly complicated and more difficult; regulations and guidance on implementing the Federal Career Intern Program were mentioned frequently in this regard, for example. Several directors argued that their agencies should be able to implement human capital flexibilities in the most flexible fashion, not the most restrictive. One director expressed the opinion that, although the upper management of OPM may support using flexibilities, middle management and lower-level staff within the agency seemed resistant to change and sometimes hampered the efforts of agencies in the use of flexibilities. This director wanted to see OPM play a more facilitative and consultative role, working in concert with agencies. In addition, directors from several agencies stated that OPM needs to host additional forums to share experiences on the use of existing human capital flexibilities, with OPM more fully serving as a clearinghouse in making flexibilities and effective practices more widely known to agencies. While the human resources directors we surveyed gave mixed views on their satisfaction with OPM's role related to available flexibilities, the directors were less satisfied with OPM's role in assisting agencies in identifying additional human capital flexibilities that could be authorized. However, the directors' extent of satisfaction on this issue, as measured in our survey, was greater in 2002 than in 2001. Figure 3 depicts the directors' responses on this issue for both 2001 and 2002. 
In 2002, 11 of the 24 responding directors said that they were satisfied to "little or no" or "some" extent regarding OPM's role in identifying additional flexibilities that could be authorized for agencies. Conversely, 6 of the 24 responding directors said that they were satisfied to a "great" or "very great" extent regarding OPM's role in identifying additional flexibilities. For seven agencies, the director's 2002 level of satisfaction was greater than that of the agency's human resources director the previous year; for four agencies, it was less. One human resources director we interviewed said, for example, that OPM has done a commendable job of listening to agencies' concerns about the need for additional flexibilities, particularly through the Human Resources Management Council, an interagency organization of federal human resources directors. However, several directors said that OPM needs to play a more active role in identifying flexibilities that agencies might use to manage their workforces. Several human resources directors said that OPM should be doing more to conduct or coordinate personnel management research on additional flexibilities that might prove effective for agencies to use in managing their workforces. Several of these directors also told us that OPM should work more diligently to support governmentwide authorization and implementation of innovative human capital practices and flexibilities that have been sufficiently tested and deemed successful, such as those tested in OPM-sponsored personnel demonstration projects. According to many of the human resources directors we interviewed, OPM needs to play a larger role in acting as a change agent to get human capital legislation passed and implemented. 
While recognizing that OPM cannot promote legislation that is inconsistent with the administration’s views of the civil service, human resources directors said that OPM should be the policy leader in the area of human capital and, as the leader, should push harder for major civil service reform. In the human resources directors’ opinions, OPM needs to look at personnel reforms in a new, open, and objective way and develop changes to current laws and regulations to ensure that agencies can effectively obtain and manage their workforces. In addition, some directors expressed frustration about the lack of coordination between OPM and OMB in responding to OMB’s request for agencies to complete workforce planning and restructuring analyses. Further, they said that OPM, OMB, and Congress need better communication and coordination in developing budgets and recognizing the costs involved in using human capital flexibilities. Assisting federal agencies in using available flexibilities and in identifying additional flexibilities is an important part of OPM’s overall goal of aiding agencies in adopting human resources management systems that improve their ability to build successful, high-performance organizations. In testimony before Congress in February of 2001, we suggested two areas in which OPM could make substantial additional contributions in addressing the federal government’s human capital challenges. The first was in reviewing existing OPM regulations and guidance to determine their continued relevance and utility by asking whether they provide agencies with the flexibilities they need while incorporating protections for employees. The second area was in making existing human capital flexibilities and effective practices more widely known to the agencies, and in taking fullest advantage of OPM’s ability to facilitate information-sharing and outreach to human capital managers throughout the federal government. 
Although OPM has taken concerted action in some areas to assist agencies in using flexibilities, OPM has taken limited actions related to these two areas. Moreover, OPM could do more to assist agencies in identifying additional human capital flexibilities that could be authorized and could also work actively to build consensus to support related legislation that might be needed. Greater attention to these areas could allow OPM to more fully carry out its leadership role of assisting agencies in identifying, developing, and applying human capital flexibilities across the federal government. As we noted in the previous testimony, as OPM continues to move from "rules to tools," its more valuable contributions in the future will come less from traditional compliance activities than from its initiatives as a strategic partner to agencies. Just as agencies need to streamline and improve their own internal administrative processes to effectively use flexibilities, OPM similarly needs to ensure that its regulations and guidance provide adequate flexibility while also recognizing the importance of ensuring fairness and incorporating employee protections. As we noted in our December 2002 report, if senior managers within agencies want supervisors to make effective use of flexibilities, supervisors must view agencies' internal processes for using the flexibility as worth their time compared to the expected benefit to be gained in implementing the flexibility. Similarly, if OPM wants agencies to make effective use of flexibilities, agencies must view OPM's regulatory requirements for using the flexibility as worth the expected benefits that the flexibility would provide. In comments that it provided in response to our December 2002 report, OPM said that it is undertaking a review of its regulations and guidance. 
According to OPM, the purpose of this regulatory review, which began in December 2001, is to restate regulations in plainer language wherever possible, to eliminate redundant or obsolete material, and to revise regulations to make them more easily usable by a variety of readers. OPM said that because it has focused chiefly on making the regulations as readable as possible, rather than making substantive changes, the agency did not anticipate making changes to provide additional flexibility as part of this effort. OPM said that its Office of General Counsel, which is leading the regulatory review, has been carrying it out by working with OPM's program offices to establish basic protocols, selecting provisions that require elimination or redrafting, soliciting drafts from the offices, and then reviewing and revising these drafts in conjunction with the OPM program staff. OPM said that it amends its regulations to provide flexibility, on an as-needed basis, in the ordinary course of carrying out the OPM Director's policies. In response to our request for examples of regulations that it has redrafted under this effort, OPM said it was reviewing all of the regulations in chapter I of Title 5 of the Code of Federal Regulations but that it was not yet in a position to supply examples because it had recently begun to submit some of the redrafted material to OMB for clearance. Nonetheless, a report we recently issued included an example of where OPM revised regulations to, at least in part, provide additional flexibility to agencies. In the fall of 2000, OPM amended regulations on evaluating the job performance of senior executives within the federal government. OPM's goal in developing these regulations was to help agencies hold their senior executives accountable by increasing agency flexibility, focusing on results, emphasizing accountability, and improving links between pay and performance. 
These changes were to balance the agencies' desire for maximum flexibility with the need for a corporate approach that safeguards merit principles. OPM's changes to the regulations included paring back many of the previous requirements to those in statute to give agencies more flexibility to tailor their performance management systems to their unique mission requirements and organizational cultures. OPM made these regulatory changes in part because performance management systems have tended to focus on process over results. Because providing additional flexibility has not been a fundamental purpose of its current regulatory review, OPM is not taking advantage of a crucial opportunity to provide additional flexibility, where appropriate, on a systematic basis rather than through a piecemeal, ad hoc approach. Human resources directors we interviewed often said that OPM should provide agencies with greater delegation to carry out their human capital programs. For example, some directors commented that agencies should be able to waive the annuity offsets for reemployed annuitants without obtaining authority from OPM. Some directors also told us that OPM should allow agencies to extend the probationary periods for newly hired employees beyond the standard 1-year period. Directors also said that OPM's guidance for implementing human capital programs could sometimes be overly restrictive and burdensome. For example, some directors said that OPM's internal approval and evaluation processes for personnel demonstration projects needed to be streamlined to make the program more practical. One director told us, for instance, that her agency had considered applying for a demonstration project but demurred because officials at her agency viewed OPM's requirements as too burdensome. 
It is important to note that human resources directors we interviewed also expressed interest in gaining increased flexibilities that would require changes in federal statute and thus are outside of OPM’s authority to change independently. Directors commented on such areas as decreasing some of the limitations and parameters of allowable personnel demonstration projects. As we noted in recent testimony, OMB and the Congress have key roles in improving human capital management governmentwide, including the important responsibility of determining the scope and appropriateness of additional human capital flexibilities agencies may seek through legislation. In recent testimony on using strategic human capital management to drive transformational change, we noted the potential benefits of providing additional flexibility in the government’s personnel systems by suggesting, for example, that the Congress may wish to explore the benefits of allowing agencies to apply to OPM on a case-by-case basis (i.e., case exemptions) for authority to establish more flexible pay systems for certain critical occupations or, even more broadly, allowing OPM to grant governmentwide authority for all agencies (i.e., class exemptions) to use more flexible pay systems for their critical occupations. In our December 2002 report on human capital flexibilities, we noted that one of the key factors for effectively using flexibilities is educating agency managers and employees on the availability of these flexibilities as well as about the situations where the use of those flexibilities is most appropriate. Ultimately the flexibilities within the personnel system are only beneficial if the managers and supervisors who would carry them out are aware of not only their existence but also the best manner in which they could be used. 
With a comprehensive clearinghouse and broad information sharing about flexibilities, OPM can greatly assist agencies in educating their managers and supervisors as well as preparing their human capital managers for their consultative role regarding the best manner in which the full range of flexibilities should be implemented. This information would also be useful to support OPM's oversight of agencies' use of personnel flexibilities. OPM has not, however, fully maximized its efforts to make human capital flexibilities and effective practices more widely known to agencies. Although OPM has made efforts to inform agencies of what flexibilities are generally available and why their use is important, OPM has yet to take full advantage of its ability to compile, analyze, and distribute information about when, where, and how the broad range of flexibilities are being used, and should be used, to help agencies meet their human capital management needs. Human resources directors we interviewed frequently said that OPM needs to take further, determined action on this issue. One human resources director said, for example, that OPM should be setting benchmarks and identifying best practices for using flexibilities. Another director added that OPM should provide agencies with different scenarios of how flexibilities can be used. Another director commented that OPM needs to develop more educational and training aids to inform agency officials about these best practices. Yet another director added that OPM should evaluate the effectiveness of many different flexibilities and share the results with other agencies. OPM officials told us that they generally do not know which federal agencies have used specific flexibilities effectively or which practices these agencies employed to produce effective results. 
OPM could use its outreach and information-sharing efforts to more thoroughly identify which federal agencies are using the various flexibilities in effective ways and to report on the particular practices that these agencies use to implement their flexibilities. Examination of information from OPM's database of federal civilian employees, the Central Personnel Data File, could help OPM in such analysis, including identifying possible correlations between an agency's use of flexibilities and factors such as employees' occupations, grade levels, and duty stations. This compilation, analysis, and distribution of information could also include research OPM conducts or sponsors that may shed light on effective practices for implementing existing flexibilities. OPM could also use this analysis of agencies' use of flexibilities in its oversight role. OPM's new Human Capital Assessment and Accountability Framework provides guidance for agencies to maximize their human capital management and is being used by OPM to evaluate agencies' progress. For example, under one of the framework's six standards for success, key questions to consider include the following: Does the agency use flexible compensation strategies to attract and retain quality employees who possess mission-critical competencies? Does the agency provide work/life flexibilities, facilities, services, and programs to make the agency an attractive place to work? The information gathered on personnel flexibilities could assist OPM in its assessment of this standard. In addition, OPM has the responsibility not only to review whether agencies are maximizing the use of personnel flexibilities, but also, along with agencies, to ensure that flexibilities are being used fairly and are consistent with the merit principles and other national goals and include appropriate safeguards. 
The human resources directors we interviewed said that OPM could do more to assist agencies in identifying additional human capital flexibilities that could be authorized. The information gathered on agencies’ use of flexibilities could also be used to gain greater insight into agencies’ needs for additional flexibilities that might help them manage their human capital. In our discussions with OPM about its efforts in assisting agencies with flexibilities, OPM officials told us that it was not feasible to identify or track all agency requests for additional flexibilities because such requests are received throughout the organization and range from casual questions to formal requests for exceptions or demonstration projects. Tracking such requests, however, could assist OPM in gaining a clearer picture of agency concerns and requests for additional tools and flexibilities, as well as in more comprehensively documenting agency needs for the benefit of policymakers as statutory and regulatory changes are proposed and considered. The Chief Human Capital Officers Council, recently established by legislation and chaired by the OPM Director, could also aid in disseminating information about effective human capital practices. We have reported that similar interagency councils of chief financial officers and chief information officers, which among other things share information about effective practices, were one of the major positive public management developments of the past decade. Once OPM determines that additional flexibilities are needed, it could actively work to build consensus to support needed legislation. As noted earlier, OPM actively supported legislation in the last Congress to authorize additional flexibilities for agencies. Specifically, OPM drafted and supported a significant portion of the proposed Managerial Flexibility Act of 2001.
OPM could continue to support such legislation and identify additional personnel flexibilities that are needed. The ineffective use of flexibilities can significantly hinder the ability of federal agencies to recruit, hire, retain, and manage their human capital. To deal with their human capital challenges, it is important for agencies to assess and determine which human capital flexibilities are the most appropriate and effective for managing their workforces. As we previously reported, to ensure more effective use of human capital flexibilities, it is important that agencies (1) plan strategically and make targeted investments, (2) ensure stakeholder input in developing policies and procedures, (3) educate managers and employees on the availability and use of flexibilities, (4) streamline and improve administrative processes, (5) build accountability into their systems, and (6) change their organizational cultures. By more effectively using flexibilities, agencies would be in a better position to manage their workforces, assure accountability, and transform their cultures to address current and emerging demands. OPM provides the necessary link to agencies to accomplish their goals by making existing human capital flexibilities more widely known and easier to use and by identifying additional flexibilities that can help agencies better manage their workforces. While it has taken some actions to inform agencies about what flexibilities are generally available and why their use is important, OPM has significant opportunities to strengthen its role as it moves forward to assist agencies as an integral part of the administration’s human capital transformation efforts. By taking hold of these opportunities, OPM could more successfully aid agencies with more comprehensive information about the tools and authorities available to them for managing their workforce and the most effective ways that these flexibilities can be implemented.
The new CHCO Council could be an excellent vehicle to assist in these areas. Given the importance of the effective use of flexibilities as a critical part of improved human capital management within the federal government, and consistent with OPM’s ongoing efforts in this regard, we recommend that the Director of OPM take the following actions:
● Review existing OPM regulations and guidance to determine whether they provide agencies with needed flexibility while also incorporating protection for employees.
● Work with and through the new Chief Human Capital Officers Council to more thoroughly research, compile, and analyze information on the effective and innovative use of human capital flexibilities and more fully serve as a clearinghouse in sharing and distributing information about when, where, and how the broad range of flexibilities are being used, and should be used, to help agencies meet their human capital management needs.
● Continue to identify additional personnel flexibilities needed to better manage the federal workforce and then develop and build consensus for needed legislation.
OPM commented on a draft of this report and agreed with the conclusions and recommendations. OPM pointed out that in future studies that address OPM leadership issues, in addition to surveying agency human resource directors, we should also survey agency chief operating officers. OPM believed that the chief operating officers have the “best perspective and the widest array of information about recruitment and retention issues.” We agree that such future studies would benefit from the perspectives of chief operating officers. OPM’s complete comments are shown in appendix II. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date.
At that time, we will send copies to the Chairman, Senate Committee on Governmental Affairs, and the Chairman and Ranking Minority Member, House Committee on Government Reform, and other interested congressional parties. We will also send copies to the Director of OPM. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6806. Key contributors to this report are listed in appendix III. This report is the second of two reports responding to a request from the Senate Committee on Governmental Affairs and two of its subcommittees regarding the use of human capital flexibilities in managing agency workforces. The objectives of our first report, issued in December 2002, were to provide information on (1) actions that federal agencies can take to more effectively implement human capital flexibilities and (2) agency and union officials’ views related to the use of human capital flexibilities. The objectives of this report were to provide information on actions that the Office of Personnel Management (OPM) has taken to facilitate the effective use of human capital flexibilities throughout the federal government as well as what additional actions OPM might take in this regard. Our work in responding to this request was conducted in two phases. Phase one of our work primarily involved surveying and interviewing the human resources directors from the 24 largest departments and agencies. Phase two of our work involved conducting semi-structured interviews with managers and supervisors, human resources officials, and local union representatives from seven federal agencies we selected for more detailed review. This report was developed primarily from our work during phase one.
To respond to the objectives of this report, we gathered information from a variety of sources using several different data collection techniques. During phase one of our work, we interviewed representatives from OPM, the federal government’s human resources agency; Merit Systems Protection Board, a federal agency that hears and decides civil service cases, reviews OPM regulations, and conducts studies of the federal government’s merit systems; and the National Academy of Public Administration, an independent, nonpartisan, nonprofit, congressionally chartered organization that assists federal, state, and local governments in improving their performance. We interviewed representatives of these three organizations to gather background information on the federal government’s experiences with and use of human capital flexibilities and OPM’s role in assisting agencies in their use of personnel flexibilities. We also reviewed numerous reports issued by these organizations on governmentwide human capital issues, the use of various human capital flexibilities in federal agencies, and the role of OPM. In addition, we reviewed previous GAO reports on a broad range of human capital issues. In the fall of 2001, we also gathered information for our objectives by conducting semistructured interviews with the human resources directors of the 24 largest federal departments and agencies. To produce a general summary of the human resources directors’ views, we first reviewed their responses to the open-ended questions we had posed to them. Based on our analysis of those responses, we identified a set of recurring themes and then classified each director’s responses in accord with these recurring themes. At least two staff reviewers collectively coded the responses from each of the 24 interviews and the coding was verified when entered into a database we created for our analysis. 
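The two-reviewer coding and verification step described above can be sketched in code. This is a hypothetical illustration, not GAO's actual tooling: the theme labels, director identifiers, and reconciliation logic are all invented for the example.

```python
# Hypothetical sketch of dual-coder verification for open-ended survey
# responses. Theme labels and respondent IDs are invented; the actual
# GAO coding scheme and analysis database are not public.

from collections import Counter

# Each reviewer independently assigns recurring-theme codes to a
# director's open-ended response.
reviewer_a = {
    "dir01": {"needs_benchmarks", "wants_scenarios"},
    "dir02": {"needs_training_aids"},
    "dir03": {"needs_benchmarks", "evaluate_effectiveness"},
}
reviewer_b = {
    "dir01": {"needs_benchmarks", "wants_scenarios"},
    "dir02": {"needs_training_aids", "needs_benchmarks"},
    "dir03": {"needs_benchmarks", "evaluate_effectiveness"},
}

def verify_coding(a, b):
    """Flag responses where the two coders disagree, so the codes can
    be reconciled before entry into the analysis database."""
    return {resp for resp in a if a[resp] != b.get(resp)}

def theme_counts(coded):
    """Tally how many respondents raised each recurring theme."""
    return Counter(theme for codes in coded.values() for theme in codes)

print(verify_coding(reviewer_a, reviewer_b))  # responses needing reconciliation
print(theme_counts(reviewer_a))
```

In this sketch, only responses where the two coders produced identical theme sets pass verification; the rest are flagged for reconciliation, mirroring the "coding was verified" step in the methodology.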
In addition, prior to our interviews with the 24 human resources directors, each of the 24 officials completed a survey of seven closed-ended questions dealing with agencies’ use of human capital flexibilities, OPM’s role related to these flexibilities, and the federal hiring process. To update this information, we resurveyed the 24 individuals serving in the agencies’ human resources director positions in the fall of 2002, asking the same seven questions. During the period between the 2001 and 2002 surveys, 16 of the 24 individuals serving in the positions of human resources directors had changed. Table 1 shows the questions from these surveys along with a summary of the answers provided. For each item, respondents were to indicate the strength of their perception on a 5-point scale, from “little or no extent” to “very great extent.” Our audit work on both phases of our review was done from May 2001 through November 2002. We conducted our audit work in accordance with generally accepted government auditing standards. In addition to the persons above, K. Scott Derrick, Charlesetta Bailey, Tom Beall, Ridge Bowman, Karin Fangman, Molly K. Gleeson, Judith Kordahl, Shelby D. Stephan, Gary Stofko, Mike Volpe, and Scott Zuchorski made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. 
GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
Congressional requesters asked GAO to provide information on actions that the Office of Personnel Management (OPM) has taken to facilitate the effective use of human capital flexibilities throughout the federal government and what additional actions OPM might take in this regard. These flexibilities represent the policies and practices that an agency has the authority to implement in managing its workforce. OPM Has Taken Several Actions to Assist Agencies: OPM has an important leadership role in identifying, developing, applying, and overseeing human capital flexibilities across the federal government. OPM has taken several actions to assist federal agencies in effectively using the human capital flexibilities that are currently available to agencies. For example, OPM has issued a handbook for agencies that identifies the various flexibilities available to help manage their human capital. Also, OPM has initiated some efforts to assist agencies in identifying additional flexibilities that might be helpful to agencies in managing their workforces. Human Resources Directors Gave Mixed Views on OPM's Role: To yield indications of the progress that OPM has made in its important role related to assisting agencies in the use of human capital flexibilities, GAO surveyed the human resources directors of the federal government's 24 largest departments and agencies in the fall of 2001 and again in the fall of 2002. There was little change in the directors' level of satisfaction with OPM's role in assisting agencies in using available flexibilities, which remained mixed. For example, one director said OPM had effectively facilitated the use of work-life flexibilities, but others thought that OPM had placed its own restrictive interpretation on the use of other personnel flexibilities.
The level of satisfaction with OPM's role in identifying additional flexibilities was greater in 2002 than in 2001, but still remained below the satisfaction level for assistance with existing flexibilities. Several directors said that OPM had not worked diligently enough in supporting authorization of governmentwide use of new flexibilities that have been sufficiently tested and deemed successful. Additional OPM Actions Could Further Facilitate Use of Flexibilities: Although OPM has recently taken numerous actions, OPM could more fully meet its leadership role to assist agencies in identifying, developing, and applying human capital flexibilities across the federal government. In its ongoing internal review of its existing regulations and guidance, OPM could more directly focus on determining the continued relevance and utility of its regulations and guidance by asking whether they provide the flexibility that agencies need in managing their workforces while also incorporating protections for employees. In addition, OPM can maximize its efforts to make human capital flexibilities and effective practices more widely known to agencies by compiling, analyzing, and sharing information about when, where, and how the broad range of flexibilities are being used, and should be used, to help agencies meet their human capital management needs. OPM also needs to more vigorously identify new flexibilities that would help agencies better manage their human capital and then work to build consensus for the legislative action needed.
Within DHS’s Immigration and Customs Enforcement (ICE) organization, the Student and Exchange Visitor Program (SEVP) is responsible for certifying schools to accept foreign students in academic and vocational programs and for managing SEVIS. Schools and exchange programs were required to start using SEVIS for new students and exchange visitors beginning February 15, 2003, and for all continuing students and exchange visitors beginning August 1, 2003. The following tables show the number of active students, exchange visitors, and institutions registered in SEVIS as of February 28, 2005. SEVP is also responsible for providing program policies and plans; performing program analysis; and conducting communications, outreach, and training. Regarding SEVIS, SEVP is responsible for identifying and prioritizing system requirements, performing system release management, monitoring system performance, and correcting data errors. The Office of Information Resource Management, also part of ICE, manages the information technology infrastructure (that is, hardware and system software) on which the SEVIS application software is hosted. It also manages the SEVIS Help Desk and the systems life cycle process for the system, including system operations and maintenance. The software for the SEVIS application runs on a system infrastructure that supports multiple DHS Internet-based applications. The infrastructure includes common services, such as application servers, Web servers, database servers, and network connections. SEVIS shares five application servers and two Web servers with two other applications. To assist system users, the SEVIS Help Desk was established, which provides three levels of support, known as tiers: ● Tier 1 provides initial end-user troubleshooting and resolution of technical problems. ● Tier 2 provides escalation and resolution support for Tier 1, and makes necessary changes to the database (data fixes). 
● Tier 3 addresses the resolution of policy and procedural issues, and also makes data fixes.
SEVP uses a contractor to operate Tiers 1 and 2. Both the contractor and the program office operate Tier 3. According to an SEVP official, contractor staffing for Tiers 1 through 3 is as follows: Tier 1 has 21 staff, Tier 2 has 6 staff, and Tier 3 has 13 staff. Data are entered into SEVIS through one of two methods:
● Real-time interface (i.e., an individual manually enters a single student/exchange visitor record) or
● Batch processing (i.e., several student/exchange visitor records are uploaded to SEVIS at one time using vendor-provided software or software created by the school/exchange visitor program).
SEVIS collects a variety of data that are used by schools, exchange visitor programs, and DHS and State Department organizations to oversee foreign students, exchange visitors, and the schools and exchange visitor programs themselves. Data collected include information on students, exchange visitors, schools, and exchange visitor programs: for example, biographical information (e.g., student or exchange visitor’s name, place and date of birth, and dependents’ information); academic information (e.g., student or exchange visitor’s status, date of study commencement, degree program, field of study, and institution disciplinary action); school information (e.g., campus address, type of education or degrees offered, and session dates); and exchange visitor program information (e.g., status and type of program, responsible program officials, and program duration). SEVIS data are also used by a variety of users. Table 3 provides examples of users and how each uses the data. In 2002 and 2003, when SEVIS first began operating and was first required to be used, significant problems were reported.
For example, colleges, universities, and exchange programs could not gain access to the system, and when access was obtained, these users’ sessions would “time out” before they could complete their tasks. In June 2004, we reported that several performance indicators showed that SEVIS performance was improving. These indicators included system performance reports, requests for system changes to address problems, and feedback from educational organizations representing school and exchange programs. Each indicator is discussed below. Whether defined system requirements are being met is one indicator of system performance. In June 2004, we reported that performance reports showed that some, but not all, key system requirements were being measured, and that these measured requirements were being met. Table 4 shows examples of key system performance requirements. However, we also reported that not all key performance requirements were being adequately measured. For example, reports used to measure system availability measured the time that the system infrastructure was successfully connected to the network. While these reports can be used to identify problems that could affect the system availability, they do not fully measure SEVIS availability. Instead, they measure the availability of the communications software on the application servers. This means that the SEVIS application could still be unavailable even though the communications software is available. Similarly, program officials stated that they used a central processing unit activity report to measure resource usage. However, this report focuses on the shared infrastructure environment, which supports SEVIS and two other applications, and does not specifically measure SEVIS-related central processing performance. Program officials did not provide any reports that measured performance against other resource usage requirements, such as random access memory and network usage. 
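The distinction drawn above, between a report that measures network connectivity to the infrastructure and one that measures availability of the application itself, can be illustrated with a minimal monitoring sketch. This is not SEVIS code; the functions, hosts, and endpoints are hypothetical placeholders.

```python
# Illustrative sketch (not SEVIS code) of transport-level vs.
# application-level availability checks. Host names and URLs used with
# these functions would be deployment-specific placeholders.

import socket
import urllib.request

def network_reachable(host, port, timeout=5):
    """Transport-level check: succeeds as long as something accepts a
    TCP connection, even if the application behind it is down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def application_available(url, timeout=5):
    """Application-level check: requires the application to actually
    serve a successful HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

# A report built only on network_reachable() can show 100 percent
# "availability" while application_available() is failing -- the gap
# between measuring communications software and measuring SEVIS itself.
```

A server that accepts connections but never answers requests passes the first check and fails the second, which is exactly why connectivity reports alone cannot fully measure application availability.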
Program officials acknowledged that some key performance requirements were not formally measured and stated that they augmented these formal performance measurement reports with other, less formal measures, such as browsing the daily Help Desk logs to determine if there were serious performance problems requiring system changes or modifications, as well as using the system themselves on a continuous basis. According to these officials, a combination of formal performance reports and less formal performance monitoring efforts gave them a sufficient picture of how well SEVIS was performing. Further, program officials stated that they were exploring additional tools to monitor system performance. For example, they stated that they were in the process of implementing a new tool to capture the availability of the SEVIS application, and that they planned to begin using it by the end of April 2004. However, unless DHS formally monitored and documented all key system performance requirements, we concluded that the department could not adequately assure itself that potential system problems were identified and addressed early, before they had a chance to become larger problems that could affect the DHS mission objectives that SEVIS supports. Another indicator of how well a system is performing is the number and significance of reported problems or requests for system enhancements. For SEVIS, a system change request (SCR) is created when a change is required to the system. Each of the change requests is assigned a priority of critical, high, medium, or low, as defined in table 5. Each change request is also categorized by the type, such as changes to correct system errors, enhance or modify the system, or improve system performance. In June 2004, we reported that the number of critical or high priority change requests that were created between January 2003 and February 2004 was decreasing. 
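The trend measure reported above, counting new critical and high priority change requests per month and checking whether the counts decline, can be sketched as follows. The records and field names are invented for illustration and do not reflect the actual SEVIS change-request database.

```python
# Hypothetical sketch of the SCR trend measure: tally new change
# requests per month by priority and test for a declining trend.
# All records below are invented for illustration.

from collections import defaultdict

scrs = [
    {"opened": "2003-01", "priority": "critical", "type": "corrective"},
    {"opened": "2003-01", "priority": "high",     "type": "corrective"},
    {"opened": "2003-02", "priority": "high",     "type": "enhancement"},
    {"opened": "2003-02", "priority": "medium",   "type": "corrective"},
    {"opened": "2003-03", "priority": "low",      "type": "corrective"},
]

def monthly_counts(records, priorities=("critical", "high")):
    """Count new SCRs per month, restricted to the given priorities."""
    counts = defaultdict(int)
    for r in records:
        if r["priority"] in priorities:
            counts[r["opened"]] += 1
    return dict(sorted(counts.items()))

def is_decreasing(counts):
    """True if the monthly counts never rise month over month."""
    values = list(counts.values())
    return all(b <= a for a, b in zip(values, values[1:]))

trend = monthly_counts(scrs)
print(trend)                 # {'2003-01': 2, '2003-02': 1}
print(is_decreasing(trend))  # True
```

Filtering on the `type` field instead of `priority` would give the companion measure discussed in the text: the monthly count of corrective (error-fixing) change requests.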
Similarly, we reported that the trends in the number of new change requests to correct system errors had decreased for that same period. Over this period, the number of corrective fixes requested each month between January 2003 and February 2004 decreased, with the most dramatic decrease in the first 7 months. Figure 1 shows the decreasing trend in new SEVIS corrective change requests between January 2003 and February 2004. A third indicator of performance is user feedback. According to representatives of educational organizations, overall SEVIS performance at the time of our report had improved since the system began operating and its use was required, and the program’s outreach and responsiveness were good. In addition, these representatives told us that they were no longer experiencing earlier reported problems, which involved user access to the system, the system’s timing out before users could complete their tasks, and the merging of data from one school or exchange visitor program with data from another. However, seven new problem types were identified by at least 3 of the 10 organizations, and three of the seven problems were related to Help Desk performance. Table 6 shows the problems and the number of organizations that identified them. At the time of our report, DHS had taken a number of steps to identify and solve system problems, including problems identified by educational organizations. In particular, DHS steps to identify problems included
● holding biweekly internal performance meetings and weekly technical meetings,
● holding biweekly conference calls with representatives from educational organizations,
● establishing special e-mail accounts to report user problems, and
● having user groups test new releases.
Further, DHS cited actions intended to address six of the seven types of problems identified by the educational organizations. These included releases of new versions of SEVIS and increases in Help Desk training and staffing.
These officials also stated that they were evaluating potential solutions to the remaining problem. Table 7 shows the problem types, the number of organizations that identified them, and DHS’s actions taken to address each. Despite DHS actions, educational organizations told us that some problems persisted. For example: ● Although the program office increased Help Desk staffing in March 2003, representatives from seven organizations stated that slow Tier 2 and 3 Help Desk responses were still a problem. In response, program officials stated that the majority of calls handled by Tiers 2 and 3 involve data fixes that are a direct result of end-user error, and that fixing them is sometimes delayed until end-users submit documentation reflecting the nature of the data fix needed and the basis for the change. ● Although the program office began in June 2002 providing training to Help Desk staff each time a new SEVIS release was implemented, representatives from 5 of the 10 organizations stated that the quality of the Help Desk’s response to technical and policy questions remained a problem. According to program officials, Help Desk response is complicated by variations in user platforms and end-user knowledge of computers. The officials added that the program office is working to educate SEVIS users on the distinction between platform problems and problems resulting from SEVIS. Further, they said that Help Desk responses may be complicated by the caller’s failure to provide complete information regarding the problem. Program officials also stated that supervisors frequently review Help Desk tickets to ensure the accuracy of responses, and these reviews had not surfaced any continuing problems in the quality of the responses. Various legislation requires that a fee be collected from each foreign student and exchange visitor to cover the costs of administering and maintaining SEVIS, as well as SEVP operations.
In 2004, we reported that 7 years had passed since collection of the fee was required, and thus millions of dollars in revenue had been and would continue to be lost until the fee was actually collected. We also reported that representatives of the educational organizations were concerned with the fee payment options being considered because the options were either not available to all students in developing countries, or they would result in significant delays to an already lengthy visa application and review process and increase the risk that paper receipts would be lost or stolen. As we then reported, DHS’s submission of its fee collection rule went to the Office of Management and Budget in February 2004, and it received final clearance in May 2004. The final rule, which was effective on September 1, 2004, (1) set the fee at $100 for nonimmigrant students and exchange visitors and no more than $35 for those J-1 visa-holders who are au pairs, camp counselors, or participants in a summer work/travel program, and (2) identified options for students and exchange visitors to pay the fee, including
● by mail, using a check or money order drawn on a U.S. bank and payable in U.S. dollars, or
● electronically through the Internet, using a credit card.
According to DHS officials, another option for paying the SEVIS fee permits exchange visitor programs to make bulk payments to DHS on behalf of J visa-holders. To help strengthen SEVIS performance and address educational organizations’ concerns, our report recommended that DHS
● assess the extent to which defined SEVIS performance requirements are still relevant and are being formally managed;
● provide for the measurement of key performance requirements that are not being formally measured;
● assess educational organizations’ Help Desk concerns and take appropriate action to address them; and
● provide for the expeditious implementation of the results of the SEVIS fee rulemaking process.
According to program officials, a number of steps have been taken relative to our recommendations, and other steps are under way. For example, program officials stated that they have established a working group to assess the relevance of the requirements in the SEVIS requirements document. The working group is expected to provide its recommendations for changing this document by the end of March 2005. The changed requirements will then form the basis for measuring system performance. Program officials also stated that they are in the process of selecting tools for monitoring system performance and have established a working group to define ways to measure SEVIS’s satisfaction of its two main objectives, relating to oversight and enforcement of relevant laws and regulations and to improvement in port of entry processing of students and visitors. In this regard, they said that they have begun to monitor the number of false positives between SEVIS and the Arrival Departure Information System to target improvements for future system releases. Program officials also reported that they are taking steps to address Help Desk concerns. For example, they said that they continue to hold bi-weekly meetings with educational organizations and directly monitor select Help Desk calls. They also said that Tier 1 Help Desk staffing recently increased by five staff, and the knowledge-based tool used by the Help Desk representatives to respond to caller inquiries had been updated, including ensuring that the tool’s response scripts are consistent with SEVP policy. Additionally, these officials stated that they are reaching out to the Department of State to more quickly resolve certain system data errors (commonly referred to as data fixes), and said that a process has been established to ensure that high-priority change requests are examined to ensure correct priority designation and timely resolution. 
As of January 1, 2005, SEVP also established new performance level agreements with its Help Desk contractor, and it has been receiving weekly Help Desk reports to monitor performance against these agreements. DHS also began collecting the SEVIS fee in September 2004. Additionally, it introduced another payment option, effective November 1, 2004, whereby students can pay the fee using Western Union. This method allows foreign students to pay in local currency, rather than U.S. dollars. Program officials also stated that DHS has developed a direct interface between the payment systems and SEVIS and the State Department’s Consolidated Consular Database (CCD). According to these officials, this allows the consular officer to verify without delay that the visa applicant has, in fact, paid the SEVIS fee before completing the visa issuance process. According to representatives of educational organizations, overall SEVIS performance continues to improve. We contacted 6 of the 10 organizations that were part of our 2004 report on SEVIS performance, and representatives for all six organizations told us that SEVIS performance has generally continued to improve. In addition, five of the organizations stated that there were no new system performance problems. All of the organizations stated that they did not have any concerns with the SEVIS fee implementation. However, most representatives stated that some previously reported problems still exist. For example, representatives from five of the six organizations stated that slow Tier 2 and 3 Help Desk responses in correcting errors in student and exchange visitor records were still a problem. Three representatives stated that these corrections can take months, and in some cases even years, to fix. Two of the three stated that this has a major impact on the individuals involved. 
One organization reported that some exchange visitors’ records have been erroneously terminated, and as a result, the visitors’ families are unable to join them in the United States until a data fix occurs. According to the representative, this creates a very difficult situation for the individuals and makes it difficult to retain them in their academic programs. A representative for another organization reported that two participants’ records erroneously indicate that they have violated their status as exchange visitors. Were these individuals to leave the country to visit their families before a data fix is made, they would be denied re-entry. In addition, representatives from three organizations stated that they were still experiencing problems with downloading and manipulating data from SEVIS. For example, one representative reported an inability to pull reports on the exact number of exchange visitors in its program and their status. This person expressed concern because DHS holds schools and programs accountable for tracking exchange visitors, but then does not give them the tools necessary to do so. Further, representatives from two organizations stated that they were still experiencing problems with incorrect Help Desk responses. For example, one representative reported that he was erroneously told by a Help Desk employee that there was no need to correct an individual’s record of training, yet another Help Desk employee correctly stated that a fix was needed and gave detailed instructions on how to make the correction. Last, representatives from all six organizations stated that there have been declines in international students and exchange visitors coming to the United States. However, representatives from four of the six stated that SEVIS was not a factor, while representatives from the remaining two stated that SEVIS was just one of many factors. 
Other factors cited as contributing to this decline, which are discussed in the following section, were a lengthy visa application process and increased competition from other countries for students and exchange visitors. A recent Council of Graduate Schools report indicates that foreign graduate student applications, admissions, and enrollments are declining. According to the report, international graduate applications to U.S. colleges and universities declined 28 percent from 2003 to 2004, resulting in an 18 percent fall in admissions and a 6 percent drop in enrollments for the same period. In addition, while 2005 data on admissions and enrollments were not yet available, the report cited a 5 percent decline in applications between 2004 and 2005. According to the report, the declines in 2004 and in 2005 were most prominent for students from China and India. It also noted that between 2004 and 2005 applications were unchanged from Korea and up 6 percent from the Middle East. The report attributes this decline to two factors: increasing capacity abroad and visa restrictions at home. According to the report, countries in Europe and Asia are expanding their capacity at the graduate level through government policy changes and recruitment of international students. At the same time, the report says that the U.S. government has tightened the visa process since September 11, 2001, inadvertently discouraging international graduate students through new security procedures and visa delays. The Council of Graduate Schools also recognized recent federal actions to improve the student visa process. These actions are directly related to our work on the State Department's Visas Mantis program, an interagency security check aimed at identifying those visa applicants who may pose a threat to our national security by illegally transferring sensitive technology. 
The program often affects foreign science students and visiting scholars whose background or proposed activity in the United States could involve exposure to technologies that, if used against the United States, could potentially be harmful. In February 2004, we reported and testified that there were delays in the Visas Mantis program and interoperability problems between the State Department and the FBI that contributed to these delays and allowed Mantis cases to get lost. We determined that it took an average of 67 days for Mantis checks to be processed and for State to notify consular posts that the visa could be issued, and that many Visas Mantis cases had been pending 60 days or more. We also determined that consular staff at posts we visited were unsure whether they were contributing to waits because they lacked clear program guidance. Accordingly, we recommended that the State Department, in coordination with DHS and the FBI, develop and implement a plan to improve the Visas Mantis process. In February 2005, we reported that Visas Mantis processing times had declined significantly. For example, in November 2004, the average time was about 15 days, far lower than the average of 67 days that we reported previously. We also found that the number of Mantis cases pending more than 60 days has dropped significantly. Our report recognized a number of actions that contributed to these improvements and addressed other issues that science students and scholars face in traveling to the United States. These actions included adding staff to process Mantis cases; defining a procedure to expedite certain cases; providing additional guidance and feedback to consular posts; developing an electronic tracking system for Mantis cases; clarifying the roles and responsibilities of agencies involved in the Mantis process; reiterating State’s policy of giving students and scholars priority scheduling for interview appointments; and extending the validity of Mantis clearances. 
Although we also identified opportunities for further refinements to the Visas Mantis program, we believe that the actions outlined above should allow foreign science students and scholars to obtain visas more quickly and to travel more freely. We did not determine the effect of these actions on the overall volume of international students traveling to the United States. However, representatives from the academic and international scientific community have indicated that they also believe the actions will have a positive impact. For example, the Association of American Universities identified the extension of Mantis clearances as "a common-sense reform that removes an unnecessary burden that caused enormous inconvenience for thousands of international students and discouraged many more from coming here to study." In closing, indications are that SEVIS performance has improved and continues to improve, as has visa processing for foreign science students and scholars. Moreover, recent SEVIS-related initiatives demonstrate program officials' commitment to future improvements. This commitment is important because educational organizations continue to report some persistent system problems, primarily with respect to Help Desk responsiveness in making certain "data fixes." These problems can create hardships for foreign students and exchange visitors and may unintentionally discourage them from applying to and enrolling in U.S. learning institutions. Therefore, it is important for DHS to effectively manage SEVIS performance against mission objectives and outcomes, as well as against system requirements. To this end, we have made several recommendations to DHS concerning SEVIS performance management. Messrs. Chairmen, this concludes our statement. We would be happy to answer any questions that you or members of the subcommittees may have at this time. 
If you should have any questions about this testimony, please contact Randolph C. Hite at (202) 512-3439 or hiter@gao.gov, or Jess T. Ford at (202) 512-4128 or fordj@gao.gov. Other major contributors to this testimony included John Brummet, Barbara Collier, Deborah Davis, Jamelyn Payan, and Elizabeth Singer. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Student and Exchange Visitor Information System (SEVIS) is an Internet-based system run by the Department of Homeland Security (DHS) to collect and record information on foreign students, exchange visitors, and their dependents--before they enter the United States, when they enter, and during their stay. GAO has reported (GAO-04-690) that although the system had a number of performance problems during the first year that its use was required, several SEVIS performance indicators were positive at that time (June 2004). Nonetheless, some problems were still being reported by educational organizations. In addition, concerns have been raised that the number of international students and exchange visitors coming to the United States has been negatively affected by the U.S. visa process. Accordingly, the Congress asked GAO to testify on its work on SEVIS and related issues. This testimony is based on its June 2004 report, augmented by more recent GAO work, reports that we issued in February 2004 and 2005 on student and visiting scholar visa processing, and related recent research by others. Indications are that SEVIS performance has improved and continues to improve. In June 2004, GAO reported improvement based on several indicators, including reports showing that certain key system performance requirements were being met, trends showing a decline in new requests for system corrections, and the views of officials representing 10 educational organizations. DHS attributed this performance improvement to a number of actions, such as installation of a series of new software releases and increased Help Desk staffing and training. However, GAO also reported that several key system performance requirements were not being formally measured, so that DHS might not be able to identify serious system problems in time to address them before they could affect the successful accomplishment of SEVIS objectives. 
Further, some educational organizations were still experiencing problems, particularly with regard to Help Desk support. GAO also reported that educational organizations were concerned about proposed options for collecting SEVIS fees. Accordingly, it made recommendations aimed at improving system performance measurement and resolving educational organizations' Help Desk and fee concerns. Since June 2004, DHS reports that it has taken steps to address GAO recommendations, and in particular it has taken a number of actions to strengthen Help Desk support. Moreover, educational organizations generally agree that SEVIS performance has continued to improve, and that their past fee collection concerns have been alleviated. However, these educational organizations still cite residual Help Desk problems, which they believe create hardships for students and exchange visitors. Most of these organizations, however, do not believe that SEVIS is the reason for the declining number of international students and exchange visitors coming to the United States. These declining numbers were cited in a recent report by the Council of Graduate Schools, which describes declines in foreign graduate student applications, admissions, and enrollments between 2003 and 2004, and further declines in these applications between 2004 and 2005. The report attributes the decline to increased global competition and changed visa policies. In this regard, GAO recently reported on the State Department's efforts to address its prior recommendations for improving the Visas Mantis program (under which interagency security checks are performed to identify applicants who may pose a threat to national security by illegally transferring sensitive technology). According to this report, a combination of federal agency steps resulted in a significant decline in Visas Mantis processing times and in the number of cases pending more than 60 days. 
The Council of Graduate Schools' report also recognizes the recent Visas Mantis program changes as positive steps.
The Community Services Block Grant (CSBG) program provides funds to state and local agencies to support efforts that reduce poverty, revitalize low-income communities, and promote self-sufficiency among low-income families and individuals. CSBG dates back to the War on Poverty of the 1960s and 1970s, which established the Community Action program, under which the nationwide network of local community action agencies was developed. A key feature of Community Action was the direct involvement of low-income people in the design and administration of antipoverty activities through mandatory representation on local agency governing boards. The federal government had direct oversight of local agencies until 1981, when Congress created CSBG and designated states as the primary recipients. States subgrant funds to over 1,000 eligible local agencies, which are primarily community action agencies. To ensure accountability, both federal and state program offices have oversight responsibilities, including on-site monitoring of grantees and subgrantees, following up on monitoring findings, and providing technical assistance. The Office of Community Services (OCS) administers CSBG and is required by law to conduct on-site compliance evaluations of several states in each fiscal year, report to states on the results of these evaluations, and make recommendations for improvements. Upon receiving an evaluation report, states must submit a plan of action that addresses its recommendations. In addition, OCS is required to report annually to Congress on the performance of the CSBG program, including the results of state compliance evaluations. 
For states to receive CSBG funding, they must submit to OCS, at least every 2 years, an application and plan stating that funds will be used to, among other things, support activities that help families and individuals achieve self-sufficiency, find and retain meaningful employment, attain an adequate education, make better use of available income, obtain adequate housing, and achieve greater participation in community affairs. The CSBG Act requires OCS to reserve 1.5 percent of annual appropriations (about $10 million in fiscal year 2005) for training and technical assistance for state and local agencies; planning, evaluation, and performance measurement; assisting states with carrying out corrective action activities; and monitoring, reporting, and data collection activities. The fiscal year 2005 Consolidated Appropriations Act conference report directed OCS to develop a 3-year strategic plan to guide its training and technical assistance efforts. OCS has provided assistance to local agencies with problems primarily through two grant programs: Special State Technical Assistance (SSTA) Grants and the Peer-to-Peer Technical Assistance and Crisis Aversion Intervention (Peer-to-Peer) Grants. OCS generally awarded Special State Technical Assistance Grants to states or state associations of community action agencies to provide support to local agencies that have problems. Since 2001, OCS has awarded the Peer-to-Peer Grant solely to Mid-Iowa Community Action (MICA), a community action agency, to offer problem assessment, interim management, and other technical assistance services to local agencies with problems. In addition to the federal requirements in law, OCS, like other federal agencies, is required to adhere to internal control standards established by the Office of Management and Budget and GAO in order to help ensure efficient and effective operations, reliable financial reporting, and compliance with federal laws. 
Internal controls help government program managers achieve desired results through effective stewardship of public resources. Such interrelated controls comprise the plans, methods, and procedures used to meet missions, goals, and objectives and, in doing so, support performance-based management and should provide reasonable assurance that an organization achieves its objectives of (1) effective and efficient operations, (2) reliable reporting, and (3) compliance with applicable laws and regulations. The five components of internal control are as follows:

Control environment: creating a culture of accountability within an entire organization (program offices, financial services, and regional offices) by establishing a positive and supportive attitude toward the achievement of established program outcomes.

Risk assessment: identifying and analyzing relevant risks, both internal and external, that might prevent the program from achieving its objectives, and developing processes for measuring the actual or potential effects of relevant factors and managing the associated risks. During such a risk assessment, managers should consider their reliance on other parties to perform critical program operations.

Control activities: establishing and implementing oversight processes to address risk areas and help ensure that management's directives, especially those concerning how to mitigate and manage risks, are carried out and program objectives are met.

Information and communication: using and sharing relevant, reliable, and timely operational and financial information to determine whether the agency is meeting its performance and accountability goals.

Monitoring: tracking improvement initiatives over time and identifying additional actions needed to further improve program efficiency and effectiveness.

The CSBG Act requires each state to designate a lead agency to administer CSBG funds and to provide oversight of local agencies that receive funds. 
States are required to award at least 90 percent of their federal block grant allotments to eligible local agencies, but are allowed to determine how CSBG funds are distributed among local agencies. States may use up to $55,000 or 5 percent of their CSBG allotment, whichever is higher, for administrative costs. States may use remaining funds for the provision of training and technical assistance, coordination and communication activities, payments to ensure they target funds to areas with the greatest need, support for innovative programs and activities conducted by local organizations, or other activities consistent with the purposes of the CSBG Act. In addition, state and local agencies that expend $500,000 or more ($300,000 or more prior to 2004) in total federal awards are required under the Single Audit Act to undergo an audit annually and submit a report to the Federal Audit Clearinghouse. Furthermore, individual federal funding sources may also be reviewed annually under the Single Audit, depending on the size of these expenditures. The CSBG Act requires states to monitor local agencies to determine whether they meet performance goals, administrative standards, financial management requirements, and other requirements established by the state. States are required to perform this monitoring through a full on-site review of each local agency at least once during each 3-year period and to conduct follow-up reviews, including prompt return visits, to local agencies that fail to meet the goals, standards, and requirements established by the state. OMB has issued Single Audit compliance review guidance for CSBG explicitly stating that when auditors review state programs, they should determine whether states are visiting each local agency once every 3 years, in order to assess whether states comply with the law. States must also offer training and technical assistance to failing local agencies. 
Local agencies are required to submit a community action plan to states that contains a community needs assessment, a description of the service delivery system for services provided by or coordinated with CSBG funds, a description of how they will partner with other local agencies to address gaps in services they provide, a description of how funds will be coordinated with other public and private resources, and a description of how funds will be used to support innovative community and neighborhood-based initiatives. The CSBG Act requires both state and local agencies to participate in a performance measurement system. Results Oriented Management and Accountability (ROMA) is the OCS-sponsored performance management system that states and local agencies use to measure their performance in achieving their CSBG goals. State agencies report annually on ROMA using the CSBG Information System survey, which the National Association for State Community Services Programs (NASCSP) administers. In fiscal year 2004, the network of local CSBG agencies received almost $9.7 billion from all sources. About $7 billion of these funds came from federal sources, including about $600 million from CSBG. Other federal programs funding the CSBG network included Head Start, the Low Income Home Energy Assistance Program (LIHEAP), the Community Development Block Grant (CDBG) program, the Child Care and Development Fund, Temporary Assistance for Needy Families, and the Social Services Block Grant (see fig. 1). HHS's Administration for Children and Families contributed 90 percent of the $4.4 billion in funds provided to local agencies through HHS. HHS received about $637 million in CSBG funding for fiscal year 2005 and about $630 million for fiscal year 2006. In its efforts to oversee states, OCS did not fully comply with federal laws related to monitoring states or with internal control standards, and it lacked a process to assess state CSBG management risks. OCS visited nine states in fiscal years 2003 through 2005. 
However, as mentioned in our letter to the Assistant Secretary for Children and Families, OCS lacked the policies, procedures, and other internal controls to ensure effective monitoring efforts. As a result, states and Congress are not receiving required information on monitoring findings, and states may not have made improvements to how they administer CSBG funds. We recommended that the OCS director establish formal written policies and procedures to improve OCS’s monitoring and related reporting, and OCS officials have made plans to address each of the recommendations included in the letter. We also found that OCS did not systematically use or collect available information that would allow it to assess states’ CSBG management risks. Officials told us that they considered a variety of risk-related factors when selecting sites for monitoring visits, including reports from state and local officials about financial management problems and staff turnover, but they did not have a systematic approach to assess risk or target monitoring toward states with the greatest needs. OCS lacked policies, procedures, and internal controls to help ensure effective on-site monitoring of state CSBG programs but has made plans to address these issues. OCS officials told us they visited nine states since 2003: Delaware, Louisiana, Maryland, and North Carolina in 2003; Alabama and Montana in 2004; and Kentucky, New Jersey, and Washington in 2005. During these visits, OCS officials told us they used a monitoring tool to assess the administrative and financial operations of state programs. However, OCS sent monitoring teams that lacked required financial expertise to conduct evaluations of states and did not issue final reports to states as required by law. Consequently, the visited states may have been unaware of potential OCS findings and, therefore, may not have developed corrective action plans if needed. 
Furthermore, OCS officials also told us that they lost documentation for the state visits conducted in fiscal years 2003 and 2004, leaving them unable to report to states they visited or perform appropriate follow-up procedures. OCS officials did not include information on their monitoring visits in their most recent CSBG report to Congress, released in December 2005, as statutorily required. In addition, OCS has not issued reports to Congress annually, as required by law. We reported on OCS’s monitoring challenges to the Assistant Secretary for Children and Families on February 7, 2006, and made recommendations for improving these conditions (for a copy of this letter, see app. II). Specifically, we recommended that the OCS director establish formal written policies and procedures to (1) ensure that teams conducting monitoring visits include staff with requisite skills, (2) ensure the timely completion of monitoring reports to states, (3) maintain and retain documentation of monitoring visits, and (4) ensure the timely issuance of annual reports to Congress. In response to this letter, OCS officials said that they plan to address each of our recommendations by hiring additional monitoring staff with expertise in financial oversight, training all staff on requirements that states must meet prior to visits, establishing a triennial monitoring schedule for visiting states, developing new guidelines for reporting to states and maintaining monitoring documents, and issuing timely reports to Congress, among other efforts. See appendix III for more details on HHS’s response to our letter. OCS did not systematically use or collect key information that would allow it to assess states’ CSBG management risks and target its limited monitoring resources toward states with the greatest risks. OCS officials told us that they used a risk-based approach to select states to visit, but we found the selection process to be ad hoc and often unexplained. 
OCS officials explained that they used information received from state and local officials on state CSBG management concerns to decide in which states to conduct compliance evaluation visits. For example, upon learning that local agencies in Louisiana were concerned that they had not received all the funds allotted to them, OCS decided to conduct an evaluation of that state. OCS officials also mentioned that when selecting states to visit, they considered such risk factors as staff turnover and having limited information about the state in general. However, OCS officials could not provide an explanation for why they visited six of the nine states that had undergone evaluations since 2003 and had no formal, written criteria for determining which states to visit. Each state provides annual program performance information to OCS, but OCS does not systematically use this information to assess states' risks of not meeting program objectives. Specifically, states annually provide OCS with information about the number of people receiving services and the types of services local agencies provided and categorize this information according to designated program goals, which can provide OCS with data on whether state and local agencies are performing as expected. OCS also did not systematically use information on the amount of CSBG funds states have expended. OCS officials said they reviewed state Single Audit reports when CSBG was included, but we found that state CSBG programs generally fell below the expenditure thresholds that would trigger a required annual audit of the program. OCS does not systematically collect other key information that would allow federal officials to assess risk related to states' oversight efforts and therefore cannot determine whether states are fulfilling their requirement to visit local agencies. 
For example, although OCS required states to certify in their CSBG applications that they will conduct statutorily required on-site visits of local agencies, it did not require states to submit documentation, such as reports on their monitoring findings, to verify that they had conducted these visits. OCS officials told us that they relied on state Single Audit reports to learn which states did not comply with monitoring requirements. However, we found these audits rarely, if ever, review state CSBG programs. OCS officials told us that they were not aware of how rarely CSBG is reviewed through the Single Audit. OCS also does not systematically collect information on the local agencies that experience management problems or on the extent to which identified problems are being resolved. The federal CSBG director told us that, as a result, OCS may not be fully aware of the extent to which states had local agencies facing challenges with managing CSBG. OCS is aware of some local agencies with problems but has not established regular methods for collecting this information. OCS officials told us that it is the states' responsibility to identify and address problems in local agencies. In our review of Single Audit data, we found that financial management problems were common, with about 30 percent of local agencies reporting findings in 2002 and 2003. However, less than 10 percent of all local agencies reported more severe findings, known as material weaknesses, that could result in undetected financial reporting errors and fraud in either year (see app. IV for Single Audit data). All five states we visited conducted on-site monitoring of local agencies with varying frequency and performed additional oversight efforts, such as reviewing financial and programmatic reports from local agencies. 
The state programs that we visited had different views on what they must do to meet federal requirements to monitor local agencies at least once during each 3-year period, and OCS had not issued guidance clarifying the time frames states should use when conducting on-site visits. Specifically, officials in two states conducted the on-site visits at least once between 2003 and 2005, but officials in the other three states visited their local agencies less frequently. While states varied in the frequency of their monitoring visits, all five state offices visited local agencies with identified problems more often. Capacity to conduct on-site monitoring varied among the five state offices, particularly in the areas of administrative and financial monitoring resources. Officials in all five states that we visited reviewed local agency reports as an additional oversight effort and provided required training and technical assistance to local agencies. In addition, some state offices coordinated with other federal programs that fund local activities to gain further insight into local agencies' management practices. The frequency of on-site visits to local agencies varied among the five states we visited, ranging from 1 to 5 years between site visits. State CSBG offices in Illinois and Texas conducted visits to each local agency between 2003 and 2005. Specifically, officials in these two states visited at least half of their agencies each year. In contrast, Pennsylvania, Missouri, and Washington officials monitored their local agencies less frequently, with Missouri allowing up to 5 years to pass between monitoring visits to some local agencies. Washington and Pennsylvania officials had visited nearly all of their local agencies from 2003 through 2005, leaving less than 10 percent unmonitored during this period. Conversely, the Missouri state CSBG office visited 4 of 19 local agencies from 2003 to 2005, leaving nearly 80 percent of agencies unmonitored since 2001 or 2002. 
While states varied in their frequency of monitoring visits, officials in all five states told us they visited local agencies with identified problems more often. Illinois, Texas, and Washington assessed local agencies’ management risks to prioritize which local agencies they visited more frequently during a monitoring cycle. Table 1 below shows how many local agencies these states monitored with on-site CSBG reviews from 2003 through 2005. Although the CSBG Act states that local agencies should be visited at least once during each 3-year period, the state officials we visited have different views on what is necessary to meet this requirement, and OCS has not issued guidance to states to clarify how the law should be interpreted. During the fiscal year 2004 Single Audit, Pennsylvania state auditors, using OMB guidance stating that reviews of local agencies must be conducted once every 3 years, found the state CSBG program to be out of compliance with federal requirements. However, the Missouri CSBG program manager stated that even though 15 local agencies have not been visited between 2003 and 2005, according to the state’s interpretation of the CSBG law, the CSBG office will meet monitoring requirements because all local agencies will be visited within the two 3-year periods of 2001 to 2003 and 2004 to 2006. For example, the Missouri officials visited five agencies in 2001, during the first 3-year period, and plan to visit these agencies again in 2006, during the second 3-year period. Administrative and financial monitoring resources varied in the five states we visited. Specifically, administrative funding ranged from less than 1 percent ($135,380) of CSBG funds in Missouri to 4 percent ($1.2 million) in Texas. The Missouri program manager told us that the state CSBG office used less than 1 percent for administration because state hiring restrictions prevented the CSBG program from hiring full-time CSBG staff. 
In addition, state officials in Missouri, Pennsylvania, and Washington told us that staff shortages prevented them from visiting local agencies more frequently. The number of staff available, funding for administration, and other related information are shown in table 2. State programs generally developed and made use of written monitoring guides, but they varied in their ability to assess local agencies’ financial operations. The five state programs we visited all had written guides for monitoring visits that covered such areas as financial controls, governance, personnel, performance outcomes, and previous monitoring findings. However, state auditors in Washington told us that the CSBG office could not provide evidence that the guides were consistently used during monitoring visits because available documentation showed that the guides were often incomplete after a visit. Illinois, Texas, and Washington offices regularly used accountants to support their reviews of local agencies’ financial operations as part of the on-site monitoring visits. Conversely, Missouri and Pennsylvania officials told us they did not regularly involve accountants in their monitoring efforts but had taken steps to improve the guides they used to review local agencies’ finances. Specifically, the Missouri CSBG office, in consultation with MICA, made changes to its monitoring guide and provided financial training to its staff. The state CSBG office in Pennsylvania, with input from state budget staff, revised the financial aspects of its monitoring guide. The states that we visited provided oversight in addition to on-site monitoring through such activities as reviewing reports, coordinating with other federal and state programs, and providing formal training and technical assistance. All five state programs collected regular financial and performance reports and reviewed local expenditure reports. 
In addition, officials in the five states told us that they reviewed reports of the annual Single Audits for local agencies when they included findings related to the CSBG program. For example, a state audit manager in Washington reviewed the audits and regularly notified the CSBG program office when local agency findings were identified, and state CSBG program staff followed up with local agency officials and worked to ensure that the findings were addressed. States also required all local agencies to submit performance data. State officials told us that local agencies established their own performance goals, and the state offices reviewed these goals and sometimes modified them in consultation with local agencies. Additionally, all state CSBG offices reviewed local community action plans. Illinois, Texas, and Washington officials used information from these additional oversight activities to conduct risk assessments and select local agencies for more frequent on-site monitoring visits. In conducting these risk assessments, the state programs considered such factors as the amount of funds received from the state, the time since the last monitoring visit, and any identified concerns about an agency’s competency, integrity, or proficiency. State officials told us that they directed local agencies to use preventive training and technical assistance to address any issues raised by risk assessments. Three of the five state CSBG offices that we visited also coordinated oversight activities with other federal and state programs that fund local agencies. For example, the Missouri, Texas, and Washington offices performed joint monitoring visits with state LIHEAP officials, and Missouri exchanged the results of local agency monitoring visits with the regional Head Start office. 
Coordination with other federal and state programs that provide funds to local agencies, such as housing-related programs and Head Start, generally consisted of occasional meetings and the sharing of some information. Also, OCS and the Head Start Bureau entered into a memorandum of understanding to foster collaboration and improve oversight of local agencies. While most regional and state officials told us they were aware of the memorandum of understanding, some told us that its intent was unclear and that they needed additional guidance to implement it more usefully. State associations of community action agencies played an important role in providing formal training and technical assistance to local agencies. CSBG officials in Illinois, Missouri, Pennsylvania, and Texas relied on state community action associations to provide technical assistance. For example, the Illinois Community Action Association received state training and technical assistance funds to provide on-line resources, peer coaching, and routine conferences. Missouri’s state association for community action agencies, the Missouri Association for Community Action, also received CSBG training and technical assistance funds, which it used to help local agencies improve communications and management information systems and provide additional technical assistance as needed. In addition, the Missouri association provided networking opportunities for local agencies and had a full-time training expert on staff, supported by the state CSBG contract, to provide one-on-one support to local agencies. In addition to training provided by the association, the Texas CSBG staff sponsored conferences and workshops that allowed the staff members to provide training directly to local agencies. In Washington, the association and state staff sponsored discussion groups for the local agencies. 
Additionally, during on-site monitoring visits, state CSBG officials provided immediate informal technical assistance and followed up with local agencies on monitoring findings when necessary. While OCS targeted some training and technical assistance funds to local grantees with financial and programmatic management problems, information on the results of this assistance is limited. In fiscal years 2002 through 2005, OCS designated between $666,000 and $1 million of its annual $10 million training and technical assistance funds to local agencies with problems, but OCS did not have information to determine whether its training and technical assistance programs and their funding amounts were appropriate for addressing the areas of greatest need. Specifically, the federal CSBG director explained that OCS currently allocates training and technical assistance funds based on input from some state and local agencies, but this process was not guided by a systematic assessment of state and local needs. Information on the results of OCS’s current grant programs that target local agencies with problems was limited. However, information provided by progress reports for these grants showed that some of the agencies assisted had improved. In fiscal years 2002 through 2005, OCS designated $1 million or less of its annual $10 million training and technical assistance funds to assist local agencies with problems, but it had no way to determine whether this money was allocated in a way that addressed the greatest needs of state and local agencies. OCS divided its annual $10 million training and technical assistance funds among program support, contracts, and grants. 
The Deputy Director of OCS told us that program support funds paid salaries and expenses for OCS officials who manage CSBG grants, and contract funds paid for costs associated with logistics such as outreach and meeting with grantees, costs related to a management information system, and costs related to grant competitions. Training and technical assistance grants may be used for a variety of purposes, and OCS allocated these funds to support different types of activities each year. For example, OCS frequently funded activities such as supporting the implementation of ROMA, encouraging agencies to share innovative ideas, and providing program and management training opportunities for community action professionals. OCS designated between $666,000 and $1 million of annual training and technical assistance grants to assist local agencies with problems through two grant programs: Special State Technical Assistance Grants and the Peer-to-Peer Technical Assistance and Crisis Aversion Intervention Grants. These grants were commonly used to address management, financial, and board governance problems at local agencies. Table 3 shows the allocation of CSBG funding for grants, contracts, and program support. Despite a congressional recommendation, OCS officials told us that OCS has no process in place to strategically allocate its approximately $10 million in training and technical assistance funds among program areas. OCS drafted a strategic plan for allocating its training and technical assistance funds—an action directed by congressional conferees in the fiscal year 2005 Consolidated Appropriations Act conference report—but did not implement the plan. The federal CSBG director told us that OCS did not implement the strategic plan because the President’s recent budget proposals did not include funding for CSBG, although Congress has continued to provide funding for the program. 
The federal CSBG director also told us that the draft plan focused resources on such areas as financial integrity and management, leadership enhancement, and data collection. The federal CSBG director also told us that OCS currently allocates training and technical assistance funds based on input from some state and local agencies, but this process was not guided by a systematic assessment of state and local needs and did not involve guidance or specifications on the actual amounts that should be awarded for activities. Specifically, OCS sought input each year from the Monitoring and Assessment Taskforce—a group made up of some state and local CSBG officials and national CSBG associations such as the National Association of State Community Service Programs (NASCSP)—to generate a list of priority activities. OCS then presented this list at national community action conferences, such as those sponsored by NASCSP or the Community Action Partnership, for additional comments. However, OCS does not track which local agencies experienced problems and what those problems were. As a result, OCS could not provide us with information on the extent to which these current efforts are addressing those needs. Information on the results of OCS grant programs that targeted local agencies with problems was limited. However, the available information showed that some local agencies have improved financial and programmatic management as a result of the assistance they received. Our review of all available grant applications and subsequent progress reports for SSTA and Peer-to-Peer grants identified 68 local agencies that these grants targeted for assistance between 2002 and 2005. Of these 68 agencies, 22 had no results available because the assistance was ongoing and therefore final progress reports were not yet due. We identified outcomes for 25 of the remaining 46 agencies, as shown in figure 2. 
Of these 25 agencies, 18 reported improvement, and the remaining 7 agencies had unresolved issues, had closed, or had undeterminable results. Results were unknown for the other 21 agencies because their grant progress reports did not include information on outcomes. OCS officials told us that they hold grantees accountable for conducting activities under the proposed scope of training and technical assistance grants, not whether these activities result in successful outcomes for the local agencies they assist. OCS’s guidance to training and technical assistance grantees recommends that the grantees report whether activities are completed but does not include a requirement to report on outcomes. Further, HHS’s guidance on discretionary grant reporting, which covers CSBG training and technical assistance grants, does not specify what information program offices should collect on performance and outcomes. We also spoke with officials in HHS’s Office of Inspector General who mentioned that on the basis of prior reviews the office had some concerns about the administration of CSBG discretionary grants. Specifically, these officials had concerns about the completeness and accuracy of progress reports and whether grantees were meeting their goals. Officials involved in efforts to use grants to assist agencies gave mixed reviews on the effectiveness of activities funded by these grants. State and local officials in Texas and Missouri spoke highly of their interaction with MICA to assist agencies with problems. State officials in Texas said they had used an SSTA grant to assist two local agencies and had hired MICA as the contractor to provide the assistance. Texas officials were pleased with the assistance that MICA provided and said that the state did not have the resources to provide the kind of long-term, on-site assistance that MICA offered. 
Missouri officials told us that all five local agencies that the state had contracted with MICA to work with had benefited from MICA’s expertise, particularly with regard to financial matters. Like Texas, the Missouri office also used SSTA grants to provide assistance to four of these agencies. In contrast, we also spoke to national, regional, and state community action association officials who said they had worked with local agencies that received assistance from MICA and had concerns about MICA’s work. Specifically, they told us that MICA was not always effective in resolving local agencies’ problems, did not use money efficiently, and had an apparent conflict of interest stemming from its practice of conducting agency assessments and offering services to correct problems those assessments identify. For example, an Ohio official who managed a local agency’s contract told us that even with 6 months of paid assistance from MICA, the local agency had closed. In response to these criticisms, a MICA official said that some problems at local agencies were too severe for them to address and that MICA tries to be transparent about its costs by issuing a detailed proposal before starting work with an agency. Additionally, the MICA official said that MICA and OCS officials had discussed the conflict of interest issue, and OCS had encouraged MICA to continue efforts to assess agencies and address their problems. Under the CSBG program, federal, state, and local agencies work together to help low-income people achieve self-sufficiency. The federal government’s role is to oversee states’ efforts to ensure that local agencies properly and effectively use CSBG funds. OCS currently lacks the procedures, information, and guidance to grantees that it needs to effectively carry out its role. 
Specifically, OCS does not fully use the data it collects and does not collect other key information on state oversight efforts and the outcomes of training and technical assistance grants that could enhance its oversight capabilities. Additionally, OCS has not issued guidance for how often states should visit local agencies. Thus, OCS cannot determine where program risks exist or effectively target its limited resources to where they would be most useful. Consequently, OCS may have missed opportunities to monitor states facing the greatest oversight challenges and to identify common problem areas where it could target training and technical assistance. In order to provide better oversight of state agencies, we recommend that the Assistant Secretary for Children and Families direct OCS to take the following actions:

Conduct a risk-based assessment of state CSBG programs by systematically collecting and using information. This information may include programmatic and performance data, state and local Single Audit findings, information on state monitoring efforts and local agencies with problems, and monitoring results from other related federal programs that may be obtained by effectively using the memorandum of understanding with the Head Start program and other collaborative efforts.

Establish policies and procedures to help ensure that its on-site monitoring is focused on states with the highest risk.

Issue guidance on state responsibilities with regard to complying with the requirement to monitor local agencies at least once during each 3-year period.

Establish reporting guidance for training and technical assistance grants that would allow OCS to obtain information on the outcomes of grant-funded activities for local agencies.

Implement a strategic plan that will focus its training and technical assistance efforts on the areas in which states face the greatest needs. 
OCS should make use of risk assessments and its reviews of past training and technical assistance efforts to inform the strategic plan. We provided a draft of this report to the Department of Health and Human Services and received written comments from the agency. In its comments, HHS officials agreed with our recommendations and, in response, have planned several changes to improve CSBG oversight. Specifically, HHS officials stated that OCS is finalizing a risk-based strategy to identify state and local agencies most in need of oversight and technical assistance based on characteristics identified in state plans, audit reports, previous monitoring and performance reports, and reports from other programs administered by local agencies that receive CSBG funds. HHS officials said that this strategy will result in OCS implementing a triennial monitoring schedule they plan to have fully operational by fiscal year 2008. HHS officials also said that by October 1, 2006, OCS will issue guidance to state CSBG lead agencies to clarify the states’ statutory obligation to monitor all local entities receiving CSBG funding within a 3-year period, as well as requirements for states to execute their monitoring programs. Additionally, HHS officials said that OCS has worked with a group of local and state CSBG officials and national CSBG associations to develop a comprehensive training and technical assistance strategic plan focused on issues such as leadership, administration, fiscal controls, and data collection and reporting. See appendix V for HHS’s comments. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Assistant Secretary for Children and Families, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix VI. To gain a better understanding of oversight efforts undertaken by state and federal program offices to monitor the Community Services Block Grant (CSBG) program and ensure the accountability of funds, we examined (1) the extent to which the Department of Health and Human Services’s (HHS) oversight of states’ efforts to monitor local agencies complied with federal laws and standards, (2) the efforts selected states have made to monitor local agencies’ compliance with fiscal requirements and performance standards, and (3) the extent to which HHS targeted federal CSBG training and technical assistance funds to efforts to assist local agencies with financial or management problems and what is known about the results of the assistance. To address the first objective, we reviewed federal laws and standards to obtain information on the Office of Community Services’s (OCS) requirements and responsibilities for the oversight of states and interviewed federal officials about their oversight efforts. In addition, we obtained and reviewed available information on OCS monitoring policies and procedures; documentation of federal monitoring visits of states conducted during fiscal years 2003 through 2005; other information OCS collects from states, including state applications and performance data; and guidance issued by OCS to communicate program-related information, concerns, and priorities to grantees to assess OCS’s compliance with laws and standards. 
We also reviewed available Single Audit data for local agencies and grouped them by state to assess the percentage of local agencies with Single Audit findings at both the national and state levels reported in fiscal years 2002 and 2003, the most recent years for which information was available. The scope of this review included the District of Columbia and Puerto Rico as well as the 50 states. We assessed the reliability of Single Audit data by performing electronic and manual data testing to assess whether the data were complete and accurate. We also assessed the reliability of CSBG statistical data by interviewing officials knowledgeable about data collection and maintenance. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we interviewed federal officials with Head Start, the Low-Income Home Energy Assistance Program, and the Community Development Block Grant program, which also distribute funds to local agencies, to learn whether officials from these programs shared information with CSBG officials to support oversight efforts. To address the second objective, we reviewed federal laws and standards to obtain information on states’ CSBG oversight responsibilities and conducted site visits. We visited five states, Illinois, Missouri, Pennsylvania, Texas, and Washington, that were selected using several criteria including grant amounts, number of local agencies, state administrative structure, and analysis of Single Audit results among local agencies. CSBG association officials recommended some of these states based on promising efforts to monitor local agencies. Table 4 provides characteristics we considered for each state. During our state site visits, we interviewed and collected information from state and local officials in Illinois, Missouri, Pennsylvania, Texas, and Washington about state oversight efforts from fiscal year 2003 through fiscal year 2005. 
Specifically, we interviewed state program officials and reviewed related documentation including state guidance and directives to local agencies, application instructions, state on-site monitoring schedules, on-site monitoring guides, sample contracts, and reporting forms for local agencies. We also visited three local agencies in each state and interviewed staff to learn more about state oversight and monitoring efforts, including application processes, fiscal and performance reporting, on-site monitoring, and training and technical assistance. In each state we visited, we reviewed program files for six local agencies, including files for the three we visited and three others, that included community action plans and applications, financial and performance reports, and state monitoring reports and follow-up correspondence. In addition, we obtained information on state audit findings related to CSBG and met with state auditors during site visits to learn more about additional state oversight of CSBG and related programs and local agencies. We also interviewed state officials in the Low-Income Home Energy Assistance Program and the Community Development Block Grant programs, as well as regional HHS officials, to learn whether any coordination occurred between the programs to support state oversight efforts. Our results on the five states that we visited are not generalizable to all state CSBG programs. To address the third objective, we interviewed federal officials and contractors that provide training and technical assistance to obtain information on the extent to which OCS grants were targeted to assist agencies with problems and how they determined whether these efforts were effective. 
We obtained and reviewed training and technical assistance grant applications and progress reports for Special State Technical Assistance (SSTA) Grants and Peer-to-Peer Technical Assistance and Crisis Intervention (Peer-to-Peer) Grants for fiscal year 2002 through fiscal year 2005 to assess efforts to assist local agencies with problems and the results of these efforts. This review included applications for all 39 SSTA Grants awarded during this period, progress reports issued in 6-month intervals for the Peer-to-Peer Grant, and available final progress reports for the SSTA Grants. We were not able to obtain some SSTA Grant progress reports because the assistance was still ongoing, particularly for grants issued recently. We also interviewed a national association representative and state and local officials to learn about the results of training and technical assistance efforts. We conducted our work from July 2005 through May 2006 in accordance with generally accepted government auditing standards. Tables 5 and 6 present Single Audit data by state for local CSBG agencies (i.e., community action agencies) for 2002 and 2003, respectively. For each state, we report (1) the number of local agencies for which Single Audit data were available, (2) the percentage of local agencies in the state that had any type of Single Audit finding, (3) the percentage of local agencies that had material weakness findings, and (4) the percentage of local agencies that had material noncompliance findings. States are ranked in decreasing order by the percentage of local agencies in the state that had any type of Single Audit finding. In addition to the contact named above, Bryon Gordon (Assistant Director), Danielle Giese (Analyst-in-Charge), Janice Ceperich, Tim Hall, and Andrew Huddleston made significant contributions to this report. Curtis Groves, Matt Michaels, and Luann Moy provided assistance with research methodology and data analysis. 
Jim Rebbe provided legal counsel, and Jonathan McMurray and Lise Levie assisted with report development.

Community Services Block Grant Program: HHS Needs to Improve Monitoring of State Grantees. GAO-06-373R. Washington, D.C.: February 7, 2006.

Head Start: Comprehensive Approach to Identifying and Addressing Risks Could Help Prevent Grantee Financial Management Weaknesses. GAO-05-465T. Washington, D.C.: April 5, 2005.

Head Start: Comprehensive Approach to Identifying and Addressing Risks Could Help Prevent Grantee Financial Management Weaknesses. GAO-05-176. Washington, D.C.: February 28, 2005.

Internal Control Management and Evaluation Tool. GAO-01-1008G. Washington, D.C.: August 2001.

Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.

Grant Programs: Design Features Shape Flexibility, Accountability, and Performance Information. GAO/GGD-98-137. Washington, D.C.: June 22, 1998.
The Community Services Block Grant (CSBG) provided over $600 million to states in fiscal year 2005 to support over 1,000 local antipoverty agencies. The Department of Health and Human Services's (HHS) Office of Community Services (OCS) is primarily responsible for overseeing this grant; states have oversight responsibility for local agencies. At the request of Congress, GAO is providing information on (1) HHS's compliance with federal laws and standards in overseeing states, (2) five states' efforts to monitor local agencies, and (3) federal CSBG training and technical assistance funds targeted to local agencies with problems and the results of the assistance. States were selected based on varying numbers of local agencies and grant amounts and recommendations from associations, among other criteria. In a February 2006 letter (GAO-06-373R), GAO notified OCS that it lacked effective policies, procedures, and controls to help ensure that it fully met legal requirements for monitoring states and internal control standards. At that time, GAO also offered recommendations for improvements. OCS has responded that it intends to take actions to address each of those recommendations. In addition, GAO found that OCS did not routinely collect key information, such as results of state monitoring reports, or systematically use available information, such as state performance data, to assess the states' CSBG management risks and target monitoring efforts to states with the highest risk. All five states we visited conducted on-site monitoring of local agencies with varying frequency and performed additional oversight efforts. Two state offices visited each local agency at least once between 2003 and 2005, while the other three states visited local agencies less frequently. 
State officials we visited had different views on what they must do to meet the statutory requirement to visit local agencies at least once during each 3-year period, and OCS has not issued guidance interpreting this requirement. Officials in all five states also provided oversight in addition to monitoring through such activities as reviewing reports and coordinating with other federal and state programs. OCS targeted some training and technical assistance funds to local grantees with financial or management problems, but information on the results of this assistance is limited. In fiscal years 2002 through 2005, OCS designated between $666,000 and $1 million of its annual $10 million training and technical assistance funds to local agencies with problems, but had no process for strategically allocating these funds to areas of greatest need. In addition, the final reports on awarded grants indicated that some local agencies had improved, but the reports provided no information on the outcomes of assistance for nearly half of the 46 local agencies that GAO identified as being served.
The electricity industry includes four distinct functions: generation, transmission, distribution, and system operations. Once electricity is generated—whether by burning fossil fuels; through nuclear fission; or by harnessing wind, solar, geothermal, or hydro energy—it is sent through high-voltage, high-capacity transmission lines to areas where it will be used. Once there, electricity is transformed to a lower voltage and sent through local distribution wires for end use by industrial plants, businesses, and residential customers. Because electric energy is generated and consumed almost instantaneously, the operation of an electric power system requires that a system operator constantly balance the generation and consumption of power. Historically, the electric industry developed as a loosely connected structure of individual monopoly utility companies, each building and operating power plants and transmission and distribution lines to serve the exclusive needs of all the consumers in its local area. Because these companies were monopolies, they were overseen by regulators who balanced different stakeholder interests in order to protect consumers from unfair pricing and other undesirable behavior. Retail electricity prices were regulated by the states, generally through state public utility commissions. States retained regulatory authority over retail sales of electricity, construction of transmission lines within their boundaries, and intrastate distribution. Generally, states set retail rates based on the utility’s cost of production plus a fair rate of return. States also approved plans and spending for building new power plants to serve regulated customers. In contrast, wholesale electricity pricing and interstate transmission were regulated by the federal government, principally FERC. 
Under law, FERC has the obligation to ensure that the rates it oversees are “just and reasonable” and not “unduly discriminatory or preferential.” To meet this responsibility, FERC approved rates for transmission and wholesale sales of electricity in interstate commerce based on the utilities’ costs of production plus a fair rate of return on the utilities’ investment. Since the early 1990s, the federal government has taken a series of steps to restructure the wholesale electricity industry, generally focused on increasing competition in wholesale markets. Federal restructuring efforts have (1) changed how electricity prices are determined, replacing cost-based regulated rates with market-based pricing in many wholesale electricity markets, and (2) allowed new companies to enter electricity markets. Some of these efforts have focused on allowing nontraditional utilities to buy and sell electricity in wholesale markets, while others have focused on allowing nontraditional utilities to build new power plants and sell electricity to utilities and others. To facilitate formation of these markets and these companies’ efforts to buy and sell electricity, FERC initially required that transmission owners under its jurisdiction, generally large utilities, allow all other entities to use their transmission lines under the same prices, terms, and conditions as those they apply to themselves. To do this, FERC required the regulated monopoly utilities—which had historically owned the power plants, transmission systems, and distribution lines—to separate their generation and transmission functions, and encouraged these companies to form independent entities, called Independent System Operators (ISO), to manage the transmission network. 
In recognition that these initial efforts were not sufficient, FERC issued Order 2000 in December 1999 to encourage owners of transmission systems to develop more robust organizations, called RTOs, to manage the transmission networks and perform other functions that FERC believed were important. FERC believed RTOs were needed to address impediments to competitive wholesale markets: growing stresses on the transmission grid and remaining discrimination in the provision of transmission service—transmission owners operating their grids in a way that favored their own interests over those of their competitors. FERC Order 2000 encouraged, but did not mandate, that transmission owners join RTOs and allowed companies engaged in purchase and sale of electricity in markets to continue to own power plants, retail utilities, distribution lines, transmission lines, and other assets regulated by FERC or the states. FERC outlined minimum characteristics that RTOs were to have: independence from control by any market participant, sufficient scope to maintain reliability and support nondiscriminatory power markets, operational authority for transmission facilities under their control, and exclusive authority for maintaining the short-term reliability of the grid they operate. Appendix II describes these characteristics in more detail. 
In Order 2000, FERC opined that RTOs would achieve the following benefits: eliminate multiple charges incurred when crossing transmission systems owned by different utilities; improve management of electricity congestion—bottlenecks resulting from insufficient transmission capacity to accommodate all requests to transport power and maintain adequate safety margins for reliability; provide more accurate estimates of transmission system capacity—the amount of electric power the transmission system can manage; increase efficiency in planning for transmission and generation investments; improve grid reliability; and reduce opportunities for discriminatory transmission practices. FERC expected the formation of RTOs to result in significant cost reductions, additional efficiencies, and better wholesale market performance, ultimately lowering prices for electricity consumers. Specifically, it estimated RTOs would bring at least $2.4 billion in annual benefits to the industry. Because of their independence, FERC expected RTOs to lead to lighter regulation by reducing the need for resolving stakeholder disputes through the FERC complaint process and allowing FERC to provide additional latitude to RTOs in their transmission pricing proposals, among other things. FERC’s efforts to encourage the formation of RTOs have been relatively successful and RTOs now serve many parts of the country and extend into Canada, as figure 1 shows. FERC oversees six RTOs: California ISO, ISO New England, Midwest ISO, PJM, New York ISO, and Southwest Power Pool. The Electric Reliability Council of Texas is primarily regulated by the Public Utility Commission of Texas. RTOs operate, but do not own, electricity transmission lines and are responsible for ensuring nondiscriminatory access to these lines for all market participants. 
As shown in table 1, the six RTOs under FERC’s jurisdiction, in general, are responsible for managing transmission in their regions—by implementing the rules and transmission pricing outlined in their tariffs and performing reliability planning by considering factors such as weather conditions and equipment outages that could affect electricity supply and demand—as well as operating wholesale markets for electricity and other services. Decisions an RTO makes when carrying out these responsibilities can influence the wholesale price of electricity and ultimately the price consumers pay. A number of other factors outside an RTO’s control, such as regulator decisions about what transmission and distribution rates to approve and whether to implement price caps, also influence the prices consumers pay for electricity. Prices are also highly dependent on the cost of fuel used to generate electricity. Typically, consumer electricity prices are composed of three broad components: (1) distribution, which, for four states we contacted, accounts for about 15 to 30 percent of the final price of electricity; (2) transmission, which accounts for about 5 to 10 percent of the final price; and (3) electricity generation or production, which accounts for about 55 to 65 percent of the final price. In RTO regions, distribution rates continue to be set by state regulators, and transmission rates continue to be set by state and federal regulators. FERC also approves RTO procedures for planning transmission infrastructure, as well as the recovery of transmission expenses. The electricity generation component was previously set by regulators based on the cost of providing electricity plus a rate of return. The price of this component is now determined in RTO-administered markets—regulated by FERC to ensure they are competitive—to the extent that entities choose to buy electricity in these markets. 
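As a rough illustration of the price composition described above, the sketch below allocates a hypothetical retail bill across the three broad components. The component shares are illustrative values chosen from within the cited ranges (and chosen to sum to 100 percent); actual shares vary by state and year.

```python
# Sketch: decompose a retail electricity bill into the three broad
# components described in the report. Shares are illustrative values
# within the cited ranges (distribution 15-30 percent, transmission
# 5-10 percent, generation 55-65 percent), not data for any real state.

def decompose_bill(total_dollars, shares):
    """Allocate a bill total across components by fractional share."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("component shares must sum to 1.0")
    return {name: round(total_dollars * frac, 2) for name, frac in shares.items()}

# Hypothetical monthly residential bill of $100.
illustrative_shares = {"distribution": 0.25, "transmission": 0.10, "generation": 0.65}
components = decompose_bill(100.00, illustrative_shares)
print(components)  # {'distribution': 25.0, 'transmission': 10.0, 'generation': 65.0}
```

Because only the generation component is priced in RTO-administered markets, a change in wholesale market outcomes moves just that slice of the bill; the distribution and transmission slices remain rate-regulated.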
Some RTOs also administer markets that determine the price of other services needed to maintain reliability, such as capacity and ancillary services, in lieu of charging a cost-based rate. The generation portion of consumers’ bills may also include administratively determined payments made to generators to maintain reliability—reliability payments—as well as a FERC-approved rate to recover RTO expenses. The size of these components varies from region to region. In New England, for example, on average approximately 47 percent of a typical consumer’s bill in 2006 was for electricity, capacity, and ancillary services, the prices of which are determined through the markets this RTO administers. A very small portion of a typical consumer’s bill, less than 1 percent, was from ISO New England’s rate to recover operational and investment expenses. Figure 2 provides more information. Because RTOs charge for the use of transmission lines, for certain wholesale sales of electricity, and to recover their own expenses, they are subject to FERC oversight and regulation. In general, FERC regulates RTOs as it does other utilities. FERC’s basic rate authority stems from Sections 205 and 206 of the Federal Power Act of 1935 and is to ensure that wholesale electricity rates are just and reasonable and not unduly discriminatory or preferential. Under Section 205, FERC generally has the authority to review and approve expenses and, if applicable, a reasonable rate of return on investment used to serve customers. For RTOs, which are nonprofit entities, rates are generally based on proposed annual expenses and are periodically adjusted based on the actual expenses incurred by the RTOs. RTOs must also seek FERC approval for decisions to implement initiatives such as new markets and changes to existing markets and market rules, among other things. Section 206 authority provides for FERC review of rates already in effect. 
FERC may initiate Section 206 proceedings if it deems an investigation is needed or in response to a complaint filed by an outside party. FERC has authority to determine if these rates are just and reasonable, set new rates, and may, in some cases, order refunds. Under Section 205 or Section 206, RTOs or other parties, respectively, file evidence with FERC to support their proposed rates or rate changes. Others can file comments and present any contrary evidence under either provision. FERC conducts hearings, which may include proceedings before an administrative law judge, and makes final decisions. Parties may file appeals, first with FERC and later in federal court. From 2002 to 2006, RTO expenses totaled $4.8 billion when adjusted for inflation and varied considerably depending on the size of the RTO and functions it carried out. In general, RTOs with greater electricity transmission volume benefit from economies of scale by spreading their expenses over more units of electricity volume, thereby reducing their expenses per MWh. On a per MWh basis, RTO inflation-adjusted expenses have varied from 2002 to 2006, with ISO New England, Midwest ISO, and New York ISO expenses rising and California ISO, PJM, and Southwest Power Pool expenses decreasing. The expenses per MWh we calculated for PJM for 2002 and 2003 are significantly higher than the amounts it billed its market participants, because we did not retroactively apply financial statement reclassifications to data from prior years. Form No. 1 filings for 2006 made by the RTOs to FERC provide better visibility of transmission and market expenses than prior years’ reports did. In 2006, about 17 percent of all RTO expenses were for transmission services, 13 percent were for market expenses, 39 percent were for administrative and general expenses, and 31 percent consisted of other expenses. 
RTOs also made major investments in property, plant, and equipment—$1.6 billion when adjusted for inflation as of December 2006. From 2002 to 2006, total inflation-adjusted expenses reported in RTO financial statements totaled $4.8 billion, ranging from $227 million for Southwest Power Pool, a smaller RTO in terms of 2006 transmission volume and the number of functions it performs, to $1.4 billion for PJM, an RTO with many diverse functions and the largest 2006 transmission volume. As shown in figure 3, the largest category of expenses for RTOs over this time period was salaries and benefits, accounting for about $1.6 billion, or 33 percent of RTOs’ expenses from 2002 to 2006. According to RTO officials, due to the highly technical and sophisticated nature of the functions RTOs must carry out, RTOs require highly trained staff, such as power system engineers, economists, and software engineers. In 2006, all RTOs combined employed 2,737 full-time equivalents (FTE) with an average salary and related benefits of approximately $134,000. Appendix III shows the inflation-adjusted expenses, number of full-time equivalents, and average salary and expenses per full-time equivalent for each RTO from 2002 to 2006. Our analysis reflects total annual expenses as reported in the RTOs’ audited financial statements. We did not retroactively apply financial statement reclassifications to data from prior years. Because PJM made retroactive reclassifications that affected its 2002 and 2003 financial statements, the expenses we calculated for PJM for those years are significantly higher than the amounts it billed its market participants. In general, RTOs with greater electricity transmission volume benefit from economies of scale—spreading their expenses over more units of electricity volume, thus lowering the amount of RTO-related expenses per MWh. 
For example, PJM had the highest total inflation-adjusted expenses among RTOs in 2006—$282 million—but had the second lowest expense per MWh—$0.39 per MWh—because it transmitted a greater amount of electricity than the other RTOs. In contrast, ISO New England had the second lowest expenses in 2006—$118 million—but had the highest expense per MWh—$0.89 per MWh—because it transmitted less electricity. Figure 4 illustrates total RTO expenses in 2006 per unit of electricity transmitted by major category. Appendix IV provides transmission data and expense per MWh data by RTO from 2002 to 2006. Our analysis reflects total annual expenses as reported in the RTOs’ annual audited financial statements, divided by the amount of transmission volume within the RTO. These calculations may result in MWh expenses that differ from what RTOs charge their market participants. Furthermore, we did not retroactively apply financial statement reclassifications to data from prior years. Because PJM made retroactive reclassifications that affected its 2002 and 2003 financial statements, the expenses per MWh we calculated for PJM for those years are significantly higher than the amount it billed its market participants. For example, in 2002, PJM had expenses of $0.95 per MWh, according to our analysis. According to data provided by PJM officials that we adjusted for inflation, market participants were billed $0.51 per MWh, after refunds and other billing adjustments were taken into account. Similarly, in 2003, PJM had expenses of $0.85 per MWh according to our analysis, but market participants were billed $0.57 per MWh when adjusted for inflation. In addition, RTOs use differing billing methodologies. As a result, the rates they charge to market participants may differ from the total expenses per MWh calculated in our analysis. Table 2 shows actual electricity rates per MWh charged to RTO market participants, adjusted for inflation, from 2002 to 2006. 
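The per-MWh comparison above amounts to a simple division of audited annual expenses by transmission volume within the RTO. The sketch below reproduces the 2006 PJM and ISO New England comparison; the expense totals come from this report, but the transmission volumes shown are illustrative assumptions back-derived from the reported per-MWh figures, not actual RTO data.

```python
# Sketch of the expense-per-MWh calculation: total annual expenses from
# an RTO's audited financial statements divided by the transmission
# volume within the RTO. Volumes below are assumed for illustration only.

def expense_per_mwh(total_expenses_dollars, transmission_mwh):
    """Expenses per megawatt-hour, rounded to the cent."""
    return round(total_expenses_dollars / transmission_mwh, 2)

# PJM 2006: highest total expenses, but the largest volume spreads them thin.
pjm = expense_per_mwh(282_000_000, 723_000_000)      # volume assumed
# ISO New England 2006: lower total expenses, but far less volume.
iso_ne = expense_per_mwh(118_000_000, 132_600_000)   # volume assumed

print(pjm, iso_ne)  # 0.39 0.89
```

The division makes the economies-of-scale point directly: PJM's expenses are more than double ISO New England's, yet its per-MWh figure is less than half, because the denominator is so much larger.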
When looked at annually, inflation-adjusted RTO expenses from 2002 to 2006 have varied, reflecting new initiatives implemented by the RTOs and other changes made by management. Figure 5 illustrates changes in RTO inflation-adjusted expenses per unit of electricity transmitted over this period. Several key trends occurred over this period, with the expenses per MWh of three RTOs—Midwest ISO, New York ISO, and ISO New England—rising as they implemented major market and other initiatives. For example, during this period, Midwest ISO expanded its role from coordinating reliability, administering its tariff, and performing transmission system planning to include operating markets for energy and other services. As a result, Midwest ISO’s expenses rose in a number of areas. Salaries and benefits increased as the RTO increased its full-time equivalents from 265 in 2002 to 643 in 2006, in part, to carry out the RTO’s expanded operations. Expenses for consulting, professional, and outside services—used, in part, to develop the new markets for electricity and other services—and depreciation and amortization expenses—to recover the costs of major investments, such as information systems and infrastructure related to the electricity market—also increased from 2002 to 2006. Increases in Midwest ISO’s expenses were mitigated by its rising transmission load as it took on additional members. In contrast, California ISO’s expenses per MWh declined significantly over this time period, particularly in the areas of depreciation and amortization and facilities and maintenance. California ISO officials attributed declining expenses to an organizational focus on keeping expenses low, including a specific cost containment management initiative in 2005, and more economically advantageous contracts in a few key areas. Additionally, as noted in the graphic, PJM changed the way it reported revenues and expenses. 
Starting in 2004, PJM offset revenues and expenses related to study and interconnection fees. Had 2002 and 2003 expenses been reported as they were in later years, PJM’s inflation-adjusted expenses per MWh would have fluctuated over the period and ultimately declined from $0.52 per MWh in 2002 to $0.39 per MWh in 2006. Finally, Southwest Power Pool’s expenses per MWh declined slightly over this time period—from $0.47 per MWh to $0.37 per MWh, as increasing overall expenses were mitigated by rising transmission load. Starting in 2006, FERC required RTOs and other utilities to provide more detailed information about market and transmission expenses on their Form No. 1 filings to improve the visibility and uniformity of RTO and utility financial reporting, and we found that RTOs’ 2006 Form No. 1s are more transparent than in previous years. FERC officials told us these changes would facilitate review by FERC and the public of RTO expenses and rates. Form No. 1 filings categorize expenses according to two key functions RTOs perform—transmission coordination and market operation—as well as other categories such as administrative and general expenses. In 2006, about 17 percent of all RTO inflation-adjusted expenses were for transmission services, 13 percent were for market expenses, 39 percent were for administrative and general expenses, and 31 percent consisted of other expenses. Figure 6 provides information reported in the Form No. 1 about each of the RTOs’ expenses. Appendix V shows 2006 RTO inflation-adjusted expenses as reported on the FERC Form No. 1. Transmission expenses cover the cost of providing reliability services and monitoring and operating the transmission systems, among other things. Market expenses include the cost of administering markets for electricity and other services, monitoring markets for competitiveness, and related computer software and hardware maintenance, among other things. 
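The inflation adjustments referred to throughout this analysis restate nominal expenses in constant base-year dollars by scaling with a price index. A minimal sketch of that calculation follows; the index values used here are hypothetical placeholders, not the deflator actually applied in the report.

```python
# Sketch: restate nominal expenses in constant 2006 dollars using a
# price index. The annual index values below are hypothetical and serve
# only to illustrate the mechanics of the adjustment.

HYPOTHETICAL_INDEX = {2002: 179.9, 2003: 184.0, 2004: 188.9, 2005: 195.3, 2006: 201.6}

def to_constant_dollars(nominal_dollars, year, base_year=2006, index=HYPOTHETICAL_INDEX):
    """Scale a nominal amount by the ratio of base-year to spend-year index."""
    return nominal_dollars * index[base_year] / index[year]

# A hypothetical $100 million expense incurred in 2002, restated in
# 2006 dollars, grows because price levels rose over the period.
real_2006 = to_constant_dollars(100_000_000, 2002)
print(round(real_2006))
```

Restating every year's expenses in the same base-year dollars is what makes the year-over-year per-MWh comparisons in figure 5 meaningful; without it, part of any apparent expense growth would simply reflect inflation.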
Administrative and general expenses consist of employee salaries and benefits, rent, and outside services, among other things. The six RTOs whose financial statements we reviewed have made investments in property, plant, and equipment. Total inflation-adjusted investment for all RTOs was $1.6 billion as of December 31, 2006, without adjusting for accumulated depreciation. Software and equipment was the largest category of investment at each of the RTOs, as shown in figure 7, and was used by the RTOs to provide various transmission and market services across regions. For example, in 2005, ISO New England began construction of a replacement control center equipped with computer hardware and software to deploy generators, forecast electricity requirements, ensure load is not interrupted in the event of a contingency, and conduct and monitor electricity transfers with other RTOs. Appendix VI shows RTOs’ investments in property, plant, and equipment as of December 2006. RTOs consider stakeholder comments when reviewing RTO expenses and other decisions that may affect electricity prices. In the two RTOs we visited, stakeholders said they valued the opportunity for discussion with the RTOs, but some stakeholders expressed concern that attending meetings was resource intensive and that too little emphasis was placed on how decisions might affect the prices consumers pay for electricity. Furthermore, though RTO budgets offer one tool FERC could use to revisit whether rates remain just and reasonable between rate proceedings, the extent to which FERC reviews proposed expense information in RTO budgets varies. Additionally, although FERC annually requires RTOs to report the actual expenses they incurred, FERC staff have not regularly reviewed or audited these submissions for accuracy and do not look at them for reasonableness. Instead, FERC relies heavily on stakeholders to raise concerns over proposed expenses and other decisions that may affect consumer electricity prices. 
According to senior RTO officials, RTO boards and staff give much consideration to stakeholder comments when reviewing RTO expenses and making decisions that affect electricity prices. They told us that while RTO decisions are independent—stakeholder input is generally advisory— stakeholders play an important role in evaluating RTOs’ operations and plans. In particular, although RTOs conduct internal reviews of their proposed expenses, establish controls for reviewing the prudence of expenses, and may perform formal cost-benefit analysis on major initiatives, officials told us stakeholder comments are one of the most important factors when reviewing expenses and making decisions. In general, RTOs solicit comments from stakeholders about their opinions on decisions to modify new market rules, changes to governing documents, and budgets and expenses, among other things. According to RTO officials, in some instances, RTOs are required to secure affirmative stakeholder votes on these decisions prior to proceeding. Specific issues for discussion may be raised by the RTOs, stakeholders, or in response to FERC orders or directives. Stakeholders generally provide input to the RTO boards of directors in three ways––written communications, oral discussions, and votes––although each RTO has a unique process for soliciting this input, as shown in table 3. RTO officials told us that these processes were developed after extensive negotiations with stakeholders when each RTO was formed. To ensure stakeholder input reflects a range of interests, five of the six RTOs we reviewed group stakeholders with common interests, such as electric distribution companies, transmission owners, and end users. All six of the RTOs we reviewed involve state regulators in their decision-making process, either formally as a unique stakeholder group or informally as participants who attend stakeholder meetings. 
Though state regulators are not prohibited from voting in stakeholder meetings, most have chosen to participate formally in the process but not vote. Additionally, in several RTO areas, state regulators have formed organizations to collectively represent their interests and advise the RTO. For instance, state regulators in the Midwest ISO formed the Organization of MISO States to discuss what decisions the RTO should make and participate in stakeholder meetings. In general, stakeholders participate in the RTO decision-making process through a primary committee that reports to the board of directors and a range of lower-level committees and working groups that report to the primary committee. Lower-level committees and working groups tend to focus on narrow subjects or specific initiatives such as development of specific markets or proposed changes to existing rules, and lower-level committees often involve stakeholders with expertise in the specific subject matter. The primary committee and lower-level committees and working groups hold regular or episodic meetings that stakeholders participate in. These meetings are open to participation by any stakeholder with an interest in attending. As shown above, stakeholders representing many perspectives, from generators to groups representing consumers, participate. Because of the numerous, simultaneous matters under consideration, there can be many meetings potentially relevant to stakeholders. Subjects discussed and analyzed in lower-level committee and working group meetings are eventually raised for discussion at the primary committee meeting, where a vote is taken about whether to recommend a decision be pursued by the board of directors. (See fig. 8 for an example of the Midwest ISO’s committee structure. Midwest ISO’s primary committee is called the Advisory Committee.) 
RTO staff may facilitate discussions within the primary committee, as well as lower-level committees and working groups, and may also prepare analyses to help stakeholders understand how a decision might affect them. For example, as agreed to when its RTO status was approved, Southwest Power Pool must develop a cost-benefit analysis before making the decision to implement a new market rather than relying on cost-based pricing of a service. Other RTO officials told us that although they may develop formal cost-benefit analyses for some major decisions, such as changes to key market rules, the stakeholder process is a key way in which the cost and benefits of a decision are discussed. Most RTOs have a specific lower-level committee to review and analyze RTO budgets that contain information about proposed expenses. According to RTO officials, RTOs and stakeholders discuss and jointly determine organizational priorities, which influence the RTO’s preparation of a draft budget. Stakeholders serving on the budget committee review the budget’s proposed expenses and provide recommendations. Discussion of the budget is then taken up by the primary stakeholder committee, which then votes whether to recommend to the board that the budget be adopted. The composition of the subcommittee that initially reviews the budget differs among the six RTOs. For example, PJM’s budget committee consists of equal representation from each formal stakeholder group plus two members of the independent board. ISO New England’s budget committee is open to participation by any stakeholder. Most stakeholders we spoke with in the two RTOs we visited—ISO New England and Midwest ISO—valued the opportunity for discussion with their respective RTOs and believed that RTOs facilitate an open and democratic process that focuses on reaching consensus among stakeholders. 
However, most stakeholders in these two RTOs found the process resource intensive, specifically the stakeholder meetings, which require staff time and travel costs. RTOs may carry out hundreds of stakeholder meetings annually, as shown in table 4. Stakeholders must prepare for meetings by reviewing documentation and preparing comments, and the ability of stakeholders we spoke with to do so varied significantly. Individual stakeholders in the two RTO regions we visited estimated they devoted a range of time—from less than one-half of a full-time equivalent to 5 full-time equivalents—to stakeholder involvement annually. In some cases, stakeholders told us they are not able to attend all the meetings they would like to because of resource constraints. For example, stakeholders from ISO New England’s public power sector told us they often have to rely on other stakeholders to attend meetings in their place, because they lack the resources to participate themselves. Many stakeholders told us they believe the level of their participation determines their influence on RTO decisions. In the two RTOs we visited, many stakeholders representing and serving consumers, such as consumer advocates and state commissioners, were concerned that RTOs do not place adequate emphasis on assessing how decisions affect consumer electricity prices—decisions such as whether to build new transmission lines, when to create markets for services in lieu of charging cost-based rates, and how to ensure reliability. Several of these stakeholders believed that RTOs overemphasize ensuring reliability without full consideration as to whether lower-cost options are available. For example, some ISO New England stakeholders we spoke to believed the RTO was overly conservative when determining whether noncompetitive generators were needed for reliability. They believed that, as a result, the RTO entered into unnecessary and costly contracts to keep these inefficient generators running. 
They observed that this could lead to higher consumer electricity prices, which they did not believe were justified, since they did not agree the generators were needed to ensure electricity was delivered reliably. Moreover, one stakeholder we spoke to was concerned that the costs of operating these generators, which may benefit only certain local areas, were unfairly borne by consumers outside those local areas. Officials from ISO New England acknowledged that there can be trade-offs between reliability and costs, but said transmission-planning efforts and their new capacity market are effective in keeping payments for reliability as low as possible. They and other RTO officials explained that fulfilling their mission of ensuring reliability and efficient markets will minimize consumer prices in the long run. A number of stakeholders representing and serving consumers in these two regions were concerned, however, that the RTOs do not conduct enough cost-benefit analyses of how decisions may affect electricity prices. Others felt they had inadequate access to data and resources to conduct such analyses themselves. Some RTO officials told us that while they always consider the costs and benefits of a decision before making it, formal cost-benefit analysis may not always be practical, because it is difficult to estimate the potential impact of a decision on electricity prices, how benefits and costs could change over time, the appropriate assumptions to be made, and how different stakeholders are affected. They noted that individual stakeholders already give much consideration to the costs and benefits of a given decision when discussing it during stakeholder meetings. 
There was disagreement among stakeholders in ISO New England and Midwest ISO about which groups have, and should have, more influence with RTOs; however, many stakeholders agreed that participating in stakeholder meetings and, in particular, participating in lower-level committees and working groups, provided the best opportunity to influence RTOs’ proposed expenses and decisions that may affect electricity prices. Although most stakeholders we spoke with thought ISO New England and Midwest ISO worked hard to solicit comments from all stakeholders, many believed that when making decisions, the RTOs deferred more to certain stakeholders and that because RTOs were created through the voluntary agreement of the transmission owners, the RTOs were more likely to defer to their interests than to others’. Other stakeholder groups we spoke with in ISO New England and Midwest ISO commented that state regulators have a large influence on the RTOs’ decisions. A number of state public utility commission officials disagreed with this view. In particular, one state regulator stated that because state regulators are charged with protecting the public interest, their opinions should carry greater weight than those of participants whose interests are primarily profit-oriented. The frequency of FERC’s review of proposed RTO expenses varies, with reviews of certain expenses not being conducted for years at a time. FERC’s review of proposed expenses occurs when it conducts a proceeding to evaluate whether the rate an RTO charges customers to recover these expenses is just and reasonable and not unduly discriminatory or preferential. Because of variation in the manner and frequency with which rate proceedings are conducted, FERC’s consideration of proposed RTO expenses can be infrequent. For example, in 2001, FERC conditionally approved Midwest ISO’s rate for recovering expenses associated with administering its tariff and ensuring reliability. 
Because Midwest ISO has not since asked to change its rate for recovering these expenses, FERC has not reviewed these expenses since 2001. FERC officials explained that more frequent review of proposed RTO expenses is not necessary because RTO expenses and decisions undergo much scrutiny during the RTO stakeholder process. Moreover, according to these officials, stakeholders are in the best position to know whether RTO expenses are prudent and reasonable. As a regulator, FERC may initiate a new rate proceeding if it believes an RTO’s rates are no longer just and reasonable. While, as FERC points out, stakeholder comments and complaints are an important piece of FERC’s consideration, more frequent review of proposed expenses could also aid FERC in determining whether a rate remains just and reasonable. Table 5 shows when each RTO’s rate for recovering expenses was last approved. RTOs annually develop budgets that contain extensive information on proposed expenses; however, FERC’s use of RTO budgets as a tool in reviewing proposed RTO expenses varies. For example, ISO New England agreed with its stakeholders to submit operational and capital budgets to FERC for annual approval. Southwest Power Pool submits annual copies of its operating and capital budgets for informational purposes, rather than for FERC approval. The other RTOs either do not submit budgets or do so infrequently, despite the fact that these budgets could provide FERC with potentially valuable information about proposed RTO expenses that could help it in ensuring the rates RTOs charge customers are just and reasonable. For example, FERC could use such information to regularly benchmark RTO spending on key categories, such as market oversight or capital investments. (Table 6 outlines the frequency with which RTOs submit budgets to FERC for review.) 
FERC officials pointed out that FERC staff sometimes attend stakeholder meetings, including discussions about the budget, to observe what concerns stakeholders raise. They also noted that RTOs post their budgets on their Web sites annually, allowing FERC and the public to view them if so desired. Some representatives of stakeholder groups, including public utility commissions, consumer groups, and the publicly owned sector, expressed concerns over FERC’s infrequent review of budgets or lack of independent analysis of proposed RTO expenses. They expressed concern that FERC deferred too much to the stakeholder process within the RTOs, assuming stakeholders had adequately resolved all concerns. These stakeholders were concerned that without more scrutiny of proposed expenses, FERC could not be sure that the RTOs were as cost-effective as possible. We found that RTO expenses may change over time, and some—such as expenses for outside consultants—may decrease between the times FERC reviews the rates. Furthermore, without more consistency in how FERC reviews proposed expenses, customers may not fully benefit from potential improvements or efficiencies RTOs achieve. For example, for the 2008 Midwest ISO budget, expenses for outside services, as approved by the finance subcommittee and the board of directors, decreased by 24.4 percent, while net operating expense increased by 1.2 percent. The total cost of salaries and benefits increased by 10 percent, offsetting some of the increased efficiency in the area of outside services. In the stakeholder process for the 2007 budget, the finance subcommittee expressed concerns about the continued increase in staffing levels and how that need was determined. They recommended that Midwest ISO develop financial metrics to evaluate and compare its financial results. 
Because FERC did not regularly review Midwest ISO’s proposed expenses, it may have missed an opportunity to determine whether Midwest ISO’s salaries were reasonable and to ensure that Midwest ISO customers benefited from lower outside service expenses. More broadly, without regular, recurring analysis of RTO expenses, such as through review of RTO budgets, it is not clear that FERC is as well positioned as it could be to know whether certain expenses are reasonable and RTOs are as cost-effective as possible. Such knowledge could supplement comments from stakeholders and help FERC determine whether rates remain just and reasonable or when a new rate case should be initiated. FERC does not routinely review or assess the accuracy or reasonableness of expenses RTOs report annually using the Form No. 1. FERC officials told us they use the financial information in the Form No. 1 to carry out FERC’s responsibilities and post this information to their Web site for use by public utility customers, state commissions, and the public so that they can assess the reasonableness of electric rates. However, during the course of our work, FERC officials told us they did not routinely audit or review the Form No. 1s for accuracy or completeness. When we began our work, FERC had not audited any RTO FERC Form No. 1 filings for accuracy or completeness, although in 2004 it performed some limited review of the Form No. 1s during the course of other audits. In May 2008, FERC initiated an audit of Midwest ISO that includes a more in-depth examination of its Form No. 1. FERC officials told us it is the RTOs’ responsibility to ensure that the FERC Form No. 1 filings are accurate and complete and said that FERC requires public accounting firms to attest that they have audited RTOs’ balance sheets, statements of income, retained earnings, and cash flows contained in their Form No. 1s in conformity with FERC’s Uniform System of Accounts requirements.
Auditor opinions confirm that CPAs audit the above statements in the Form No. 1 but may not audit all supporting schedules. Without more regular audits and review of actual expense information for accuracy, FERC may be at risk of unknowingly using and providing to the public inaccurate and incomplete RTO financial data, limiting the effectiveness of the Form No. 1 as a tool for determining whether rates are just and reasonable. For example, during the course of our audit work, we noted a significant reporting error on Southwest Power Pool’s 2006 Form No. 1 filing. In 2006, Southwest Power Pool reported $88 million in rent and $175 million in maintenance of general plant expenses; however, we noted actual rent and maintenance of general plant expenses were $830,000 and $440,000, respectively. FERC officials said that in 2006 several RTOs experienced problems using FERC’s software program to file their Form No. 1s, due to an unforeseen delay in implementing software updates. To correct the errors, a revised schedule was added to Southwest Power Pool’s 2006 Form No. 1 filing. However, maintenance of general plant expenses was still overstated in the revised schedule by approximately $3 million, and the revised schedule was not clearly referenced by the original schedule. FERC said the error did not affect electricity rates; however, the overstated expense information remained posted on FERC’s Web site for over a year, where public utility customers, state commissions, the public, and other parties that may be interested in reviewing RTOs’ expenses could access it. In August 2008, Southwest Power Pool submitted a revised FERC Form No. 1 that corrects the error. Furthermore, according to FERC officials, the Office of Enforcement is taking steps to incorporate a system of electronic data validation checks into the FERC Form No. 1 submission software to help ensure the accuracy of the FERC Form No. 1 filings before they are submitted. 
FERC anticipates having the validation checks in place for the 2008 FERC Form No. 1 submission year and told us that once the checks are implemented, an error like the one identified at Southwest Power Pool can be corrected prior to the entity submitting its FERC Form No. 1 filing. Because these checks have not yet been implemented, we cannot review their effectiveness. We believe that while they will likely help identify and correct some reporting errors, they do not constitute the comprehensive review of the Form No. 1s for accuracy and completeness that FERC staff could perform through audits or other review. FERC does not routinely review RTOs’ reported expenses to ensure that they are reasonable, noting that Form No. 1 information on expenses is made public and interested parties can file a complaint about their concerns. FERC officials from the Office of Energy Market Regulation observed that the Form No. 1 might sometimes be used to detect potentially unreasonable expenses but told us they do not analyze them due to limited resources. Moreover, although FERC compared expenses across RTOs in 2004 as a means to estimate the potential expense involved in creating new RTOs, FERC officials do not regularly compare expenses across RTOs or create expense benchmarks to use as an analytical tool in evaluating just and reasonable rates or as a way of determining whether efficiencies realized by one RTO could be applied to another. FERC and RTO officials said that the varied nature of RTO functions would make regular comparison of actual RTO expenses challenging and of limited value. Several stakeholders we spoke with, including a former RTO executive, disagreed, observing that comparisons among RTOs could help raise questions about the appropriateness of expenses. 
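As an illustration of what automated checks of this kind might look like, the sketch below flags filing fields that diverge sharply from a filer's prior-year values before submission. The field names, prior-year baselines, and the tenfold threshold are assumptions for illustration only, not FERC's actual validation rules.

```python
# Sketch of a pre-submission validation check for Form No. 1 fields.
# Field names, prior-year values, and the 10x threshold are assumptions
# for illustration, not FERC's actual validation rules.

def flag_outliers(current, prior, ratio_threshold=10.0):
    """Return fields whose reported value differs from the prior
    year's value by more than ratio_threshold in either direction."""
    flags = []
    for field, value in current.items():
        baseline = prior.get(field)
        if baseline and value > 0:
            ratio = value / baseline
            if ratio > ratio_threshold or ratio < 1.0 / ratio_threshold:
                flags.append((field, value, baseline))
    return flags

# Echoing the Southwest Power Pool error: roughly $88 million reported
# where the actual rent expense was about $830,000.
current_filing = {"rent_expense": 88_000_000, "maintenance_general_plant": 175_000_000}
prior_filing = {"rent_expense": 830_000, "maintenance_general_plant": 440_000}
for field, value, baseline in flag_outliers(current_filing, prior_filing):
    print(f"CHECK {field}: reported {value:,} vs prior year {baseline:,}")
```

A check of this sort would have surfaced both overstated line items before the filing was posted, although, as the report notes, such checks are not a substitute for a comprehensive staff review.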
Without reviewing actual RTO expenses for reasonableness, FERC may not be as well positioned as it could be to ensure that the rates RTOs charge to recover their expenses are just and reasonable and that RTO funds were spent as FERC and the stakeholders approved. FERC relies heavily on stakeholders to raise concerns about RTO expenses and other decisions with the potential to affect electricity prices. FERC officials acknowledged that the process through which RTO stakeholders review information on proposed expenses contained in RTO budgets is integral to identifying imprudent and unreasonable expenses between RTO rate cases. Parties who disagree with RTO expenses can file comments when an RTO’s rate for recovering these expenses is being evaluated at FERC during rate-setting proceedings. In one instance, in November 2005, the Attorneys General of Connecticut and Massachusetts submitted comments to FERC about ISO New England’s proposed 2006 budget, contesting executive salaries that they believed were unnecessarily high. FERC found the proposed salary expenses to be just and reasonable after reviewing the entire record in the proceeding, including all comments and ISO New England’s comments that surveys and benchmarks showed the salaries were competitive. However, FERC did not perform any independent analysis of ISO New England salaries or review the surveys or benchmarks ISO New England cited. FERC also did not conduct comparisons of salaries across RTOs, although FERC officials said that had this information been introduced into the record, FERC would have considered it. As with stakeholder review of proposed expenses, FERC officials told us the Form No. 1 is a tool to provide stakeholders with ready access to data needed to assess the prudence of actual RTO expenses, and that its information is key to stakeholders knowing when a new rate case may be needed.
FERC also explained that stakeholders can file a complaint that rates are not just and reasonable at any time. However, several stakeholders told us that because FERC places the burden of proof on the complaining party, it is difficult and resource-intensive to file a complaint. These stakeholders told us that they typically lack the staff and resources to file a complaint and said that it is difficult to obtain the data and conduct the analysis necessary to support it. For example, one state regulator noted that the data needed to show that expenses are not just and reasonable are typically proprietary and that such complaints are difficult to win, since the burden of proof is high. FERC officials confirmed that they have heard over the years that it can be challenging to make complaints and win. They said consumer groups sometimes felt they were at a disadvantage compared to transmission owners and generators because they have fewer resources, including staffing and funding, to file and support complaints. FERC officials also noted that if an evidentiary hearing was deemed necessary, their staff might provide some analytical assistance. As in its reviews of expenses, FERC also places much emphasis on the stakeholder process when reviewing RTO decisions with the potential to affect electricity prices, and FERC offers stakeholders the opportunity to provide additional evidence for its consideration prior to making a final decision. For example, in 2006, FERC conducted a proceeding related to a proposed PJM decision to develop a capacity market—a market designed to attract new generation and other resources to ensure PJM can meet future electricity needs. PJM’s proposal resulted from years of work and numerous stakeholder meetings. Additionally, PJM and numerous parties submitted thousands of pages of comments in support of and against the proposed decision, which FERC evaluated. FERC issued a final order on this proceeding in December 2006.
In May 2008, numerous stakeholders, including public utility commissions and consumer advocacy groups, filed a complaint with FERC alleging the initial model PJM used for establishing the price of capacity produced excessively high prices and did not deliver commensurate benefits. Complainants are asking for rate relief, which they estimate to be about $12 billion. The Maryland Office of the People’s Counsel calculates that excess charges to Maryland residential customers will average $570 over 3 years. FERC evaluated the merits of this complaint and supporting documents. On September 18, 2008, it dismissed the complaint but granted a request for a technical conference to determine if further action would better achieve this market’s goals. Experts, industry participants, and FERC lack consensus about whether RTOs have provided net benefits to consumers. Many key experts and industry participants agree that RTOs can provide certain benefits, such as more efficient management of the transmission grid and improved access by independent generators. However, there is some disagreement about whether RTOs’ access to additional lower-cost generating resources has led to electricity prices for consumers that are lower than they otherwise would have been. Furthermore, experts and industry participants are divided on the benefits of RTO markets and their effect on consumer electricity prices. Some critics of RTO markets believe that RTO markets have not fully achieved anticipated benefits and contribute to higher consumer electricity prices, while proponents believe RTO markets have kept prices lower than they otherwise would have been. Some RTOs have developed assessments to demonstrate the benefits they have provided to their regions. 
FERC officials share the view that RTOs have resulted in benefits to the economy, such as new efficiencies in operating the regional transmission grid, but FERC has not conducted an empirical analysis to measure whether these benefits were realized or developed a comprehensive set of publicly available, standardized measures that can be used to evaluate RTO performance. Many industry participants and experts agree that RTOs provide opportunities for more efficient management of the transmission grid and can improve access by independent generators. They believe that because RTOs integrate multiple transmission systems into a larger service area, they have broader knowledge of the grid’s transmission capacity and wider perspective on events that can affect reliability, allowing them to more efficiently manage the grid. For example, Midwest ISO now centrally controls operation of a vast transmission network spanning 15 states that was once overseen by 24 different system operators who had to work together to address any reliability problems such as the unexpected loss of a key transmission line or power plant. Some also believe that because RTOs integrate multiple transmission systems into a larger service area, they keep electricity buyers and sellers from paying multiple fees for each transmission network they use—previously a disincentive to trade power across multiple utilities’ transmission systems. In addition to the benefits of centralized management of the transmission grid, many experts and industry participants believe RTOs have improved independent generators’ access by reducing discrimination. They note that because RTOs operate the grid independently and do not own generation or transmission resources themselves, they have no incentive to discriminate when providing transmission access. 
According to a representative of independent developers of new generation we spoke to, this improved access has allowed new generators to more easily connect to and use the transmission system. A representative of buyers of power, on the other hand, told us this improved access has allowed buyers of power opportunities to purchase electricity from new suppliers, although this representative questioned whether the prices they receive for that electricity are better. Despite much agreement that RTOs have provided opportunities for more efficient management of the transmission grid and improved access, some industry participants we spoke with believed RTOs were not the only way to provide these benefits. They question whether similar benefits could be achieved using other mechanisms, such as power pools—groups of utilities that have entered into agreements to coordinate electricity supply, like those that have existed along the East Coast for more than 30 years. Many experts and industry participants agree that RTOs are better positioned than individual utilities to make use of lower-cost generators more frequently, although they do not agree whether this has resulted in electricity prices for consumers that are lower than they otherwise would have been. By overseeing a region formerly run by many individual utilities, RTOs have more generators at their disposal than the individual utilities did. Because RTOs generally use the generators with the lowest bid first—according to some, the least costly and most fuel efficient—they may be able to more efficiently meet requirements for electricity reserves, lower the cost of producing electricity, and use fuel more efficiently. However, some industry participants we spoke with questioned whether this has kept electricity prices for consumers lower than they otherwise would have been. 
They noted that generator bids may not always reflect their costs of production and that in some cases, lower costs of production have led to higher profits for generators rather than lower consumer prices. Experts and industry participants are divided on whether RTO efforts to create and oversee markets have lowered electricity prices and led to other benefits, such as improved generator efficiency and more investment in electricity infrastructure. Studies of restructuring draw differing conclusions.

Experts and Industry Participants Are Divided on RTOs’ Influence on Electricity Prices

Experts and industry participants debate how RTO markets have influenced the prices consumers pay for electricity. Critics of RTO markets believe these markets have not fully achieved anticipated benefits and have contributed to the higher prices for electricity seen by consumers, because markets are expensive to establish and operate and, as currently designed, produce higher wholesale prices than would otherwise occur. RTO markets use multiple types of generators—coal, nuclear, natural gas, and others—in satisfying consumer demand, and the different costs of fuels for these generators, among other factors, contribute to different costs of electricity production. RTO markets select the smallest amount of generating resources needed each day to provide reliable service. To do so, these markets generally rank and accept generator bids in the market in order of lowest to highest and pay generators, regardless of their costs of production or fuel, the price bid by the last generating unit needed to satisfy demand. Critics believe this pricing approach reduces the benefits for consumers of using varied types of generators, because low-cost generators, like nuclear and coal plants, receive the same price as higher-cost generators, like natural gas plants, when higher-cost generators are needed to satisfy demand.
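The bid-ranking approach described above, a uniform-price auction, can be sketched as follows. The generator names, bid prices, and demand level are hypothetical illustrations, not actual market data.

```python
# Sketch of the uniform-price clearing described above: bids are
# accepted from lowest to highest until demand is met, and every
# accepted generator is paid the last (marginal) accepted bid.
# Generator names, bids, and the demand level are illustrative.

def clear_market(bids, demand_mw):
    """bids: list of (name, capacity_mw, price_per_mwh) tuples.
    Returns (clearing_price, accepted) where accepted maps
    generator name -> dispatched MW."""
    accepted, remaining = {}, demand_mw
    clearing_price = 0.0
    for name, capacity, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        accepted[name] = take
        remaining -= take
        clearing_price = price  # set by the last unit needed
    return clearing_price, accepted

bids = [
    ("nuclear", 500, 20.0),     # low-cost baseload
    ("coal", 400, 35.0),
    ("gas_peaker", 300, 90.0),  # high-cost marginal unit
]
price, dispatch = clear_market(bids, demand_mw=1000)
print(price, dispatch)  # every accepted unit receives the marginal bid
```

In this sketch, the low-cost nuclear and coal units are paid the same price as the gas peaker whenever the peaker is needed to meet demand, which is precisely the feature critics and supporters interpret differently.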
Supporters of RTOs believe this pricing approach, by rewarding low-cost generators, promotes efficiency and provides an incentive for new low-cost generators to enter the market, leading to lower prices in the long run than otherwise would have been the case. They note that price transparency in RTO markets is valuable and can signal profit-making opportunities for potential new entrants. They believe that this, coupled with improved access to the grid, can encourage market entry by, among others, developers of renewable energy sources, such as wind power. Proponents of RTO markets observe that price transparency may also encourage demand response—consumers lowering electricity usage in response to price signals—which can lead to lower, less volatile prices. RTO officials explained that while RTO markets establish wholesale prices for electricity traded in them, a number of other factors also influence the price consumers ultimately pay. Furthermore, much electricity is supplied from sources outside RTO markets, for example, when utilities use their own generators to self-supply or when two parties directly negotiate a transaction with each other. However, critics believe that the pricing approach used by RTO markets has led to higher prices for directly negotiated contracts as well, because low-cost generators recognize that they can often receive the price bid by higher-cost generators in the RTO marketplace. A state-by-state analysis of electricity prices reveals differences between RTO and non-RTO regions that have likely led to concerns about the impact of RTO markets on electricity prices. We considered retail electricity prices in four regions of the country: (1) original RTO states—states that joined an RTO in 1999 or earlier and were historically in a power pool, (2) new RTO states—states in an RTO region after 1999, (3) non-RTO states—states outside RTO regions, and (4) California.
As shown in figure 9, 11 of the 17 states with above-average retail electricity prices are in the original RTO group. California also had above average prices in 2007. To further understand the basis for these disagreements, we analyzed retail electricity prices for industrial customers, because we believe that trends in industrial prices more closely reflect trends in wholesale prices, which RTOs are most capable of influencing. However, this relationship is not perfect, because, as noted earlier in the report, many other factors influence retail prices. Furthermore, numerous wholesale transactions occur outside RTO markets. As shown in figure 10, inflation-adjusted electricity prices for industrial consumers have been consistently higher in the original RTO states than in the new and non-RTO states over the entire period. Prices in the original RTO states fell from 1990 to 1999 but have since risen close to prior levels. However, in recent years, the rate of price increases in the original RTO states has generally been higher than in the non-RTO states. It is important to note that this price analysis does not isolate the impact of RTOs on prices. It is not possible to draw conclusions about what impact the establishment of RTOs has had on electricity prices without properly accounting for and isolating the impacts of other factors, such as the cost of fuels used to generate electricity, changes in the fuel mix, and changes in consumer demand. Experts generally agree that fuel prices play a large role in determining electricity prices. However, they disagree about the magnitude of their influence. Prices for fuels commonly used to generate electricity—such as coal and natural gas—have increased in recent years, with prices of natural gas rising more dramatically than those for coal over this period. Figure 11 illustrates how average prices of fuels used in the electricity sector have changed from 1996 through 2006. 
Compounding this overall trend, the original RTO region tends to rely more heavily on natural gas than the non-RTO region. Proponents of RTOs acknowledge that consumer electricity prices have increased in RTO regions, but they believe that higher fuel prices, greater demand for electricity, increasing costs for infrastructure needed after years of underinvestment, the high costs of complying with environmental regulations, and regulatory decisions made by states about transmission and distribution rates are the principal reasons for rising electricity prices across the country and in RTO regions. They believe RTO markets have kept prices to consumers lower than they otherwise would have been. Critics of RTO markets disagree, observing that problems with RTO markets have exacerbated the effect of other factors, such as higher fuel prices, on electricity prices.

Experts and Industry Participants Disagree on RTOs’ Influence on Generator Plant Efficiency

Experts and industry participants are also divided about the ways in which RTO markets may influence how efficiently existing plants are used. Some believe prices established competitively in RTO markets have given generators an incentive to improve the maintenance and operation of their facilities and operate them a greater percentage of the time, thereby improving efficiency and lowering the overall cost of generating electricity. By operating plants more efficiently, generators can better compete against rival bidders, resulting in either greater profits for themselves, lower prices to consumers, or both. Some studies conclude that nuclear plants in RTO and restructured regions have increased their capacity factors—the electricity generated by a plant as a percentage of that plant’s maximum capacity to generate electricity.
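The capacity factor defined above is a simple ratio, computed below for a hypothetical plant; the figures are illustrative, not data from our analysis.

```python
# Sketch of the capacity factor calculation defined above: electricity
# actually generated as a percentage of what the plant could have
# generated running at full capacity for the entire period.
# The plant figures are hypothetical.

def capacity_factor(generated_mwh, capacity_mw, hours):
    """Capacity factor, in percent, over a period of `hours` hours."""
    return generated_mwh / (capacity_mw * hours) * 100.0

# A hypothetical 1,000 MW nuclear unit generating 7.9 million MWh
# over one year (8,760 hours):
cf = capacity_factor(7_900_000, 1000, 8760)
print(f"{cf:.1f}%")  # prints 90.2%
```

A rising capacity factor means a plant is producing a larger share of its theoretical maximum output, which is why the measure features in debates over whether restructuring improved plant utilization.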
As seen in figure 12, our analysis illustrates that nuclear plant capacity factors show more pronounced improvement in recent years in the original RTO states and new RTO states than in the non-RTO group. We did not attempt to account for other potential causes for this improvement, such as technological or institutional factors that may have improved efficiencies prior to the advent of restructuring and RTO markets, or to determine whether aggregate trends were the result of widespread efficiency improvements or a few improved generating units. While many agree that the results of capacity factor analysis would inform discussions of the benefits of RTO markets, they do not agree on how to isolate the influence of these markets and restructuring on capacity factors or determine whether improvements preceded restructuring changes or resulted from them. Some experts and industry participants believe improved generator efficiency at existing plants benefits consumers because it reduces the need to construct new generating plants and allows less expensive generating options, such as previously constructed nuclear plants, to satisfy a greater portion of electricity demand. Others question the role of RTO markets and restructuring in improving nuclear plant generator efficiency and whether efficiencies have resulted in lower prices for consumers than would have otherwise occurred.

Experts and Industry Participants Disagree about RTO Influence on Infrastructure Investment

There is also disagreement about whether RTOs have led to other regional benefits, such as increased construction of transmission and generation infrastructure. For example, some industry participants and experts believe that the practice some RTOs employ of pricing electricity differently at various locations to reflect the costs of transmission congestion provides valuable signals by indicating where additional generation or transmission is needed.
Some critics, however, charge that this method of pricing electricity has not produced the expected investment in transmission and generation in the locations where it is needed. Furthermore, they believe this practice, combined with what they characterize as limited competition in RTO markets, allows generators to keep their bids high and earn excess profits.

Studies of Restructuring and RTOs Draw Differing Conclusions

In order to weigh in on these issues, a number of academics and private consulting firms have conducted studies about the benefits of restructuring and RTOs and their effect on electricity prices, although their studies have drawn differing conclusions. Some of these studies seek to isolate the effect of restructuring and RTO membership from other factors, such as fuel prices, to determine whether restructuring and RTOs themselves have influenced prices and led to other benefits. We identify and describe in appendix VIII a selection of 13 studies that are representative of these varied conclusions. Several of the studies conclude that the formation of RTOs resulted in greater efficiencies in the electricity industry, significantly benefited local economies, and, in some cases, kept electricity prices lower than they otherwise would have been. Others conclude that RTO market design and operations have not kept prices to consumers lower, but rather have led to higher consumer prices and higher generator profits. As a way of addressing concerns about whether they have provided benefits, some RTOs have quantified the benefits they believe they have provided to their regions. ISO New England, for example, developed measures related to wholesale electricity prices, power production costs, emissions, and other areas to quantify the value it has provided to New England. According to ISO New England, average wholesale electricity prices in its region, when adjusted for rising fuel costs, have declined from $45.95 per MWh in 2000 to $42.64 per MWh in 2006.
ISO New England reports that over this same period, non-fuel-adjusted prices rose from $45.95 per MWh to $62.74 per MWh. Midwest ISO also recently developed an initiative to quantify its performance. According to its analysis, Midwest ISO has improved electric service reliability and is more efficiently using generation resources, a fact that, along with other factors, has contributed to between $555 million and $850 million in annual net benefits. Midwest ISO is currently soliciting comments from stakeholders on its analysis. We did not analyze or validate either of these efforts. FERC officials believe that RTOs have resulted in benefits to the economy, such as new efficiencies in operating the regional transmission grid; however, it has not conducted an empirical analysis or developed a comprehensive set of performance measures to analyze these benefits. FERC officials told us they consider RTO benefits when they review proposals to create RTOs and approve RTO decisions, such as new markets for electricity and other services. FERC also recently initiated a proceeding to consider specific reforms to RTO markets—for example, considering how to strengthen market monitoring and increase opportunities for long-term power contracts. 
FERC believes RTOs have produced numerous benefits, including the following:
- improving the efficiency of the regional transmission grid, including resolving operating problems such as transmission congestion, providing more efficient transmission pricing policies, and minimizing market power;
- improving transmission reliability by facilitating more accurate calculations of regional transmission capacity;
- improving access to the grid by reducing opportunities for discriminatory transmission practices;
- improving competition in regional power markets by facilitating the entry of new independent generators;
- facilitating stakeholder consensus solutions to regional problems;
- enhancing transparency and oversight regarding how prices are determined and how access to the grid is granted; and
- providing a process of regional transmission planning, thus resulting in more efficient planning and use of resources across a region, as well as an opportunity for input by a broad range of stakeholders.

However, FERC has not conducted an empirical analysis to measure whether RTOs have achieved these expected benefits or how RTOs or restructuring efforts more generally have affected consumer electricity prices, costs of production, or infrastructure investment. FERC believes data exist to support its conclusion that RTOs have provided benefits—for example, data illustrating changes in generating capacity in RTO regions and data about the number of transmission interruptions used by system operators to address congestion. However, FERC has not used these or other available data to analyze whether RTOs have produced benefits. Furthermore, FERC has not reexamined its prospective estimate of the benefits RTOs were expected to produce—estimated in 1999 at $2.4 billion annually in cost savings—to determine whether these expected benefits are actually being realized or how actual outcomes have differed from original estimates.
Some of the projections used to develop this estimate were too conservative, indicating that the estimate is not as reliable as it could be. Rather than incorporating a range of assumptions about future fuel prices to account for uncertainty, the model used one set of fuel price projections that turned out to be lower than what actually occurred. For example, the model’s projections assumed the average price of natural gas delivered to electric generation plants in the United States would rise to $3.25 per million British thermal units (Btu) by 2005. In fact, the actual price rose much faster, reaching $8.50 per million Btu in 2005. Similarly, the model assumed that U.S. electric generation capacity using natural gas and oil as fuel would increase from about 230,000 megawatts in 1997 to about 284,000 megawatts in 2005, but in fact, U.S. electric generation capacity rose to about 440,000 megawatts. FERC officials acknowledge that some of the study’s assumptions were low but maintain that RTOs have provided benefits. Although FERC collects a wide range of data from the RTOs, it has not developed a report or other assessment with comprehensive, standardized measures that Congress and the public could use to identify and track RTO performance. FERC has taken a step in this direction by developing a nonpublic document that provides some standardized measures of RTO market performance, and these measures are also addressed in public reports issued by the RTOs. However, FERC officials explained that these measures were not intended to be used to assess RTO benefits or evaluate the performance of individual RTOs. Moreover, they are not comprehensive, since they do not address the extent to which RTOs have achieved the full range of expected benefits—such as improved reliability, more efficient planning for generation and transmission investments, or prices for consumers that are as low as possible—and do not compare performance between RTO and non-RTO regions. 
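The value of incorporating a range of assumptions, as discussed above, can be illustrated with a simple sensitivity sketch. The linear "savings" function and every number here are hypothetical illustrations and bear no relation to FERC's actual 1999 cost-savings model.

```python
# Sketch of running a benefit estimate under a range of fuel-price
# assumptions rather than a single projection. The linear "savings"
# function and all numbers are hypothetical illustrations, not
# FERC's actual 1999 cost-savings model.

def estimated_savings(gas_price_per_mmbtu):
    """Hypothetical annual savings (billions of dollars) that shrink
    as the assumed natural gas price rises."""
    return max(0.0, 3.0 - 0.2 * gas_price_per_mmbtu)

# A single point estimate at the $3.25/MMBtu assumption...
point = estimated_savings(3.25)
# ...versus a band spanning low- to high-price scenarios:
scenarios = [2.0, 3.25, 5.0, 8.5]
results = [estimated_savings(p) for p in scenarios]
print(f"point estimate: ${point:.2f}B; range: ${min(results):.2f}B to ${max(results):.2f}B")
```

Reporting the band rather than a single point makes explicit how sensitive the conclusion is to the fuel-price assumption, which is the weakness this report identifies in the original estimate.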
FERC also includes some statistics about RTOs on its Web site and in its annual report on the electricity industry, but these data are of limited scope and do not contain measures of operational and market performance. The RTOs themselves publish large volumes of data about market and operational performance in publicly available annual reports and other documents available on their Web sites; however, the large amount of information and, in some cases, its lack of standardization, make it difficult for the public or Congress to easily compare and interpret it. Moreover, FERC has not synthesized these data in a way that allows Congress and the public to draw conclusions about the benefits of RTOs and their effectiveness or discern whether RTOs and organized markets are in their best interest. According to FERC officials, quantitative analyses of whether benefits were achieved and identification of performance measures are not a necessary part of its oversight of RTOs. Rather, FERC officials believe FERC’s continual review of RTO performance—through its evaluation of RTO decisions, proceedings about RTO market reforms, and market monitoring—is sufficient to ensure RTOs continue to benefit consumers as expected. Furthermore, FERC officials cited methodological challenges to performing an empirical analysis of whether benefits were achieved and developing performance measures, which it believes would limit their value. FERC officials also explained that RTO participation is voluntary, and that participants are able to assess for themselves the benefits of RTO membership and join or depart based on their own determination. Experts from the electricity industry and the academic community we spoke with acknowledged that empirical analysis and measures of RTO performance would be methodologically challenging to conduct. 
In particular, these experts noted that there are difficulties in isolating the influence of RTOs on prices, efficiency, and investment from other factors, such as fuel prices. However, these experts observed that tracking performance measures across RTOs would encourage better performance and could identify potential areas for improvement. Some added that, in certain cases, the same measures could be developed for non-RTO regions to provide points of comparison. These experts suggested measuring and providing standardized information to the public on market competitiveness, transmission and generation investment, plant efficiency, reliability, and changes in prices in RTO regions, among other things. Some industry groups have also called for the development of common measures of RTO performance, such as measures to track the difference between generator costs and prices charged in RTO markets, changes in congestion costs over time, and RTO costs of acquiring capital for major investments. Another industry group commissioned an independent study to identify and begin tracking standardized measures of RTO performance. GAO’s Standards for Internal Control identify the value to organizations of comparing actual performance to planned or expected results. More specifically, past GAO work recognizes that federal agencies can use performance information to identify problems in existing programs, develop corrective actions, and identify more effective approaches to program implementation, among other things. By developing standard performance measures that draw upon its own internal analysis or work being conducted by RTOs, industry experts, market monitors, and others, FERC could, over time, develop a more thorough empirical understanding of RTO performance and whether and to what extent RTOs have provided benefits to the industry and to consumers. 
This could help FERC in evaluating the success of the decision to encourage the creation of RTOs and understand whether RTOs have led to the benefits expected of them. Measures may also help FERC determine whether to encourage the creation of additional RTOs or identify areas where its RTO policy and RTOs themselves could be improved. Moreover, if available to Congress and the public, measures could allow FERC to weigh in on the disagreements among experts and industry participants about the benefits RTOs provide. It has been over 10 years since major federal electricity restructuring was introduced and some of the first RTOs were developed to facilitate it, yet there is little agreement about whether restructuring and RTOs have been good for consumers, how they have affected electricity prices, and whether they have produced the benefits FERC envisioned. Compounding this, rising electricity prices and diverse regional interests complicate an unbiased discussion of the merits of RTOs and restructuring. Although there are challenges to answering questions about the benefits of RTOs, a more structured and formalized approach to RTO oversight would be beneficial. FERC’s initial approach to allow a diverse range of RTO types, governance structures, and rate recovery mechanisms provided a means for regions to quickly build upon existing institutions like power pools and past participant experience working together. However, much has changed since the first RTOs came into existence, and it has become clear that FERC’s efforts to regulate RTOs as it does utilities may no longer be sufficient. Furthermore, the specific characteristics of RTOs devised by FERC and its expectation that these entities would lead to lighter regulation by FERC give RTOs a unique position in the electricity industry. Some RTO functions, such as operating the transmission grid, typically fell within the purview of utilities. 
Others, including market monitoring and balancing different stakeholder interests, were more traditionally performed by regulators. As a result of this unique set of responsibilities, RTOs face much public scrutiny—something RTOs have implicitly embraced in part through their varied stakeholder processes—and may require different oversight by FERC. Although stakeholders told us they value the stakeholder process at each of the RTOs, the concerns they raised about its resource intensiveness and the challenges involved in analyzing RTO decisions highlight the importance of FERC involvement and oversight. In this regard, without more regular, consistent review of RTO expenses and budgets, FERC may be missing an opportunity to better ensure the cost-effectiveness of RTOs and that their rates remain just and reasonable, even between rate proceedings. Furthermore, FERC’s lack of regular review of RTO financial reports, filed annually in the Form No. 1, limits its ability to ensure RTO expenses are accurately and completely reported and reassure Congress, industry participants, stakeholders, and the public that the billions of dollars in expenses RTOs have incurred in recent years were reasonable and spent in accordance with budgets previously approved. Finally, while FERC believes RTOs have produced numerous benefits, the fact that it has not developed a comprehensive set of publicly available standardized measures to track RTO performance contributes to uncertainty about what those benefits have been and their magnitude. We acknowledge that FERC’s review of RTO decisions that affect electricity prices and consideration of stakeholder comments and complaints sometimes results in new rules designed to improve the ability of RTOs to deliver benefits to their regions. 
However, in the absence of such measures, FERC may be missing opportunities to facilitate improvements in RTO operations and markets and is not as strongly positioned as it could be to evaluate the success of its decision to encourage the creation of RTOs and determine whether to encourage further RTO development. To help ensure that FERC, industry participants, and the public have adequate information to inform their assessment of whether rates to recover RTO expenses are just and reasonable, we recommend the Chairman of FERC take the following two actions: (1) develop a consistent approach for regularly reviewing expense information contained in RTO budgets and (2) routinely review and assess the accuracy, completeness, and reasonableness of the financial information RTOs report to FERC in their Form No. 1 filings. To provide a foundation for FERC to evaluate the effectiveness of its decision to encourage the creation of RTOs and help Congress, industry stakeholders, and the public understand RTO performance and net benefits, we recommend the Chairman of FERC take the following two actions: (1) work with RTOs, stakeholders, and other experts to develop standardized measures that track the performance of RTO operations and markets and (2) report the performance results to Congress and the public annually, while also providing interpretation of what the measures and reported performance communicate about the benefits of RTOs and, where appropriate, changes that need to be made to address any performance concerns. We provided FERC a draft of this report for review and comment. In a letter dated August 28, 2008, we received written comments from the Chairman of FERC. These comments are reprinted in appendix IX. We also received technical comments, which we incorporated into the report as appropriate. In his letter, the Chairman generally agreed with our report and its recommendations. 
We commend FERC for its interest in addressing the concerns we raised. The Chairman also provided comments in response to each of the recommendations and outlined plans to address them. Specifically: Regarding our first recommendation, that FERC develop a consistent approach for regularly reviewing expense information contained in RTO budgets, FERC agreed to increase its efforts to review RTO budgets and the reasonableness of RTO costs, and the Chairman has directed FERC staff to evaluate possible approaches for doing so. Regarding our second recommendation, that FERC perform additional review of the financial information in Form No. 1 filings, FERC indicated that, in addition to the one audit it has already begun, it plans to perform periodic audits of the financial information in Form No. 1 filings in the future. Regarding our third and fourth recommendations, that FERC work with RTOs, stakeholders, and other experts to develop standardized measures that track the performance of RTO operations and markets and report on those measures to Congress and the public, the Chairman noted that FERC is considering appropriate procedures for developing such measures and how best to report them. Regarding reporting, the Chairman observed that RTO “State of the Market” annual reports may be a vehicle for providing data and additional information to the public on RTO performance. While we agree that these annual reports of data on RTOs could be helpful for providing the public with additional performance information, we urge the Commission to consider what role it can play in helping Congress, industry stakeholders, and the public interpret and evaluate data and other information from RTOs in order to draw conclusions about RTO performance and value. It is clear that electricity markets and RTO operations are complex. 
FERC’s expertise and independence make it well positioned to help Congress and others assess RTO performance and net benefits, and its oversight authority gives it the ability to use this information to encourage continued improvement. The Chairman also expressed uncertainty about whether annual evaluation of results and recommendations for change was feasible or cost-effective. We recognize that FERC must balance numerous responsibilities and that the extent of its evaluation of RTO performance may vary from year to year. However, we believe significant value could be realized from (1) providing Congress and others with a consistent, annual source of data for tracking the performance of RTOs and (2) ongoing analysis of performance information and consideration of how it could aid FERC in carrying out its RTO responsibilities. Finally, along with its general agreement with our recommendations, FERC provided two clarifying comments. The first clarifies FERC’s role in approving RTO procedures for planning transmission infrastructure, and we incorporated this comment into our report. In the second, FERC commented on a statement in our draft report’s conclusions that RTOs are in a position of greater public trust than utilities. FERC observes that all utilities have a position of public trust and that a number of utilities are responsible for administering transmission systems that are as large as or larger than those of some RTOs. We agree that all utilities carry out important activities in the public interest that necessitate vigilant regulatory oversight and acknowledge that a number of large utilities exist. However, we also recognize that FERC had a number of unique expectations for RTOs that it did not have for utilities, believing the creation of RTOs could lead to lighter regulation by FERC. 
For example, FERC expected RTOs to assist it in its oversight of the electricity industry through, among other things, their market monitoring activities and the stakeholder process in which market development and other issues are discussed and potentially resolved without resorting to FERC’s complaint process. It is for these reasons that we believe FERC should take certain regulatory steps specific to RTOs like those we recommend in our report—for example, evaluating RTOs using performance measures—in order to improve RTOs and educate the public on their performance. However, in response to FERC’s comments, we revised the report’s conclusions to emphasize the unique role of RTOs and avoid relative comparisons of trust between RTOs and utilities. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Chairman of FERC; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web Site at http://www.gao.gov. If you or your offices have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. 
At the request of the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs, we reviewed (1) Regional Transmission Organizations' (RTO) key expenses and investments in property, plant, and equipment; (2) how RTOs and the Federal Energy Regulatory Commission (FERC) review RTO expenses and decisions that may affect electricity prices; and (3) the extent to which there is consensus about what benefits RTOs have provided. Our review focused on the six RTOs in FERC's jurisdiction—California Independent System Operator (ISO), ISO New England, Midwest ISO, New York ISO, PJM Interconnection (PJM), and Southwest Power Pool. To determine the total expenses incurred by RTOs from 2002 to 2006, the most recent data available when we began our review, and their key investments in property, plant, and equipment, we reviewed independent public auditor reports over this period, as well as full-time-equivalent personnel and transmission volume as reported to us by the RTOs. We summarized expenses, personnel, transmission volume, and property, plant, and equipment balances by RTO and calculated average salary and related benefits per full-time equivalent and total expenses per megawatt hour (MWh) from 2002 through 2006 for each RTO. Our analysis reflects total annual expenses as reported in the RTOs' annual audited financial statements. We did not retroactively apply financial statement reclassifications to data from prior years. In addition, RTOs used differing billing methodologies, and consequently, the rates they charged to market participants may differ from the total expenses per MWh calculated in our analysis. To illustrate the total amount of investments in property, plant, and equipment as of December 31, 2006, we used total property, plant, and equipment in our analysis without reducing those amounts by accumulated depreciation. We also reviewed 2006 RTO FERC Form No. 
1 filings, the most current available at the time of our audit, to determine the amount of RTO expenses attributable to transmission expenses and regional market expenses, as well as administrative and general expenses. Independent public auditor reports did not aggregate expenses by these categories. We adjusted all expense amounts for inflation utilizing 2007 as the base year. To determine how FERC and RTOs review RTO expenses and decisions and discuss other aspects of RTO costs and benefits, we collected general information, interviewed representatives from the six RTOs, and spoke to the ISO/RTO Council about how FERC and the RTOs review proposed budget expenses and consider how RTO decisions affect electricity prices. For two RTOs—ISO New England and Midwest ISO—we collected more in-depth information and interviewed stakeholders from each of the major stakeholder sectors. We selected these two RTOs because they are multistate and perform a breadth of functions and services, but also reflect geographical and historical differences. For example, ISO New England evolved from a power pool; Midwest ISO did not. We interviewed state agency officials from these RTO areas, including state regulatory agencies (such as the Connecticut Department of Public Utility Control, Illinois Commerce Commission, Indiana Utility Regulatory Commission, Maine Public Utilities Commission, and Massachusetts Department of Public Utilities), state consumer agencies (such as the Connecticut Office of Consumer Counsel and Maine Office of the Public Advocate), and state regulatory associations (such as the Organization of MISO States, National Association of Regulatory Utility Commissioners, and the New England Conference of Public Utility Commissioners). We also interviewed representatives from each of these RTOs’ stakeholder groups to understand how FERC and RTOs review RTO decisions and expenses. 
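The two expense normalizations described above (total expenses per megawatt hour of transmission volume, and restatement of nominal expenses in constant 2007 dollars) can be sketched as follows. This is an illustrative sketch only; the price-index values and dollar figures below are hypothetical placeholders, not the actual deflator series or RTO data used in our analysis.

```python
# Illustrative sketch of the expense normalizations described above.
# The price index values and dollar figures are hypothetical.

# Annual price index with 2007 as the base year (illustrative values)
PRICE_INDEX = {2002: 88.6, 2003: 90.4, 2004: 92.9, 2005: 95.9, 2006: 98.9, 2007: 100.0}

def to_2007_dollars(nominal_dollars, year):
    """Restate a nominal-dollar amount in constant 2007 dollars."""
    return nominal_dollars * PRICE_INDEX[2007] / PRICE_INDEX[year]

def expenses_per_mwh(total_expenses, transmission_mwh):
    """Total annual expenses divided by annual transmission volume."""
    return total_expenses / transmission_mwh

# Hypothetical example: $120 million of 2004 expenses over 250 million MWh
real_2004 = to_2007_dollars(120_000_000, 2004)
rate = expenses_per_mwh(real_2004, 250_000_000)  # dollars per MWh, in 2007 dollars
```

Restating every year's expenses against a common base year, as we did with 2007, is what allows expense levels from 2002 through 2006 to be compared directly across years and RTOs.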
We interviewed officials from the North American Electric Reliability Corporation to understand their interaction with RTOs. We spoke with officials from FERC’s Office of Enforcement and Office of Energy Market Regulation and reviewed related documentation that outlined FERC’s steps to review RTO expenses for reasonableness and accuracy. We reviewed selected FERC rate proceedings to better understand the type of information provided to FERC about proposed RTO expenses and the analysis it performs. We also considered FERC’s process for reviewing actual expenses as reported in FERC Form No. 1 filings and reviewed FERC audits of RTOs conducted in 2004 which focused primarily on governance. While we generally reviewed FERC’s oversight of RTOs, we did not perform an in-depth analysis of FERC’s review of specific RTO decisions. Finally, to address the extent to which there is consensus about what benefits RTOs have provided, we interviewed FERC officials and reviewed related documentation, including FERC’s 1999 prospective assessment of RTO expected benefits. We interviewed several experts in the field of electricity restructuring to discuss their opinions on the benefits and costs of RTOs and their assessment of the adequacy of FERC’s analysis of RTOs to date. These included experts from the Analysis Group, Cornell University, Northeastern University, Penn State University, the University of California Berkeley, and Vermont Law School. We chose experts affiliated with academic institutions and research firms with extensive knowledge of electricity restructuring and RTOs. We selected experts with a balanced range of views about the economic benefits of RTOs. 
We also interviewed a number of industry participants, including representatives from electricity industry associations and consumer organizations, such as the American Public Power Association, Compete Coalition, Consumer Federation of America, Electric Power Supply Association, Edison Electric Institute, Electricity Consumers Resource Council, Industrial Energy Consumers of America, National Rural Electric Cooperative Association, and Public Citizen, to more fully understand where there was agreement and disagreement about the costs and benefits of RTOs. We reviewed reports and analyses from these and other industry participants that discussed the costs and benefits of RTOs. We also reviewed expert studies on the economic effects of restructuring and competition on the electricity industry and electricity consumers. In deciding which studies to include in our summary table, we selected studies sponsored by advocates as well as by critics of the existing RTOs, along with studies that are more academic in nature. Some of these studies specifically addressed the impact of RTOs on electricity costs and prices, while others addressed the impacts of restructuring and competition more generally, without specifically isolating the impact of RTOs. We conducted basic analyses of data on electricity prices, intensity of the use of generation resources (capacity factors), and type of generation resources (by fuel use). For the analysis of prices and capacity factors, we divided states into four categories: (1) original RTO states—states joining an RTO in 1999 or earlier and historically in a power pool, (2) new RTO states—states joining an RTO region after 1999, (3) non-RTO states—states outside RTO regions, and (4) California. The original RTO states category included Connecticut, Delaware, Massachusetts, Maryland, Maine, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont, and the District of Columbia. 
The new RTO states category included Iowa, Illinois, Indiana, Kansas, Michigan, Minnesota, Missouri, North Dakota, Ohio, Oklahoma, Virginia, Wisconsin, and West Virginia. The non-RTO states category included Alaska, Alabama, Arkansas, Arizona, Colorado, Florida, Georgia, Hawaii, Idaho, Kentucky, Louisiana, Mississippi, Montana, North Carolina, Nebraska, New Mexico, Nevada, Oregon, South Carolina, South Dakota, Tennessee, Utah, Washington, and Wyoming. We placed California in a separate category because its electricity industry went through a turbulent restructuring process during part of the time period that we analyzed. We did not include Texas in our analysis, because most of the state constitutes a separate grid from the two other main grids in the United States and is largely unregulated by FERC. For the other three groupings, states that were partially in an RTO region were considered part of the region if electricity for most major cities was provided by a utility that participated in an RTO. Our analysis was based on electricity data obtained from the Energy Information Administration. For the price analysis, we used electric power retail sales and electric revenues data. We developed average price estimates by aggregating state-level data, dividing revenues by sales, and adjusting for inflation using the gross domestic product price index. We focused on prices in the industrial sector because the retail portion of its electricity prices is typically smaller than the retail portion of residential and commercial electric prices. RTOs operate wholesale markets and do not determine the retail portion of electric prices. We also conducted a specific analysis of relative industrial electricity prices. A description of that analysis and our methodology is presented in appendix VII. 
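The average-price construction described above (aggregate state-level revenues and sales, divide to get an average price, then deflate with the gross domestic product price index) can be sketched as follows. The state figures and index values are hypothetical, not the Energy Information Administration data we used.

```python
# Illustrative sketch of the average real price calculation described above.
# State revenues/sales and the GDP price index values are hypothetical.

# state -> (electric revenues in dollars, retail sales in MWh) for one year
STATE_DATA = {
    "StateA": (450_000_000, 9_000_000),
    "StateB": (300_000_000, 5_000_000),
}

GDP_PRICE_INDEX = {2006: 96.8, 2007: 100.0}  # illustrative, 2007 = base year

def avg_real_price(states, data, year, base_year=2007):
    """Aggregate revenues and sales across states, divide to get an
    average nominal price in dollars per MWh, then adjust for inflation."""
    revenues = sum(data[s][0] for s in states)
    sales = sum(data[s][1] for s in states)
    nominal = revenues / sales
    return nominal * GDP_PRICE_INDEX[base_year] / GDP_PRICE_INDEX[year]
```

Grouping states before aggregating, as in our four categories (original RTO, new RTO, non-RTO, and California), yields one average price series per category.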
For the analysis of the intensity of the use of generation resources, we calculated capacity factors from Energy Information Administration state-level data on electric power generation capacity and actual generation. We also interviewed representatives from the Energy Information Administration to understand the type of data that agency collects related to estimating the benefits and costs of RTOs. We conducted this performance audit from October 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We provided a draft of this report to FERC for its review. FERC's comments are reprinted in appendix IX.

Appendix II: RTO Characteristics and Functions Required by FERC Order 2000

RTOs must be independent of control by any market participant and have the authority to propose rates, terms, and conditions of transmission services provided over the facilities they operate. An RTO's employees must not have a financial interest in any market participant.

RTOs must serve an appropriate region of sufficient scope to maintain reliability, support efficient and nondiscriminatory power markets, and carry out their other functions.

RTOs must have operational authority for all transmission facilities under their control.

RTOs must have exclusive authority for maintaining the short-term reliability of the grid they operate.

RTOs must administer their own transmission tariff—an agreement that outlines the terms and conditions of transmission service—and employ a transmission pricing system that promotes efficient use and expansion of transmission and generation facilities. 
RTOs must ensure the development and operation of market mechanisms to manage transmission congestion. These mechanisms should accommodate broad participation by all market participants and provide transmission customers with efficient price signals.

RTOs must develop and implement procedures to address engineering and reliability problems caused by parallel path flows—a term that refers to electricity flowing over all possible transmission lines regardless of who owns the lines and what transmission contracts were agreed to. According to FERC, prior to RTOs many transmission owners found their grids overloaded by the actions of others because of this engineering reality. Since they were unable to determine the responsible party, these owners had to curtail their own use of their grid.

RTOs must serve as the provider of last resort for ancillary services—services to maintain the reliable operation of the transmission system—and have the authority to decide the minimum required amounts of each ancillary service. RTOs must also ensure that transmission customers have access to a real-time balancing market.

RTOs must be the single administrator for the Open Access Same Time Information System (OASIS) site—an Internet-based electronic communication and reservation system through which transmission providers provide information about the availability and price of transmission and ancillary services and customers procure those services. Furthermore, RTOs must independently calculate total and available transmission capacity—measures of the amount of electric power that the transmission system is capable of transferring from one point in the grid to another.

RTOs must provide for objective monitoring of the markets they administer to identify market design flaws, market power abuses, and opportunities for efficiency improvements. 
RTOs must be responsible for planning and directing necessary transmission expansions, additions, and upgrades that will enable them to provide efficient, reliable, and nondiscriminatory service. In doing so, they must coordinate such efforts with appropriate state authorities and must encourage market-driven operating and investment actions for preventing and relieving congestion.

RTOs must ensure the integration of reliability practices across regions.

Appendix V: Inflation-Adjusted RTO 2006 Expenses Reported on FERC Form No. 1 (Dollars in thousands)

Appendix VI: Investment in Property, Plant, and Equipment for RTOs as of December 31, 2006 (Dollars in thousands)

As part of our effort to examine trends in state-level prices for industrial customers, we created indexes of prices at the state level. The indexes reflect the average of electricity prices paid by industrial customers, divided by the comparable national average price. As such, a state with an index greater than 1.0 would indicate that the state price was greater than the national average and vice versa. Such an approach focuses attention on how prices compare to the national average and how the different states' standing relative to the national average changes over time. This approach also avoids the necessity of deciding which deflator is most appropriate for adjusting nominal electricity prices for inflation. To examine the trends in these indexes for the different regions of the country according to their RTO affiliations, we created weighted average indexes consistent with our RTO classifications described in appendix I. We chose to include Texas in this analysis for purposes of comparison. We obtained a weighted average by multiplying each state's index for a given year by the share of its retail sales of electricity to industrial customers relative to its group's total, and then summing up the resulting multiples for all the states in a given group. 
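The indexing and weighting steps described above can be sketched as follows; the prices and sales volumes below are hypothetical, not the data underlying our analysis.

```python
# Illustrative sketch of the relative price index described above.
# Prices and sales volumes below are hypothetical.

def state_index(state_price, national_price):
    """Index above 1.0 means the state's price exceeds the national average."""
    return state_price / national_price

def group_index(prices, sales, national_price):
    """Weighted average of state indexes, weighting each state by its
    share of the group's retail sales to industrial customers."""
    total_sales = sum(sales.values())
    return sum(
        state_index(prices[s], national_price) * sales[s] / total_sales
        for s in prices
    )

prices = {"StateA": 60.0, "StateB": 48.0}            # $/MWh (hypothetical)
sales = {"StateA": 9_000_000, "StateB": 5_000_000}   # MWh (hypothetical)
relative = group_index(prices, sales, national_price=55.0)
```

Computing this group index for each year traces how a group's standing relative to the national average changes over time, without having to choose a deflator for inflation adjustment.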
The results of this effort are reasonably consistent with the results of the basic price analysis reflected in figure 10 of the report. This analysis provides additional insights into price trends over the period of analysis. For example, it shows that from about 1997 through 2002, the original and new RTO states witnessed relative price decreases compared to the non-RTO group. Further, it appears that from 2002 through the most recent data in 2007, the original RTO states also witnessed relative price increases that effectively erased the decline in prices from 1997 through 2002. In this analysis, prices in the original RTO states in 2007 were higher, in a relative sense, than they were prior to restructuring in 1997. Industrial prices in Texas, a market generally not overseen by FERC, have risen notably in relative terms since the introduction of restructuring. It is important to note that this analysis provides a look at price trends and does not provide any indication that RTOs caused or even influenced these trends. Notably, both the original RTO states and Texas are highly reliant on natural gas, the prices of which have increased dramatically in recent years.

Restructuring and competition in New England resulted in relatively small savings in the capital and operating costs of wholesale electricity. No specific analysis of the impact of wholesale cost savings on consumer prices. Sponsored by an electricity-generating company. Estimated that restructuring and competition resulted in an expected 2 percent savings in wholesale electricity costs for New England from 2002 to 2018. Net benefits estimate based on comparing model simulations of capital and operating costs of the restructured electric industry in New England with simulations of investments and operating costs in a "counterfactual" case with more traditional regulation and without industry restructuring. 
Attributed very significant benefits to greater nuclear plant efficiency from restructuring and competition.

Restructuring has been beneficial to companies that restructured, but the evidence regarding the impact of RTOs on consumers is far less clear. Constructed an economic and statistical model to study the impact of various elements of retail and wholesale restructuring on the price-cost markup of electricity-generating companies. Asserted that restructuring was beneficial to companies that restructured, based on the conclusion that 2 to 3 cents per kilowatt-hour of the difference between prices and costs was explained by restructuring rather than increases in fuel prices.

The study finds no evidence that RTO formation or industry restructuring explains price differences among regions of the country. Compared actual average retail industrial electricity prices with model-predicted prices in states classified as restructured and nonrestructured in 2001-2003. Concluded that prices were lower than predicted in two-thirds of restructured states and in about one-quarter of nonrestructured states. Concluded also that regulatory reform at neither the retail nor wholesale levels (RTO participation) was a significant driver of the difference in price trends.

Consumers in the Eastern Interconnect region (entire United States except 11 Western states and Texas) benefited from large savings in the cost of utility wholesale purchases of electric power. Commissioned by private energy companies. Concluded that wholesale competition in the electricity industry in the Eastern Interconnect region resulted in large net economic benefits and that RTOs contributed significantly to the realization of these benefits. Used a computer model to simulate wholesale electricity production costs for 1999-2003 under two scenarios: simulating (1) actual restructuring events over 1999-2003 and (2) the absence of procompetitive FERC reform over the same period. 
Concluded that procompetitive reforms resulted in about $15 billion net savings. Savings largely driven by dramatically improved efficiencies of power plants. Also specifically estimated large net economic benefits from expansion of the PJM Interconnect in 2004, supporting the conclusion that RTO formation and operations played an important role in realizing the benefits of competition.

Average retail prices are slightly lower per megawatt hour for PJM and New York ISO residential consumers than if coordinated markets had not been implemented. Commissioned by PJM. Used several statistical economic models to isolate the impact of electricity restructuring from several other variables that affect electricity prices. All model specifications indicated somewhat lower prices associated with restructuring. Concluded that while current RTO markets are imperfect, they have provided material benefits to consumers.

LMP markets in RTOs have not delivered benefits to consumers in ISO New England and PJM; resource owners have reaped windfall profits. Commissioned by the American Public Power Association. Concluded that location-based pricing of RTO markets like PJM and ISO New England represented the best approach available for operating large, interconnected power pools efficiently and reliably. Also concluded that the benefits of this form of pricing have been limited because markets are based on bids rather than costs and lack perfect competition. Further, this pricing mechanism in the PJM and ISO New England markets resulted in windfall profits for resource owners without benefits to consumers. Found no evidence of this form of pricing improving the pattern of investments in the industry.

Large savings in wholesale electricity costs in New England and in ratepayers' bills, and other benefits including service reliability, lower emissions, and greater demand response. 
Summarized unpublished ISO New England analyses that estimated RTO benefits in different aspects of electricity service in New England. Estimated average annual wholesale market savings of about $850 million from 2000 to 2006, equivalent to an approximate net monthly savings of $4 for the average New England ratepayer. Quantified other RTO benefits, such as lower emissions of certain pollutants. Concluded that ISO New England had a significant role in enhancing the reliability and efficiency of the region’s electricity industry and can help achieve the region’s environmental goals by enabling the interconnection of low-carbon-emitting resources, benefit the region’s electricity consumers, improve planning, and more. Lower prices for residential and industrial consumers. Constructed an economic and statistical model to study the effects of retail and wholesale competition on electricity prices for residential and industrial consumers, using the share of electricity generated by unregulated generators in a state as a proxy measure for the effect of wholesale restructuring. Concluded that greater activity in a state’s wholesale electricity market is associated with lower prices for residential and industrial consumers, supporting the study’s view that RTOs improved industry performance. Found no reliable or convincing evidence that consumers are better off as a result of restructuring the U.S. electric power industry. No data analysis conducted (review of other studies). Commissioned by the American Public Power Association, reviewed 12 studies on the economic impact of restructuring in the U.S. electricity industry. Identified serious weaknesses in all 12, concluding that the methodologies consistently fell short of the standards for good economic research. Most also failed to fully address the effects of restructuring. 
Large net economic benefits in the Midwest ISO region in various aspects of electricity services; no specific analysis of how benefits affect consumer prices. Size, duration, cost, and probability of electricity outages; measures of the use of electricity generation capacity and of the cost of reserve generation capacity; RTO administrative and operating costs; etc. Summarized Midwest ISO and consulting firm studies that used different approaches to estimating the economic impact of Midwest ISO operations in several areas. Concluded that $555 million to $850 million in annual net economic benefits for the region resulted from more efficient use of the industry’s resources (generation and transmission assets), more reliable service, and improved planning and investment patterns. Pointed to unquantified benefits related to greater price transparency, regulatory compliance, and improved opportunities for demand response and renewable resources. No conclusions on whether RTOs yielded net economic benefits or whether retail consumers were benefiting from RTOs. Prepared for the National Rural Electric Cooperative Association and intended to provide insight into RTO performance in various areas. Stated that many industry stakeholders were concerned that no single reference document was available for RTO statistics to objectively analyze RTO and RTO market performance. Consolidated data from different sources to make performance comparisons across RTOs. Mentioned areas of strength of individual RTOs and expressed concern, particularly about market power, demand response, and investments. Restructuring electricity markets at least so far has resulted in no discernible benefits to consumers of electricity. Commissioned by the Virginia State Corporation Commission. Addressed retail and wholesale restructuring. 
Recognized that RTOs’ “marginal cost” pricing is needed for an efficient market under competitive conditions, but expressed concern that RTO markets were not sufficiently competitive because consumers had very limited ability to respond to high prices by reducing demand and because of evidence of market power on the supply side. Restructuring and competition resulted in significant reductions in the prices consumers pay for electricity. Used a comparison of prices for 1997 and 2002, assuming that prices were lower in 2002 due in large part to restructuring. Estimated that PJM electricity consumers saved about $3.2 billion in 2002 from restructuring, equivalent to about 15 percent of their electricity bills that year. For comparison, the 2007 average retail price of electricity was about 9 cents per kilowatt-hour (see fig. 9).

Blumsack, Lave, and Apt, Electricity Prices (2008), p. 24: “Overall, simply joining an RTO has had little effect on price-cost markups, although the combination of RTO membership and retail competition appears to dampen the increase in price-cost margins.”

In addition to the individual named above, Jon Ludwigson, Assistant Director; Pedro Almoguera; Dan Egan; Philip Farah; N’Kenge Gibson; Paige Gilbreath; Randy Jones; Jennifer Leone; Ying Long; Alison O’Neill; Glenn Slocum; Barbara Timmerman; Walter Vance; and George Warnock provided significant contributions.
In 1999, as a part of federal efforts to restructure the electricity industry, the Federal Energy Regulatory Commission (FERC) began encouraging the voluntary formation of Regional Transmission Organizations (RTO)--independent entities to manage regional networks of electric transmission lines. FERC oversees six RTOs that cover part or all of 35 states and D.C. and serve over half of U.S. electricity demand. As electricity prices increase, stakeholders-- organizations and individuals with financial and regulatory interest in the electricity industry--have voiced concerns about RTO benefits and how RTO expenses and decisions influence electricity prices. GAO was asked to review (1) RTO expenses and key investments in property, plant, and equipment from 2002 to 2006, the most current data available; (2) how RTOs and FERC review RTO expenses and decisions that may affect electricity prices; and (3) the extent to which there is consensus about RTO benefits. To do so, GAO reviewed documentation and data and spoke with FERC officials and experts. RTO expenses and investments in property, plant, and equipment vary, depending on the size of the RTO and its functions. Expenses for the six RTOs FERC oversees totaled $4.8 billion from 2002 to 2006, and property, plant, and equipment investments totaled $1.6 billion as of December 2006. RTOs and FERC rely on stakeholder participation to identify and resolve concerns about RTO expenses and decisions that affect electricity prices, such as decisions about reliability and whether to develop markets for electricity and other services. The stakeholders GAO spoke with in two RTO regions value the opportunity for input but have concerns about the resources and information required to participate. Moreover, although regular review of RTO budgets could help FERC with its responsibility to ensure RTO rates remain just and reasonable or determine if a new rate proceeding is needed, FERC's review of RTO budgets varies. 
Furthermore, while FERC requires RTOs to report actual expenses annually, it does not regularly review this information for accuracy or reasonableness and is at risk of using and providing to the public inaccurate and incomplete information. FERC officials, industry participants, and experts lack consensus on whether RTOs have brought benefits to their regions. Many agree that RTOs have improved the management of the transmission grid and improved generator access to it; however, there is no consensus about whether RTO markets provide benefits to consumers or how they have influenced consumer electricity prices. FERC officials believe RTOs have resulted in benefits; however, FERC has not conducted an empirical analysis of RTO performance or developed a comprehensive set of publicly available, standardized measures to evaluate such performance. Without such measures, FERC will remain unable to demonstrate the extent to which RTOs provide consumers and others with benefits--information that could aid FERC in its evaluation of its decision to encourage the creation of RTOs and help address divisions about which benefits RTOs have provided.
Broadband speeds are described in upload and download capabilities measured by the number of bits of data transferred per second and include kilobits (1 thousand bits per second), megabits (1 million bits per second), and gigabits (1 billion bits per second). Download speed refers to the speed at which data is transferred from the Internet to the consumer. Upload speed refers to the speed at which data is transferred from the consumer to the Internet. FCC currently considers speeds of 4 megabits per second (Mbps) download and 1 Mbps upload or greater to be broadband. The speeds required by small businesses vary depending on how the business uses its Internet connection, the number of users, and the number of applications running concurrently, among other factors. Examples of uses supported by different download speeds are described in table 1.

Broadband service is provided through a variety of technologies, including:

Digital subscriber line (DSL). This service is delivered by local telephone companies over their copper-wire telephone networks used by traditional voice service.

Cable modem. This service is delivered by cable operators through the same coaxial cables that deliver sound and pictures to television sets.

Fiber optic. Fiber optic technology converts electrical signals carrying data to light and sends the light through transparent glass fibers about the diameter of a human hair.

Satellite. This wireless service transmits data to and from subscribers through a receiver dish to a satellite in a fixed position above the equator, eliminating the need for a copper wire or coaxial cable connection.

Wireless. Land-mobile or terrestrial broadband service that connects a business or home to the Internet using a radio link.

Broadband access can be shared or dedicated. Shared access means users share the connection to the Internet, and thus speeds can be variable based on the number of users accessing the network at one time. 
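A small sketch can make the unit arithmetic and the shared-access behavior described above concrete; the link capacity, file size, and user count below are illustrative values, not figures from this report:

```python
# Broadband speed-unit arithmetic and the effect of shared access.
# Illustrative values only.

KILO, MEGA, GIGA = 10**3, 10**6, 10**9  # bits-per-second multipliers

def download_seconds(file_megabytes, speed_bps):
    """Time to transfer a file at a given line rate (1 byte = 8 bits)."""
    return file_megabytes * MEGA * 8 / speed_bps

# A 50 MB file at FCC's 4 Mbps broadband benchmark:
print(download_seconds(50, 4 * MEGA))  # 100.0 (seconds)

def shared_rate(capacity_bps, concurrent_users):
    """On a shared link the effective per-user rate falls as concurrent
    users rise; a dedicated line keeps its guaranteed rate."""
    return capacity_bps / max(concurrent_users, 1)

# The same file on a 30 Mbps shared link with 10 simultaneous users:
print(download_seconds(50, shared_rate(30 * MEGA, 10)))  # about 133.3 seconds
```

The same arithmetic explains why a dedicated line, whose rate does not depend on neighboring users, can matter for businesses moving large or time-sensitive data.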
Dedicated access provides a reliable point-to-point connection with guaranteed speeds. For some small businesses that need to send sensitive or large amounts of data, such as financial institutions or medical centers, a dedicated connection or special access line may be beneficial. However, most small businesses do not need such a connection.

Broadband Internet access is widely available throughout the United States to both residences and businesses. According to the National Broadband Map, which measures national access to broadband, as of December 2012, approximately 98 percent of the U.S. population had access to wireline or wireless broadband service of 3 Mbps download and 768 kilobits per second (kbps) upload. A study completed for SBA’s Office of Advocacy in 2010 similarly found that over 96 percent of urban small businesses and approximately 92 percent of rural small businesses reported access to wireline or wireless broadband. Still, some areas of the United States remain underserved or unserved by broadband infrastructure. Service gaps exist primarily in nonurban areas. For example, according to data used in the National Broadband Map, nearly 100 percent of urban residents have access to 3 Mbps or higher download speeds, and about 94 percent of nonurban residents have access to such speeds. Likewise, wireline broadband access is available to 99 percent of urban populations and 82 percent of nonurban populations. The unserved and underserved areas that remain in the United States tend to be where conditions increase the cost of broadband deployment, and the difficulty in recouping deployment costs makes it less likely that a service provider will build out or maintain a network. These conditions include:

Low population. The limited number of potential subscribers in an area makes it difficult for providers to recoup the costs of building a network.

Difficult terrain. Challenging terrain, such as mountains, may increase construction costs for wireline service and can affect wireless service by creating physical barriers or otherwise limiting the ability to transmit data.

Natural disasters. Areas that experience severe weather or natural disasters may lose broadband access temporarily, which increases costs because of the need to repeatedly repair or replace infrastructure. For instance, Hurricane Sandy took down a service provider’s copper lines that provided DSL service for Fire Island, New York. The service provider decided to replace the copper lines with a more costly but resilient fiber network.

The definitions of unserved and underserved were part of a Notice of Funds Availability announced by NTIA and designed to implement grant programs under the American Recovery and Reinvestment Act of 2009 (74 Fed. Reg. 33104, July 9, 2009). The speeds in these definitions are much lower than the FCC’s broadband benchmark of 4 Mbps download and 1 Mbps upload. According to the Notice of Funds Availability, an unserved area is one in which at least 90 percent of households cannot subscribe to the minimum broadband speed and service, defined as advertised speeds of at least 768 kbps download and at least 200 kbps upload. An underserved area is one in which (1) 50 percent or less of households have access to the minimum broadband speed, (2) no provider offers service speeds of at least 3 Mbps, or (3) 40 percent or less of the households choose to subscribe to a broadband service. The availability of, or adoption rates for, satellite broadband service are not considered in determining whether an area is unserved or underserved.

SBA does not provide funding for broadband deployment. However, it does provide funding to nonprofit Small Business Development Centers (SBDC). SBDCs provide training and education to encourage greater use of broadband. 
SBA also supports research by its Office of Advocacy on the use and availability of broadband. Congress appropriated funds for the BTOP and BIP programs under the American Recovery and Reinvestment Act of 2009 (Pub. L. No. 111-5, 123 Stat. 115 (2009)). All funds were obligated prior to the end of fiscal year 2010. In the joint Notice of Funds Availability, NTIA and RUS provided that projects should be completed within 3 years of receiving an award. 74 Fed. Reg. 33104, July 9, 2009. As part of BTOP, in support of broadband adoption, NTIA awarded grants to public computing centers and sustainable broadband adoption projects that funded access to broadband, computer equipment, and job training. The Connect America Fund is part of ongoing Universal Service Fund reform aimed at eventually replacing existing high-cost support mechanisms. The high-cost program within the Universal Service Fund (USF) provides subsidies to telecommunications carriers that serve rural and other remote areas with high costs of providing telephone service. GAO has ongoing work on the USF reforms and their impact on broadband deployment and other issues. Some municipalities also support broadband deployment by funding, building, and operating networks to provide broadband access to their communities, much as some cities offer utilities such as water and electricity. The municipal entity providing this service may be, for example, a department within the city government, or a cooperative formed among several communities. Communities have used federal funds, issued bonds, and taken out loans to fund the construction of municipal broadband networks. In some instances, voter referendums have been required for the city to take out loans or bonds for this purpose. Municipal networks have achieved varying degrees of public acceptance and financial success. In some communities, these networks have been welcomed because they are the only broadband service provider. 
In other communities, the municipality functions as a competitor to cable and DSL providers, and lawsuits have been filed by incumbent service providers to prevent municipalities from building networks. Some states have passed legislation to prevent communities from becoming service providers. Nevada, for instance, prohibits cities with a population of 25,000 or more from selling telecommunications services to the general public. Nebraska prohibits any political subdivision that is not a public power supplier from providing broadband or Internet services. Financially, some municipal networks have been successful while others have struggled to pay off bonds or loans used for capital investment.

Federal broadband programs do not target deployment to small businesses. As previously discussed, federal programs target deployment to areas that are unserved or underserved. Many programs do, however, have requirements that can result in networks maximizing the number of small businesses and residences served. For example, USDA’s Community Connect grants require that the service provider offer broadband services to all residences and businesses in the proposed service area. To be eligible for the Rural Broadband Access Loan and Loan Guarantee Program, at least 25 percent of the households in an area must currently be underserved. Thus, the program’s funding supports providers who will serve residences and small businesses in areas of need. Table 3 shows selected federal funding requirements related to eligibility and infrastructure deployment for the six federal programs previously described. Since these programs do not focus on deployment to small businesses, they do not measure their impact on small businesses, including the broadband speeds and prices available to them. However, each program has broader goals and measures, some of which encompass the impact on businesses. 
For example, BIP supports USDA’s goal to increase the number of rural Americans with access to broadband service and provide the speeds needed by business, health care, public safety, and others. Consistent with this goal, RUS reported in August 2013 that more than 5,800 businesses had received new or improved service as a result of BIP funding since passage of the Recovery Act in 2009, even though BIP does not have specific performance targets regarding services to businesses. BTOP supports NTIA’s strategic goal of driving innovation through policies that enable broadband growth and support e-commerce. Accordingly, NTIA measures the number of community anchor institutions, such as schools and libraries, that received broadband connections through BTOP and the miles of broadband network deployed. NTIA also collected data on interconnection agreements that allow small Internet service providers to provide broadband service. According to service providers we spoke with, federal funding was instrumental in their network expansion or upgrade. For example, officials from Monroe Telephone Company in Oregon stated that without the federal support they received through BIP, they would not have expanded their network due to the area’s low population density and mountainous terrain. Monroe officials stated that they used the loan of $1.4 million and grant of $4.2 million, both from BIP, to expand broadband access to 1,200 households and small businesses in two counties that previously only had dial-up or satellite service. In another example, officials at Paul Bunyan Telephone Cooperative in Minnesota stated that the RUS loan they received enabled them to expand their broadband service years earlier than otherwise would have been possible. Federal programs have supported improvements to broadband networks through grants and loans for expansions, upgrades, and building of new networks, according to the service providers we spoke with. 
Providers expanded their existing networks by laying new fiber optic lines or using other technologies to make broadband available in areas that were previously unserved or underserved. For example, Intermountain Cable in eastern Kentucky used a Community Connect grant to expand its broadband network to Hurley, Virginia. According to officials at Intermountain Cable, Hurley previously only had satellite broadband service. SandyNet, a municipal broadband provider in Sandy, Oregon, used BIP funding to build fiber optic lines, allowing SandyNet to expand its wireless service further into rural areas. Providers also used federal funds to upgrade and improve the reliability and speed of their existing networks. For example, in northwest Minnesota, Garden Valley Telephone Company used an RUS Telecommunications Infrastructure loan to upgrade the copper lines in the rural areas it serves with fiber optic lines, which provide a faster and more reliable connection. For homes and small businesses in these areas, speeds have gone from approximately 1 Mbps download to a top advertised speed of 30 Mbps. In other areas it serves, Garden Valley used portions of the loan to make smaller-scale improvements, changing some of the hardware attached to existing copper lines to increase speeds. Finally, federal funds or, in the cases of some communities, other sources of funding such as municipal bonds, have been used to build new broadband networks. The North Georgia Network used a $33 million BTOP grant to build a 260-mile fiber optic network that provided broadband to businesses and residences. MINET, a municipal network operated by the cities of Monmouth and Independence, Oregon, used city funds and a loan from the state of Oregon to build a fiber optic network that provides download speeds of up to 1 Gbps. According to some providers, these federal and municipal investments have stimulated competition. 
In some areas that received federal funds or where a municipal network was built, other broadband providers took steps to improve the speed and reliability of their service. For example, following the construction of a fiber-to-the-home municipal network in Monticello, Minnesota, the two other broadband providers in the area made investments in their infrastructure to improve their broadband speeds. One of these providers stated that all of its networks undergo periodic upgrades to improve service, but upgrade schedules can change in order to stay competitive when there is a new service provider in a particular market. We found that more service providers in funded communities offered service at higher speed ranges than providers in comparison communities, as shown in figure 1. For example, twice as many funded communities as comparison communities have a provider that offers speeds of 51 Mbps or higher. However, among the 14 funded communities and 14 comparison communities included in our analysis, all have at least one service provider that offers download speeds of at least 4 Mbps, which is FCC’s current benchmark for broadband. We also compared the highest speed offered by service providers. Our analysis found that federally funded and municipal networks most often had the highest advertised top speed when compared with top speeds offered by nonfederally funded and non-municipal networks in the same community, and networks in nearby comparison communities. In 9 of the 14 sets of communities included in this analysis, federally funded or municipal networks had the highest advertised top speeds, as shown in figure 2. For example, a federally funded network in a community in northeast Georgia advertised a top download speed of 100 Mbps, while the highest speed advertised by other providers in the same community and in the nearby comparison community was 40 Mbps. 
In the five other cases, networks in nonfederally funded communities offered speeds that were equal to or higher than speeds available in funded communities. We found that prices offered by federally funded and municipal networks were slightly lower than prices offered by nonfederally funded networks in the same community and networks in comparison communities. For example, for speeds of 4 to 6 Mbps, federally funded and municipally operated networks charged prices that were on average about $11 per month less than nonfederally funded networks in the same community and about $20 less per month than networks in comparison communities. The price differences are greater in the 7 to 10 Mbps download range, where federally funded and municipally operated networks’ prices were on average about $30 less per month than nonfederally funded networks in the same community and about $35 less per month than networks in comparison communities. There were some cases where federally funded or municipal networks offered substantially lower prices than networks in comparison towns, such as the municipal network in Windom, Minnesota, which offered 10 Mbps download service for approximately $38 a month, while two networks in a comparison town offered the same speed for about $100 to $110 per month. Figure 3 illustrates the prices for selected speed ranges offered by all the providers included in our analysis and is broken out by federally funded and municipal networks, nonfederally funded networks in the same community as a federally funded or municipal network, and networks in comparison communities. As this figure shows, prices in all the ranges are generally lower for federally funded or municipal networks, and at the 4 to 6 Mbps and 7 to 10 Mbps download ranges, networks in comparison communities tend to have higher prices than both federally funded and municipal networks and nonfederally funded networks located in the same community. 
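The grouped comparison described above can be sketched as a simple average-by-group computation over offer records. The records and field names below are hypothetical stand-ins for the offer data GAO collected, with values chosen to echo the roughly $11 and $20 average differences reported for the 4 to 6 Mbps tier; they are not drawn from the actual dataset:

```python
# Average monthly price by provider group within a download-speed range.
# Hypothetical offers; "funded" = federally funded or municipal network,
# "same_community" = nonfederally funded network in the same community,
# "comparison" = network in a nearby comparison community.
offers = [
    {"group": "funded",         "mbps": 5, "price": 35.0},
    {"group": "funded",         "mbps": 4, "price": 40.0},
    {"group": "same_community", "mbps": 6, "price": 50.0},
    {"group": "same_community", "mbps": 5, "price": 47.0},
    {"group": "comparison",     "mbps": 4, "price": 60.0},
    {"group": "comparison",     "mbps": 6, "price": 55.0},
]

def avg_price(group, lo, hi):
    """Mean advertised price for a group within a speed range (Mbps)."""
    prices = [o["price"] for o in offers
              if o["group"] == group and lo <= o["mbps"] <= hi]
    return sum(prices) / len(prices)

# Difference between funded networks and each other group, 4-6 Mbps tier:
funded = avg_price("funded", 4, 6)                  # 37.5
print(avg_price("same_community", 4, 6) - funded)   # 11.0
print(avg_price("comparison", 4, 6) - funded)       # 20.0
```

Repeating the same computation for each speed tier (7 to 10 Mbps, 11 to 25 Mbps, and so on) yields the kind of tier-by-tier comparison shown in figure 3.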
We also compared broadband speeds and prices in the nonurban funded and comparison communities with the speeds and prices in urban areas. For download speeds below 10 Mbps, average prices in nonurban areas were lower than average prices in urban areas. For example, for speeds of 4 to 6 Mbps, the average price was about $23 less in nonurban areas than in urban areas, and for speeds of 7 to 10 Mbps the average price was about $9 less. The lower prices offered by networks in nonurban communities could be due to providers having lower costs to recoup because some receive federal or municipal support or due to the limits imposed by weak market demand, typical of many nonurban areas. In this analysis it is difficult to identify the exact reason for the lower prices in the nonurban communities. For speeds of 11 to 25 Mbps, urban areas offered prices that were on average $21 less than nonurban areas. Furthermore, nonurban communities with federally funded or municipal networks tended to have lower prices than nonurban comparison communities and urban communities. For example, in the 4 to 6 Mbps download range, nonurban networks in funded communities offered average prices that were about $16 less than the prices offered by networks in comparison communities and $32 less than networks in urban communities, and $20 and $19 less, respectively, in the 7 to 10 Mbps speed range. In the 11 to 25 Mbps range, nonurban funded communities offered lower average prices than comparison communities, but urban areas offered lower average prices than both the nonurban funded and comparison communities. Figure 4 illustrates the prices that service providers offer for selected speed ranges in urban and nonurban areas. We found that providers in urban areas generally offer higher speeds than those in nonurban areas. 
Among the locations included in our analysis, providers in all 8 of the urban areas offered download speeds of 100 Mbps or higher, whereas providers in only 7 of the 28 nonurban areas offered download speeds of 100 Mbps or higher. Six of the 7 nonurban areas were funded communities with speeds of 100 Mbps provided by a federally funded or municipal network. However, in one funded community, one competitor also offered speeds of 100 Mbps.

Cities of Monmouth and Independence, Oregon: MINET Fiber Network. MINET is a fiber optic network in Monmouth and Independence, Oregon. The two cities decided in 2004 to build their own fiber network for economic development purposes. City officials believed that availability of broadband services would help to keep existing jobs and attract new employers. The fiber network was built with city funds and a state loan. MINET passes 6,400 businesses and homes at the curb. In June 2013, MINET’s customers included about 500 local, mostly small businesses, according to MINET officials, a number that comprises about 90 percent of local businesses. Most businesses subscribe to download speeds of 7 Mbps or 10 Mbps, although MINET can offer download speeds up to 1 Gbps.

In Minnesota, a farm equipment sales and service company reported that it switched from its previous broadband provider because the provider’s service could not supply the desired broadband speeds. Now the farm equipment company has broadband speeds nearly twenty times its previous speeds. Similarly, 18 of 27 small businesses we spoke with told us that their new service is more reliable than the service of their previous provider. These small businesses said they experienced less network downtime and no significant slowdowns in speed at points in the day when usage increased. Many service providers told us that they used fiber optics for their expansions or upgrades, contributing to greater reliability and speed. 
While reliability and speed were reported as improving, small businesses we spoke to reported that the effect of the new network on price varied. Several reported the price of broadband service went down, particularly a few businesses that previously relied on satellite service for broadband. However, some small businesses we spoke with reported that the price for the new service was similar to or higher than the price of their previous service. For example, an information technology company in northeast Georgia told us that it pays approximately $20 more per month but stated it was worth the additional cost because of the increased reliability and additional speed. Small business owners we met with who use the services of federally funded or municipal networks told us that they made improvements to their business operations, often because the speed of online applications was improved, which allowed them to operate more efficiently. Table 4 describes some of the improvements that small businesses told us they experienced due to the enhancements to their broadband service. Small business owners we spoke with said that the operational efficiencies they experienced as a result of better broadband service have not yet resulted in increased revenues. Only one small business we met with sought to improve its revenue potential by relocating to an area with better Internet service. Rather than relocate, a different small business owner stated he would pay more to get a dedicated line for faster or more reliable service. Other small businesses stated that broadband service would not alone determine where they set up their business but might be one of many factors considered. Similarly, the communities that built high-speed broadband networks did so to attract new businesses as well as to retain existing businesses.

We provided a draft of this report to NTIA and EDA within the Department of Commerce, USDA, SBA, and FCC for review and comment. 
NTIA and FCC provided technical comments, which were incorporated as appropriate. The other agencies reviewed the draft but had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Commerce, the Secretary of Agriculture, the Administrator of the U.S. Small Business Administration, and the Chairman of the Federal Communications Commission. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

This report describes: (1) the federal government’s efforts to ensure the availability of broadband services for small businesses, and (2) the effect of federally funded and municipal networks on broadband service and small businesses. To address both reporting objectives, we reviewed documents from and interviewed officials at the U.S. Department of Agriculture’s (USDA) Rural Utilities Service (RUS), the Department of Commerce’s (Commerce) National Telecommunications and Information Administration (NTIA) and Economic Development Administration (EDA), and the Federal Communications Commission (FCC) about their efforts to ensure the availability of broadband services for small businesses. We also reviewed documents and interviewed officials at the Office of Advocacy within the Small Business Administration (SBA) about its research on the availability and use of broadband by small businesses. 
We reviewed program rules regarding funding applicability and eligibility for FCC, RUS, NTIA, and EDA programs that provide funding for broadband infrastructure; status reports for the Broadband Technology Opportunities Program (BTOP) and the Broadband Initiatives Program (BIP); SBA's Office of Advocacy's study on small business access to broadband; and reports from FCC and NTIA on broadband deployment and availability. We also reviewed reports and surveys from academic institutions, think tanks, and trade associations on the topics of broadband deployment, economic development, and small business. We interviewed representatives of a telecommunications trade association, representatives of small business interests, and large and small broadband service providers in an effort to obtain a variety of viewpoints on issues related to small business and broadband services. To identify the strategic objectives, goals, and performance measures of federal broadband infrastructure programs, we reviewed budget summaries and performance plans, performance and accountability reports, and other agency documents for these programs from USDA, NTIA, RUS, and FCC. To describe the effect of federally funded and municipal networks on broadband service and small businesses, we obtained and analyzed information from a variety of sources. We visited towns in Oregon, Minnesota, Tennessee, and Georgia, where we interviewed a nongeneralizable selection of Internet service providers that received federal funds, municipally operated network providers, and small businesses that use the services of these providers. 
We selected these states and the specific locations within the states on the basis of the presence of at least one project that received federal funding for broadband infrastructure in the last 5 years; the presence of a municipally operated broadband network, which also may have received federal funding; and geographic diversity, i.e., sites were in different regions of the country. The locations were chosen to collectively include at least one project from each of the major federal broadband infrastructure programs. We selected small businesses to interview that were users of the federally funded or municipal networks and that had fewer than 500 employees, based on FCC's National Broadband Plan, which addresses support for broadband growth in small and medium enterprises of this size. While the results of our interviews cannot be projected to all service providers and small businesses because they were selected using a nonprobability approach, they illustrate a range of possible views and experiences. We collected information on broadband speeds and prices offered by all wireline providers in the locations we visited where federally funded or municipal networks were present. We only included wireline broadband service in our analysis because, unlike some wireless service (e.g., satellite and mobile broadband), wireline broadband generally offers the higher speeds and greater reliability that businesses require. For comparison purposes, we also collected speed and pricing information for all wireline providers in nearby towns that were similar to these locations in terms of population, income levels, and number of wireline service providers, but where federally funded or municipal networks were not present; and in two urban areas in each of the same states. Table 5 lists the locations visited, the nearby towns, and the urban areas visited in each state. 
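The comparison-town selection described above, matching on population, income levels, and number of wireline providers, might be sketched as follows; the towns and figures below are hypothetical, not GAO's data:

```python
# Hypothetical sketch: pick the candidate comparison town most similar to a
# target town on population, median income, and number of wireline providers.
candidates = {
    "Town X": {"population": 9500, "median_income": 41000, "providers": 2},
    "Town Y": {"population": 30000, "median_income": 62000, "providers": 4},
}
target = {"population": 10000, "median_income": 40000, "providers": 2}

def similarity_distance(town):
    """Sum of relative differences from the target on each criterion."""
    return sum(abs(town[k] - target[k]) / target[k] for k in target)

best = min(candidates, key=lambda name: similarity_distance(candidates[name]))
# best -> "Town X", the candidate closest to the target on all three criteria
```

Because each criterion is scaled by the target value, no single attribute (such as raw population) dominates the comparison.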
In total, we collected pricing information on 14 nonurban towns that received federal funding or have a municipal network, 14 nonurban comparison towns, and 8 urban areas. We used the National Broadband Map, a joint effort of NTIA and FCC to analyze and map broadband speeds, and comparable efforts managed by the states to identify wireline service providers in these locations. For each service provider identified, we collected information on advertised download and upload speeds offered to small businesses and the monthly rate charged. We collected unbundled, month-to-month pricing when available. Some service providers required a customer to have a telephone line, and some required a contract ranging from 2 months to 2 years. If the service provider did not provide separate pricing for small businesses, we collected residential speed and pricing information. We obtained this information from service providers' websites or, if not available online, by calling the company directly. We requested speed and pricing for each city and town in the sample—either by the town's name or by a specific address if the service provider required one. We analyzed the information collected to identify differences in speeds and prices between the locations with federally funded or municipal networks and similar towns without such networks, as well as between urban and nonurban locations. Because this information is drawn from a nonprobability sample, it cannot be generalized to all locations with federally funded or municipal networks, all urban locations, or all nonurban locations. We conducted this performance audit from February 2013 to February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. Appendix II: Speeds and Prices Offered by Federally Funded and Municipal Broadband Service Providers That Were Part of GAO's Analysis, as of September 30, 2013. [The appendix table, not reproduced here, lists for each provider the top available advertised speed (Mbps) and the most common speed subscribed to by businesses (Mbps).] Information for broadband speed and pricing is reported here only for the Georgia Communications Cooperative, a member of the North Georgia Network. Other cooperatives belonging to the North Georgia Network may offer different speeds and prices. In addition to the individual named above, Heather Halliwell, Assistant Director; Namita Bhatia Sabharwal; Sharon Dyer; Laura Erion; Eric Hudson; Dave Hooper; Josh Ormond; Amy Rosewarne; and Andrew Stavisky made key contributions to this report.
Increasingly, small businesses rely on Internet-based applications to improve efficiencies and expand market access. Although broadband Internet access is widely available to businesses, some areas of the country still have little or no access. Since 2008, federal programs have provided over $15 billion in funding to help deploy broadband to these areas. Additionally, some municipal governments have begun to build and operate networks to provide broadband access to their communities. GAO was asked to describe issues related to broadband availability for small businesses. This report addresses (1) the federal government's efforts to ensure the availability of broadband services for small businesses, and (2) the effect of selected federally funded and municipal networks on broadband service and small businesses. GAO reviewed documents and interviewed officials from five federal agencies that support broadband deployment and research on broadband availability. GAO interviewed service providers that received federal funding, municipal network operators, and small businesses in four states, and collected speeds and prices for broadband services in selected communities in these states. The states, communities, and businesses were selected based on the presence and use of a federally funded or municipal network. GAO is not making any recommendations. In commenting on this report, the agencies provided technical comments, which GAO incorporated as appropriate. Federally funded programs to expand broadband access encompass but do not specifically target small businesses. These programs—the Broadband Initiatives Program (BIP), Broadband Technology Opportunities Program, Community Connect Grants, Connect America Fund, Rural Broadband Access Loan and Loan Guarantee Program, and Telecommunications Infrastructure Loan Program—have eligibility requirements based on the need of an area, as well as deployment requirements that can maximize the number of businesses served. 
For example, the Community Connect grants require providers to serve all businesses and residences in deployment areas. Since these federal programs do not target deployment to small businesses, they do not measure the impact on small businesses. However, BIP has a specific goal to increase access to rural Americans and provide broadband speeds to businesses, and in August 2013, the United States Department of Agriculture reported BIP's funding had resulted in over 5,800 businesses' receiving new or improved broadband service since 2009. Other programs have broader goals and measures related to the program's purpose, such as serving schools and libraries. Improvements to broadband service have resulted from federal funding and the existence of municipally operated networks. Service providers have used federal funding for expansions and upgrades, such as building out to previously unserved areas and replacing old copper lines with fiber optic cable, resulting in faster and more reliable broadband connections. GAO examined broadband services for 14 federally funded and municipal networks and found they tended to have higher speeds than other networks. For example, in 9 of the 14 communities where GAO collected information on broadband speeds and prices, federally funded or municipal networks offered higher top speeds than other networks in the same community and networks in nearby communities. Additionally, prices charged by federally funded and municipal networks were slightly lower than the comparison networks' prices for similar speeds. Prices for lower to mid-range speed tiers available from federally funded and municipal networks in nonurban areas also compared favorably to prices in urban areas in the same state. However, providers in urban areas were more likely than those in nonurban areas to offer higher speeds. 
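The comparisons summarized above, top advertised speeds and prices for similar speed tiers, can be sketched in a few lines. The provider groups, speeds, and prices below are hypothetical illustrations, not the data GAO collected:

```python
# Hypothetical offers: (provider group, advertised download Mbps, monthly price $).
offers = [
    ("funded_or_municipal", 100, 80.0),
    ("funded_or_municipal", 50, 55.0),
    ("comparison",          50, 60.0),
    ("comparison",          25, 45.0),
]

def top_speed(group):
    """Highest advertised download speed offered by a provider group."""
    return max(speed for g, speed, _ in offers if g == group)

def price_at(group, speed):
    """Lowest monthly price a group charges for a given speed tier, or None."""
    prices = [p for g, s, p in offers if g == group and s == speed]
    return min(prices) if prices else None

# In this invented example the funded/municipal network offers a higher top
# speed, and a slightly lower price at the common 50 Mbps tier, mirroring the
# pattern described in the text.
assert top_speed("funded_or_municipal") > top_speed("comparison")
assert price_at("funded_or_municipal", 50) < price_at("comparison", 50)
```

Comparing prices only at matching speed tiers, as `price_at` does, avoids the apples-to-oranges problem of comparing a 100 Mbps offer against a 25 Mbps one.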
According to small businesses GAO met with, the speed and reliability of their broadband service improved after they began using federally funded or municipal networks. Furthermore, according to small business owners, the improvements to broadband service have helped the businesses improve efficiency and streamline operations. Small businesses that use the services of these networks reported a greater ability to use bandwidth-intensive applications for inventory management, videoconferencing, and teleworking, among other things.
Budgetary constraints and increasing demands to improve service have heightened the focus on federal agencies making wise and efficient use of resources to accomplish their missions. Some resource decisions require balancing short-term demands to fund day-to-day operations with the need to acquire assets that yield benefits over the long term. Spending for some assets may be necessary to produce program efficiencies and cost savings over the long term. Some budget observers believe, however, that a bias is created against spending for long-term capital assets because of the requirement that the entire cost of these relatively expensive assets be budgeted for in an agency's or program's annual budget, or "up-front," rather than spread over the life of the assets. These concerns have led some to suggest that the federal government adopt a capital budget to spread the cost of long-lived assets across their useful lives. However, capital budgeting proposals have raised concerns among budget experts about fiscal control and accountability. This report responds to a request by Representative William F. Clinger Jr., Chairman of the Committee on Government Reform and Oversight, to examine issues federal agencies face in planning and budgeting for the acquisition of capital assets. It also assesses ways that some federal organizations have developed to address those concerns and that could be used by other agencies within the existing budget structure. For the purposes of this study, the terms "capital assets" and "fixed assets" are used interchangeably and are defined as tangible assets that are owned by the federal government and that are primarily used in the delivery of federal services. These types of assets are normally available in the commercial market and include buildings, equipment, and information technology. Capital asset acquisition may take the form of rehabilitation of existing assets or development and construction of new ones. 
The primary focus of this report is on the capital planning and budgeting experiences of five case study organizations represented by four agencies: the Army Corps of Engineers, the Coast Guard, the General Services Administration's (GSA) Interagency Fleet Management System (IFMS) and the Public Buildings Service (PBS), and the U.S. Geological Survey (USGS). Budgetary constraints have long had an influence on federal decision-making. Since 1970, the federal government's spending has consistently exceeded its income, resulting in pressure to restrain spending. Discretionary spending—the portion of the budget that lawmakers annually control through appropriations, and the primary source for capital spending—has dropped from 12.2 percent of gross domestic product (GDP) in 1970 to 7.8 percent in 1995. In dealing with a shrinking resource base, it is inevitable that some agency missions may be curtailed, and some assets may not be, and need not be, replaced. Thus, a decision not to fund a particular capital asset may reflect the outcome of competition with other capital projects and other types of expenditures as much as it does any characteristics of the budget process. Distinguishing between obstacles that are rooted in overall resource constraints and those that are an outgrowth of budget practices and rules is a difficult but critical task. Agencies have often pointed to the poor condition of their existing capital assets as evidence of the need for increased capital spending. Articles in the popular press and past GAO reports have discussed the poor condition of various federal fixed assets, including the Pentagon, National Park Service facilities, Forest Service facilities, and many financial and information systems throughout government. Moreover, spending on capital is often necessary to generate operational savings in the future. 
Some observers have been concerned that even as overall resources are limited, resources for capital assets are constrained even more because of the high initial cost of capital assets and what these observers believe to be the short-term focus of the budget process. It is inevitable that resource constraints will prevent some worthwhile capital projects from being undertaken. However, decisions about whether any particular resource need—capital or operating—is funded reflect the priorities that are determined by the administration and the Congress. Ideally, those capital projects that are funded will be ones with the highest returns or that meet the highest priority mission needs. Therefore, the goal of the budget process should be to ensure neutrality vis-a-vis various types of spending so that decisions are guided by what is economically and programmatically justified rather than by what is recorded or “scored” most favorably in the budget. It is reasonable to expect that historical budget data would give some indication as to how spending on capital has changed over time. However, the federal government does not aggregate data on capital asset spending in the same way that we have defined it in this report—spending on assets used in agency operations. One reason for this is that federal budget data is intended to serve multiple purposes. For capital spending, the data collected are used to highlight the level of investment activity (character class data) and to record the nature of the assets procured (object class data). Nevertheless, OMB’s character class data, object class data, and program and financing data each provide some rough approximation for capital asset spending, and therefore, an approximate gauge of how such spending has fared over time. OMB asks agencies to code their net outlays each year according to various investment categories or character classes. 
Investment outlays are defined by OMB as spending that is intended primarily to yield benefits in the future—whether to the nation as a whole or to the government. Investments may be in the form of direct federal spending or grants to state and local governments, and may be for tangible or intangible assets. The OMB categories that we have used to most closely match our definition are those for direct spending on physical assets. However, the character class data will include some types of spending, such as for flood prevention and the acquisition of park land, which are excluded from our definition but cannot be easily segmented from the character class codes. OMB also requires that agencies classify their obligations by object of expenditure or object class. Object class schedules appear for each account in the President’s budget. The classifications for “Equipment” and “Land and Structures” are the closest approximation to our definition of capital assets, although they include some obligations which we exclude and omit others we would include. For example, some salaries and contractor costs that are devoted to capital projects are not included in these object class categories. Finally, agencies may also identify their obligations as “capital investments” in the program and financing schedules that appear for each account in the President’s budget. In these schedules, capital investments are acquisitions of physical or financial assets that yield benefits over several years. The program and financing classification capital investments is only shown when such investments are material for a program and represent nonreimbursable obligations. Agencies have discretion in defining programs, and consequently capital investments for this schedule. Therefore, some capital investments in the program and financing data may include items we would not consider capital and exclude others. 
Despite the limitations of the available data, a review of historical trends can provide some perspective on the magnitude and overall pattern of spending for capital assets. (See figures 1.1 through 1.3.) OMB character class data show that direct federal spending for "nondefense physical assets" in 1995 measured $19.5 billion and was about the same proportion of GDP and of total budgetary outlays as it was in 1970. Direct outlays for nondefense physical assets measured 0.26 percent of GDP in 1970, and in spite of ups and downs over the period, they represented about the same proportion in 1995. Likewise, as a percent of total budgetary outlays, direct spending for nondefense physical assets is basically unchanged from the 1970 level of 1.3 percent (although it did fluctuate over the period between 1.0 and 1.5 percent). Since these assets are primarily funded from the domestic discretionary category of spending, it may be insightful to compare trends against this portion of the budget. Here, too, we found that direct spending on nondefense physical assets is almost identical to the proportion it was 25 years earlier (7.7 percent in fiscal year 1995 and 7.4 percent in fiscal year 1970). Historical budget data for our four case studies also show that spending on capital assets has not necessarily fared poorly relative to operations and programs. (Appendixes II through VI provide graphical analysis of agency trends.) Each case study experienced at least a modest increase in its overall budget in real terms between 1982 and 1995. For both GSA entities, capital obligations as a percent of total obligations have generally increased since 1982. For two other agencies, USGS and the Corps of Engineers, the proportion of obligations and outlays, respectively, made for capital assets over time has fluctuated up and down. In contrast, the Coast Guard has seen a steadily decreasing proportion of its outlays go toward capital assets. 
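The shares cited above can be checked with simple arithmetic. The $19.5 billion figure is from the text; the GDP and total-outlay denominators below are approximate historical values supplied for illustration, not taken from the report:

```python
# Checking the fiscal year 1995 shares for nondefense physical asset outlays.
nondefense_physical_1995 = 19.5      # billions of dollars (from the text)
gdp_1995 = 7400.0                    # billions of dollars, approximate
total_outlays_1995 = 1515.8          # billions of dollars, approximate

share_of_gdp = 100 * nondefense_physical_1995 / gdp_1995
share_of_outlays = 100 * nondefense_physical_1995 / total_outlays_1995

assert round(share_of_gdp, 2) == 0.26     # consistent with the ~0.26 percent of GDP cited
assert round(share_of_outlays, 1) == 1.3  # consistent with the ~1.3 percent of outlays cited
```
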
Caution is required in interpreting the significance of these trends. This is not solely due to the limitations noted above. Neither the overall federal data nor the case study trend data provide any indication as to whether the past levels of capital obligations or outlays were deficient, adequate, or excessive. Nor can the data indicate whether there is a bias in one direction or another. Trends could reflect changes in priorities between capital and other spending or changes in underlying needs for capital. Economies of scale in operations may suggest that in some cases operating expenses should decline relative to capital. In contrast, advances in technology may enable agencies to maintain consistent levels of operations while reducing their spending on capital assets. As agencies try to adopt more business-like practices, it is inevitable that comparisons are made with private-sector practices in budgeting for capital. Some observers have noted that when it comes to acquiring capital assets, businesses—unlike government agencies—are able to spread the expense of capital assets by depreciating their value in income statements over the estimated useful life. Budget practitioners rightly observe that because of the cash basis of the federal budget, there is a difference between the timing of the costs and benefits of capital assets. While the benefits of capital assets flow over time, federal budget rules require that their full cost be recognized in the budget when acquired. This has been equated to a business charging the full cost of capital assets to a single year’s income statement. Doing so would distort the true profitability of the firm in that year and make the cost of capital asset acquisitions appear artificially high. However, although the budget is occasionally called upon to serve the purpose of an income statement as well, it is not designed to measure profitability and is poorly suited for this role. 
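The distortion described above, charging a capital asset's full cost against a single year's income statement, contrasts with the present-value comparison of cash flows that businesses use when deciding among capital projects. A minimal sketch with hypothetical figures (the depreciation tax effect is omitted for simplicity):

```python
# Hypothetical example: a business evaluates a capital asset by discounting
# total cash inflows and outflows, not by its depreciation schedule.
def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Asset costs 100 up front and yields 30 per year for 5 years.
project = [-100] + [30] * 5

# At a 10 percent discount rate the project is worthwhile...
assert npv(0.10, project) > 0
# ...but at 20 percent the same cash flows are not.
assert npv(0.20, project) < 0
```

The decision turns entirely on the timing and size of cash flows relative to the discount rate; depreciation never enters the calculation except, in practice, through its effect on taxes.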
In both the public and private sectors, budgets generally are a means through which organizations allocate resources. For many years there has been discussion of the federal government adopting separate capital and operating budgets. Under many such proposals, capital assets would be financed over time by borrowing—with depreciation charged each period to the operating budget (which under most proposals would be required to be balanced). Such proposals, however, fail to recognize key differences between budgeting and accounting. While depreciation is appropriate for helping companies measure profit or loss in financial statements, it is generally not used by companies in budgeting. They base capital spending decisions on present value comparisons of total cash inflows and outflows that are expected to result from alternative capital projects. Depreciation is not a cash flow and therefore affects a company’s capital spending decisions only to the extent that, as a tax deduction, it affects the amount of cash outflow for income tax. A company’s capital budget reflects the results of its spending decisions and records the cash requirements for its selected capital projects that are expected during each period. In this manner, a business’ capital budget has some similarity to the federal unified budget, which also records the cash requirements for capital projects during each year. If depreciation were recorded in the federal budget in place of cash requirements for capital spending, this would undermine Congress’ ability to control expenditures because only a small fraction of an asset’s cost would be included in the year when the decision was made to acquire it. The Antideficiency Act, as amended, implements Congress’ constitutional oversight of the executive branch’s expenditure of funds. The act reflects laws enacted by the Congress since 1870 to respond to abuses of budget authority and to gain more effective control over appropriations. 
The central provision of the act (31 U.S.C. 1341(a)(1)) prevents agencies from entering into obligations prior to an appropriation or from incurring obligations that exceed an appropriation, absent specific statutory authority. Thus, agencies may not enter into contracts that obligate the government to pay for goods or services unless there are sufficient funds available to cover their cost in full. Instead, agencies must budget for the full cost of contracts up-front. Also, the Adequacy of Appropriations Act (40 U.S.C. 11), established in 1861, prohibits agencies from entering into a contract unless the contract is authorized by law or there is an appropriation to cover the cost of the contract. While these acts require that agencies have sufficient appropriated funds to cover their obligations, the Budget Enforcement Act of 1990 (BEA) created new mechanisms by which to limit federal spending overall. BEA formalized the distinction between direct and discretionary spending and provided separate controls for each. Discretionary spending is defined as budget authority provided in annual appropriations acts, while direct or mandatory spending is that which is provided by law other than annual appropriations acts. To control discretionary spending—including spending for fixed assets—BEA established strict dollar limits or "caps" on budget authority and outlays for each fiscal year through 1998. These caps are implemented through allocations to the House and Senate appropriations committees, which subsequently allocate these totals among their subcommittees. The Congressional Budget Office (CBO) and OMB "score" or track budget authority, receipts, and outlays estimated to result from enacted legislation. Should a breach of the caps occur, BEA established a process called sequestration in which spending for most discretionary programs is reduced by a uniform percentage. 
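The sequestration mechanism described above, a uniform percentage reduction applied when the caps are breached, can be sketched with hypothetical figures:

```python
# Hypothetical sketch of sequestration: if scored discretionary spending
# breaches the cap, covered programs are cut by a uniform percentage.
cap = 500.0                                      # discretionary cap (billions)
programs = {"A": 210.0, "B": 180.0, "C": 120.0}  # scored spending (billions)

total = sum(programs.values())        # 510.0, a 10.0 billion breach
breach = max(0.0, total - cap)
cut_rate = breach / total             # uniform percentage reduction

sequestered = {name: amount * (1 - cut_rate) for name, amount in programs.items()}
# After the uniform cut, total spending equals the cap.
assert abs(sum(sequestered.values()) - cap) < 1e-9
```

Note the sketch applies the cut to all programs; in practice some programs are exempt, which raises the percentage applied to the rest.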
As a result of BEA, scorekeeping guidelines, called scoring rules, were developed that significantly changed how certain types of contracts were scored in the budget. Previously, when an agency entered into a lease-purchase contract, budget authority and outlays were scored over the period of the lease in an amount equal to the annual payments. The new guidelines changed this by requiring that budget authority for lease-purchases be scored up-front and outlays be scored over the period during which the contractor constructs or purchases the asset. After BEA, a lease-purchase, which is tantamount to borrowing from the private sector, was no longer treated in the budget preferentially to borrowing by the Treasury to finance direct ownership. This effectively eliminated lease-purchases from consideration as a capital acquisition method that could be used to spread the cost of purchases over a period of years. The benefits to the government as a whole and the disadvantages to individual agencies resulting from the change in lease-purchase scoring are illustrative of the dichotomy that can exist between agencies’ and Congress’s perspective on the budget process. Changes in the scoring of lease-purchases, while problematic from the perspective of an individual agency because of up-front funding requirements and budget caps, are critical to enabling the Congress to control the total commitments made by agencies. Likewise, some ideas agencies propose to alleviate their perceived obstacles to capital spending may in turn create obstacles to maintaining fiscal control if implemented on a governmentwide basis. In this regard, there is a constant tension between agency and congressional perspectives on the nature of capital acquisition problems and their solutions. This report illustrates how a select group of federal organizations plan and budget for capital assets and the experiences they have had with the budget process. 
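The change in lease-purchase scoring described above can be illustrated with a hypothetical 10-year lease-purchase; the payment amounts below are invented for illustration:

```python
# Hypothetical 10-year lease-purchase with annual payments of 12 (total 120).
years, payment = 10, 12.0

# Before BEA: budget authority was scored in each year of the lease,
# in an amount equal to the annual payment.
ba_old = [payment] * years                         # [12, 12, ..., 12]

# After BEA: the full budget authority is scored up front, in year one.
ba_new = [payment * years] + [0.0] * (years - 1)   # [120, 0, ..., 0]

# The total commitment is identical; only the timing of scoring differs,
# so the full cost is visible when the acquisition decision is made.
assert sum(ba_old) == sum(ba_new) == 120.0
assert ba_old[0] == 12.0 and ba_new[0] == 120.0
```

Under the old timing, a lease-purchase consumed far less of the first-year cap than direct purchase, which is the preferential treatment the scoring rules eliminated.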
Five case studies were selected to include a broad range of characteristics—large and small organizations, operations-intensive and capital-intensive organizations, and organizations having a range of asset needs and account structures. While it is inappropriate to generalize about governmentwide practices in budgeting for capital from these case studies, it is possible to gain insight into some issues and discover potential strategies for addressing these issues. The information obtained from the case studies, supplemented by a limited number of interviews at other agencies that purchase capital assets, provides some indication of the range of issues that may be encountered governmentwide. Because agencies can differ substantially in their asset requirements, account structure, financial management history, and other characteristics, care must be taken in applying lessons from one agency to another. The chapters that follow include issues that generally affect all federal organizations, such as the requirement to fully fund capital acquisitions up-front, as well as issues that may be limited to selected organizations as a result of their particular characteristics. Likewise, any strategy that an agency has adopted to deal with its perceived obstacles to capital spending has been tailored for its specific circumstances. Some may be adaptable to other agencies; others may not be. The report is also not exhaustive with respect to the problems and strategies of case studies. Some financing strategies, such as budgeting for stand-alone stages of a larger capital project, may be used by case studies other than those explicitly mentioned in this report. Similarly, case studies may be using other financing approaches in addition to those cited. This report is not intended to represent a final or universal solution to the problems in budgeting for capital assets. Indeed, other issues would also need to be addressed if the capital acquisition process is to be improved. 
For example, the selection and evaluation of capital projects must be improved. GAO's past work has identified a variety of federal capital projects, including information technology as well as large-scale construction projects, where acquisitions have yielded poor results—costing more than anticipated, falling behind schedule, and failing to meet mission needs. In addition, to effectively evaluate program performance as called for in the Government Performance and Results Act of 1993 (GPRA), agencies will need data on the full annual cost of programs, including the cost of capital usage. The objectives of this study were to examine (1) how case study organizations perceive that the budget process and structure affect their ability to acquire capital assets, (2) whether there are financing mechanisms currently used or proposed by our case studies that could be helpful in improving budgeting for capital assets within the current unified budget structure, and (3) the results of OMB's Bulletin 94-08 on "Planning and Budgeting for the Acquisition of Fixed Assets." To identify aspects of the budget process that affected case studies' capital spending decisions and the financing mechanisms they used and proposed, we interviewed officials from our case studies as well as OMB and congressional staff responsible for reviewing the budgets of these organizations. To select our case studies, we used data from OMB's MAX system to identify federal organizations making capital expenditures between fiscal years 1982 and 1994 and the general type of assets they acquired. We developed an initial short list of organizations that provided coverage across various departmental levels of government and asset types. The short list consisted of the Army Corps of Engineers, the Coast Guard, the Forest Service, the Food and Drug Administration (FDA), the General Services Administration (GSA), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Geological Survey (USGS). 
We reviewed our past work and other literature to identify organizations among the short list that had expressed difficulty in acquiring capital assets and/or were using a financing mechanism that helped alleviate this difficulty. After conducting initial interviews with officials at each of the short list organizations to confirm the issues they face and the assets acquired, we agreed with the requestor to select five case studies representing four agencies: the Army Corps of Engineers, Coast Guard, GSA (the Public Building Service (PBS) and the Interagency Fleet Management System (IFMS)), and USGS. Our selection of case studies was based on a goal of choosing organizations that reflected diversity in the types of assets acquired, the volume of capital spending, the type of account used, and the appropriations subcommittees. Table 1.1 (see p. 33) shows the types of assets case studies acquire and the account(s) used to finance capital. After conducting more extensive interviews with officials of our case studies, we discussed the organizations’ problems and financing mechanisms with staff of the case studies’ House and/or Senate appropriations subcommittees, as well as OMB program examiners and policy specialists. GAO requested comments on a draft of this report from the Secretary of Defense, the Secretary of the Army, the Secretary of Transportation, the Secretary of the Interior, the Acting Administrator of GSA, and the Director of OMB. At meetings conducted in August and September of 1996, these officials’ designees provided their comments. Their comments are discussed and evaluated in chapter 6 and certain other sections of the report as appropriate. To examine the responses to OMB Bulletin 94-08 on “Planning and Budgeting for the Acquisition of Fixed Assets,” we reviewed submissions OMB received from agencies. 
We discussed the bulletin with officials of each of our case studies and with OMB officials responsible for the bulletin’s development and implementation. We also had discussions with OMB to determine differences in the responses to and results of OMB’s second bulletin on fixed assets (Bulletin 95-03). To improve the currency of our discussion of OMB’s fixed asset efforts, we also reviewed OMB’s A-11 guidance to agencies on submitting their fiscal year 1998 budget requests. Capital spending data in appendixes I through VI and chapter 1 were derived from OMB’s MAX system. Although we did not verify this data at the individual budget account or organizational level, total obligations in the object class and program and financing schedules and total outlays in the character class schedules were reconciled by fiscal year to published sources. We performed our work from June 1995 through February 1996 in accordance with generally accepted government auditing standards. The Adequacy of Appropriations Act and the Antideficiency Act require that resources be available to fulfill government commitments to pay for goods and services when the commitments are made, or up-front. However, officials at the organizations we contacted typically viewed the requirement as an impediment to their meeting capital asset needs. Managers expressed concern that their agency or program budgets are not able to accommodate the large, single-year increases in budget authority needed to fully fund capital projects up front. As a result, managers believe that capital needs are either not met or met through methods that are more costly in the long term. Despite the potential problems for individual agencies, up-front funding is critical to safeguarding Congress’ ability to control overall federal expenditures and to assess the impact of the federal budget on the economy. Without up-front funding, projects may be undertaken without adequate attention being given to their overall costs and benefits. 
Moreover, failure to fully fund projects before they are undertaken can distort the allocation of budget resources and obscure the impact of federal budgetary action on the private sector. Only a few agencies, including the Army Corps of Engineers (one of our case studies), have been exempted from the up-front funding requirement. Despite these agencies’ use of incremental funding, OMB has taken steps to encourage consistent application of up-front funding across government in the future. Managers in most of the organizations we contacted cited requirements for full up-front funding as an obstacle to acquiring capital assets. These officials felt that when it is necessary to purchase expensive capital assets, up-front funding requirements result in a spike in their agency’s or program’s budget authority that often would not be provided in the current budget environment. Although an asset may be an important component of carrying out the mission of the organization and may bring benefits over many years, managers believed that having to budget for the full cost in 1 year is often a significant impediment to its acquisition. Although general resource constraints are not new, full up-front funding has become more difficult because most capital spending is discretionary and, thus, annually capped by BEA. OMB has responded to BEA by frequently imposing limits on agency spending and by prohibiting agency borrowing. Consequently, managers may find themselves faced with a situation in which funding an expensive capital project may require deep cuts in operations or in all other capital projects during that year. Faced with these trade-offs, agency managers may either delay capital projects until an additional appropriation can be obtained or, when possible, look for other ways of meeting their capital needs even though the long-run cost may be higher.
Officials from virtually every organization that we contacted could cite examples of how the up-front funding requirement affected their ability to acquire capital. Up-front funding appeared to be a particularly significant issue at organizations we contacted that acquire buildings because these assets often have a high initial cost, provide benefits over many years, and could be financed over an extended period of time. Up-front funding was also a concern for USGS in acquiring equipment because the cost of the equipment sometimes represented a significant portion of the organization’s resources. PBS has often cited the up-front funding requirement as an impediment to meeting federal agency space needs in the most cost-effective manner. PBS is responsible for acquiring general and special purpose work space for federal agencies and has multiple methods available for meeting these space needs, including operating leases, capital leases, lease-purchases, and direct purchases. Each of these methods for obtaining space presents a combination of advantages and disadvantages in terms of flexibility and short- and long-term cost to PBS. Budget scoring rules are intended to facilitate comparisons of the long-term cost of each method and to ensure compliance with the full funding concept. For each space acquisition method except for operating leases, PBS (like other federal organizations) is required to have budget authority for the total cost up front even though the outlays may occur over several years. PBS has generally found that ownership is the least costly way to meet long-term federal space needs. However, PBS officials indicated that the up-front funding requirement, coupled with caps on total discretionary budget authority and outlays, has resulted in PBS not receiving sufficient budget authority to allow it to own the amount of office space that its studies indicate to be optimal.
PBS has maintained that by relying on operating leases instead, the government incurs a higher long-term cost and consumes resources that could be used for repairs and alterations of the existing inventory. Other organizations felt similar constraints on their ability to obtain or replace facilities. Coast Guard officials, for example, cited a need for new employee housing. The Coast Guard prefers to satisfy housing needs by providing allowances to employees to rent from the private sector. However, in remote or resort areas of the country where affordable rental housing is not available, the Coast Guard constructs housing. Coast Guard officials stated that even though the housing fulfills a long-term need, they must budget for the full cost in a single year, which generally limits the number of capital projects that can be undertaken. Officials at the Forest Service also felt that up-front funding requirements in conjunction with resource constraints prevented them from making investments in buildings and facilities. Many of the agency’s facilities are in very poor condition and in need of repair or replacement. However, Forest Service managers say they are not able to obtain the large increases in appropriations needed to meet these one-time costs. FDA officials also felt that up-front funding was an obstacle to acquiring needed facilities. They felt that some of their facilities were in need of repair or replacement, but that many of these cannot be undertaken because their cost must be budgeted for up-front. In addition, FDA has been waiting for a number of years to obtain funding to consolidate headquarters staff that are currently spread out across many different locations in the Washington, DC metropolitan area into fewer sites. FDA officials believe that the segmentation of their facilities increases their operating cost and makes it harder to fully use some pieces of equipment that could be shared if staff were consolidated into fewer facilities. 
Although possibly problematic for individual agencies, up-front funding has long been recognized as an important tool for maintaining governmentwide fiscal control. The requirement that budget authority be provided up-front, before the government enters into any commitment, was established over 100 years ago in the Adequacy of Appropriations Act and the Antideficiency Act. These acts responded to past problems in which agencies committed the government to payments that exceeded the resources made available to them by Congress. The importance of the principle was reinforced by the 1967 Report of the President’s Commission on Budget Concepts, which emphasized the primary purposes of the budget as being the efficient allocation of resources and the formulation of fiscal policy to benefit the national economy. The up-front funding requirement advances both. It is essential for efficient resource allocation decisions because it helps ensure that the Congress considers the full cost of all proposed commitments and makes trade-offs based on full costs. To be useful in the formulation of fiscal policy, the budget must be able to highlight the impact of the federal budget on the economy. For this purpose, the requirement for up-front funding also serves the Congress well. The point at which capital spending has the largest and most direct economic impact on the private sector occurs at the point the commitment is made—that is, up-front—not over the expected lifetime of a long-lived asset. Failure to recognize the full cost of a particular type of expenditure when budget decisions are being made could lead to distortions in the allocation of resources. In other words, if particular types of spending, such as for physical assets, were given preferential treatment in the budget by virtue of recognizing only a fraction of their total cost, then it is likely that relatively more spending for those types of assets would occur. 
While advocates for purchasing some federal assets may see this as a desirable end, such an outcome may not accurately reflect the nation’s needs. In particular, other types of federal spending that also provide long-term benefits but that are not physical assets (including research and development and spending for human capital) would be arbitrarily disadvantaged in the budget process, even if national priorities remain unchanged. Furthermore, failure to fully fund capital projects at the time the commitment is entered into can force future Congresses and administrations to choose between having an unusable asset and continuing projects’ funding for years even after priorities may have changed. For example, if the Congress provides funding for only part of a project and that part is not usable absent completion of the entire project, then the Congress and the administration may feel compelled to continue funding in the future to avoid wasting the initial, partial funding that was already spent. Thus, if capital projects are begun without full funding, future Congresses and administrations may, in effect, be forced to commit a greater share of their annual resources to fulfilling past commitments and thus have less flexibility to respond to new or changing needs as they arise. Although the organizations we contacted may perceive it to be difficult to obtain full funding in a single year for capital assets, OMB and the Congress have at various times accommodated agencies’ needs for large increases in budget authority to fully fund their capital projects. However, given overall resource constraints, all of the capital needs (and operating needs) that agencies may have or perceive cannot be met. Thus, an agency’s failure to receive funding for its capital request may reflect the fact that, on a governmentwide basis, other agencies’ capital projects are of higher priority to OMB or the Congress. 
It also reflects governmentwide trade-offs that are made to continue funding operations of one agency over increases in capital spending at other agencies. Although up-front funding is generally required across government, it is not applied to all agencies. Water resource projects were explicitly exempted from up-front funding by the Rivers and Harbors Appropriation Act of 1922. As a result, the Corps of Engineers implements many of its construction projects through the use of continuing contracts. These contracts cover the entire project but indicate the amount of work that is expected to be completed during each year and the cost of that increment. Although the Congress is aware of the total expected cost of the project, the Corps annually requests funding for the projects in increments—only the amount of money necessary to complete the next year’s portion of work. The Corps’ contracts are structured so that it is not committed to paying for any additional work on a project beyond that specified for the budget year. If the Congress were to discontinue funding for the project at some point during the overall contract, the Corps would be responsible for paying the contractor various cancellation or decommissioning costs. However, while the Corps is not legally obligated to complete an incrementally funded project, terminating it before completion can leave the Corps without anything of economic value. Corps officials suggest, however, that because of the costs that have already been incurred and the economic justification that is done before beginning any project, it is unlikely that the Congress would choose to cancel a project for fiscal reasons once it is begun. In fact, the officials indicated they are not aware of any Corps projects that have been canceled by Congress.
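The budgetary difference between the Corps' incremental approach and full up-front funding can be sketched with hypothetical figures (the project and dollar amounts below are illustrative only, not drawn from any actual Corps project): budget authority is timed differently under the two approaches, while total funding and the project's annual cash outlays are the same.

```python
# Hypothetical 4-year, $100 million construction project. Each entry
# is one year's planned increment of work (and, for simplicity, that
# year's cash outlays), in millions of dollars.
increments = [25, 30, 30, 15]

# Incremental funding: budget authority is requested year by year,
# matching each year's increment under a continuing contract.
incremental_ba = list(increments)

# Full up-front funding: all budget authority is provided in year 1;
# outlays still occur only as the work is performed.
full_funding_ba = [sum(increments)] + [0] * (len(increments) - 1)

# Total budget authority and total outlays are identical either way;
# only the year in which budget authority is recognized differs.
assert sum(incremental_ba) == sum(full_funding_ba)

for year, (inc, full) in enumerate(zip(incremental_ba, full_funding_ba), start=1):
    print(f"Year {year}: incremental BA = ${inc}M, up-front BA = ${full}M")
```

The up-front profile concentrates the entire spike in the first year, which is why agencies perceive full funding as harder to accommodate under annual discretionary caps even though lifetime totals are unchanged.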
The Energy and Water Development appropriations subcommittees have been comfortable with incrementally funding the Corps and other agencies within their jurisdiction (such as the Bureau of Reclamation and DOE) and have not changed the practice. Officials from OMB and the Corps indicated that the Carter administration had proposed to the Congress fully funding Corps construction projects, but full funding was rejected because it would have required either a large increase in appropriations or a significant drop in the number of projects that could be undertaken in a given year. One of the traditional concerns with incremental funding is that it risks allowing projects to be started before adequate scrutiny is given to their total cost and benefit. Some within OMB have suggested that this may not be as much of a concern with the Corps, in part because both OMB and the Congress have had confidence in the Corps’ total cost estimates because of the historical reliability of its cost-benefit justifications. Thus, the Congress is aware of the costs and the benefits of a project before it is authorized. OMB officials also indicated that other factors contribute to ensuring that projects are managed cost effectively. For example, state or local authorities that act as financial partners in Corps projects have a strong incentive to ensure that projects are well-managed. In addition, project authorization levels limit the amount of additional appropriations the Corps can obtain for cost overruns. OMB has acknowledged that agencies have not always requested or received full up-front funding for capital acquisitions. Besides the Corps of Engineers, some capital projects at the Bureau of Reclamation, DOE, and NASA have also been funded incrementally. 
One of the objectives of OMB’s bulletins on fixed assets (Bulletins 94-08 and 95-03) was to identify the extent to which incremental funding was being used and to encourage agencies to request full funding for their capital projects. Estimates are still being refined by OMB as to what the total cost would be to fully fund all projects currently funded incrementally. In the fiscal year 1997 President’s budget, OMB requested $1.4 billion in budget authority to fully fund selected ongoing projects in DOE and NASA that otherwise would have been incrementally funded. Although full funding was not requested for capital projects at the Corps of Engineers and the Bureau of Reclamation, the President’s budget indicated that the cost of fully funding ongoing and new projects for these two agencies would be about $23 billion in fiscal year 1997 (which represents 11 percent of total domestic discretionary budget authority in fiscal year 1995). The implications of fully funding capital projects—including those that have been incrementally funded—will be clarified for the government as a whole when agencies submit their fiscal year 1998 budget requests to OMB. The principal effect will be to increase budget authority in the initial year for projects that would otherwise be incrementally funded over a period of years. Because projects’ cash flows would be unaffected by the application of up-front funding, the government’s total annual outlays would also not change for a given level of capital projects. For the longer term, the impact of such a shift on future years’ budget authority will be a function of whether policymakers change the number or types of capital acquisitions in response to the up-front funding requirement. Case studies use a variety of methods for adapting to the requirement to fully fund capital acquisitions up-front. Some of these methods demonstrate a balance between managerial flexibility and congressional control. 
They include: budgeting for stand-alone stages of an acquisition, revolving funds, an investment component within a working capital fund, reducing capital needs, and operating leases. Several of these approaches to financing capital may be worthwhile for other agencies to consider to help accommodate the up-front funding requirement. For example, one case study uses contracting strategies that are designed to limit the government’s commitment and spread the amount of budget authority needed over a period of years. Under certain conditions and for certain types of capital acquisitions, revolving funds and investment-type accounts can serve to manage the spikes in resource needs that are created for an agency by up-front funding. Case studies have also pursued strategies intended to reduce their need to own capital assets and to lower their overall cost of operations so that capital spending may be more easily accommodated. Yet some case studies, unable to meet long-term capital needs with current resources, use financing methods, such as operating leases, that are better suited for meeting short-term needs and that can lead to higher long-term cost. Finally, officials of some case studies believe that additional tools would be useful, such as borrowing authority and partnerships with the private sector. While these proposed tools would enhance managerial flexibility, they must be considered in light of their impact on congressional control. The Coast Guard requests funding for separate stand-alone stages of large capital projects. In contrast to incremental funding, budgeting for stand-alone stages helps ensure that a single appropriation will yield a functional asset while limiting the amount of budget authority needed. For example, the Coast Guard may structure its vessel and other equipment contracts to acquire portions of such projects that are economically or programmatically useful even if the entire project is not completed as planned. 
In acquiring a class of ships, the Coast Guard may write a contract for a lead ship and spare parts with options to buy additional ships in future years. By structuring its acquisitions in this way, the Coast Guard can request full funding for each useful piece of the project as the project progresses, rather than requesting funds for the entire project up-front. This strategy reduces the budget authority needed by the Coast Guard to initiate the project and is consistent with full funding because the Coast Guard receives a useful asset from each funded option, though the full value of the asset may not be realized until the entire project is completed. The Coast Guard’s experience indicates that structuring a capital acquisition into fully-funded, stand-alone stages has several advantages to agencies and the Congress. First, it allows agencies to spread the amount of budget authority needed to complete a large capital acquisition over multiple years. For the agency and for the Congress, this can enable more projects to be underway concurrently. A second advantage is that the Congress can exercise more frequent oversight over the progress of the total capital project. As each usable portion of the total project is completed, the Congress has an opportunity to review progress, re-evaluate needs, and decide whether to provide funding for the next segment. Third, budgeting for stand-alone stages of a project gives the Congress greater funding flexibility to respond to changing needs or national priorities. If changing circumstances dictate that other needs are of a higher priority, the Congress can discontinue the project at an appropriate juncture, shift funds to the new need, and still benefit from the funds already spent on the stand-alone stages. Agency managers, of course, would prefer to receive funding for the entire project at the outset since that would reduce uncertainty, make project management easier, and possibly lower the cost contractors charge. 
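The staged structure described above can be illustrated with hypothetical numbers (the ship counts and costs below are invented for the sketch): because each stage is fully funded and independently usable, the peak single-year request is well below the project total, and stopping after any stage still leaves usable assets.

```python
# Hypothetical acquisition of a class of four ships structured as
# fully funded, stand-alone stages: a lead ship with spare parts,
# followed by annual options for additional ships ($ millions).
stages = [
    ("Lead ship and spare parts", 400),
    ("Option: second ship", 250),
    ("Option: third ship", 250),
    ("Option: fourth ship", 250),
]

total_cost = sum(cost for _, cost in stages)
peak_annual_request = max(cost for _, cost in stages)
print(f"Project total: ${total_cost}M; "
      f"largest single-year request: ${peak_annual_request}M")

# If priorities change, the Congress can simply decline to fund the
# next option; funds already spent bought usable ships rather than a
# partial project of no economic value.
spent_if_stopped_after_two_stages = sum(cost for _, cost in stages[:2])
print(f"Spent if stopped after two stages: "
      f"${spent_if_stopped_after_two_stages}M (two usable ships)")
```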
However, it is appropriate from an overall federal budgeting perspective for projects spanning multiple years and requiring significant resources to be re-evaluated as they progress, with the Congress maintaining the option to end the project. Decisions to terminate or slow down projects reflect current budget priorities given available resources. If projects have been funded in stand-alone stages, such decisions can be made without the concern that past spending has been wasted. On the other hand, even though the assets are usable, their net effectiveness may be compromised if the succeeding parts of the project are not completed as well. Four case studies used revolving funds to finance capital assets and manage the spikes in resource needs that can occur with up-front funding. Their experiences indicate that revolving funds can be effective for agencies with relatively small, ongoing capital needs because the funds, through user charges, spread the cost of capital over time in order to build reserves for acquiring new or replacement assets. In addition, revolving funds help to ensure that capital costs are allocated to programs that use capital. However, revolving funds do not always work as intended. For example, while revolving funds are intended to be self-financing, PBS’ revolving fund has faced several structural constraints that have limited its ability to satisfy customer needs with the fund’s rental income. Case studies’ experiences led us to conclude that revolving funds will be most effective when they possess certain characteristics—sound financial management, identifiable customers to charge, the ability to recoup replacement cost, appropriations to fund major expansions to the asset base, and the ability to retain proceeds from the sale of assets when expected to maintain the same size asset base. 
In addition, to ensure opportunities for oversight and control, revolving funds also need to have capital plans, including expected benefits from the acquisition against which actual benefits may be judged. Equally important, for revolving funds that acquire large-scale and heterogeneous assets, the Congress and OMB must be able to annually review whether proposed acquisitions are those most needed and whether the overall level of capital spending by the agency is appropriate given other competing capital and operating needs across the government. Case study organizations showed that revolving funds are neither a new nor rare tool in budgeting for capital assets. Case studies also demonstrated that revolving funds can be used in a variety of circumstances. At some case studies, the revolving funds primarily provide assets to external customers, while at others, the assets are used primarily to support internal operations. However, regardless of the particular types of assets or the customers to whom the services are provided, revolving funds relied on charges to users to fund ongoing maintenance and replacement of capital assets. The Corps of Engineers has used a revolving fund since fiscal year 1954 to finance equipment and facilities shared by multiple Corps civil works projects and programs. The original cost of the equipment is charged as a depreciation cost to the projects or programs that use it. In addition, user charges are set to recover expected increases in the asset’s price. By including depreciation and inflation in its charges to users, the revolving fund ensures that resources are available to buy new equipment when necessary. The Congress established USGS’ working capital fund (WCF) in fiscal year 1991 to finance replacement of the agency’s mainframe computer, telecommunications equipment, and related automated data processing (ADP) equipment. 
The WCF grew out of USGS’ experience in having to finance a telecommunications upgrade and mainframe computer from annual appropriations. USGS recognized that it needed a way to plan for the augmentation or replacement of these acquisitions in the future if it was to reduce the one-time impact on operating units. Through the WCF, charges to users will help fund the replacement of these assets. The IFMS uses a revolving fund to finance operations of its fleet of vehicles. Since 1982, IFMS charges to client agencies have enabled it to recover depreciation, operational costs, and an inflation increment. The revolving fund accumulates reserves during the year so that portions of the fleet can be replaced as needed; proceeds from the sale of old vehicles are also applied toward new purchases. The revolving fund is intended to be self-sustaining and IFMS tries to ensure that its user charges are competitive with those of private-sector car rental providers. GSA’s Information Technology Fund (ITF) was initially established in 1987 and currently funds, on a reimbursable basis, federal local and long-distance telecommunications services and ADP technical services. Fees charged to client agencies recover the full cost of services plus contributions to a capital reserve fund. The capital reserve fund finances replacement of ITF fixed assets—primarily PBX and telephone switches used for local phone service. The ITF also uses its capital reserve fund to finance extraordinary operating expenses related to long-distance service and to finance pilot projects. The Federal Buildings Fund (FBF) began operations in 1975 and is the largest of the revolving funds at GSA. PBS, which manages the FBF, charges client agencies rent for buildings it provides for their use. Like other revolving funds, the FBF is intended to be self-financing. The charges to users are intended to cover all costs of operations and replacement and a limited amount of new construction. 
In practice, the FBF has been faced with customer demands for new space that exceed collections. As a result, PBS has sought appropriations to supplement the Fund’s income. PBS officials cited a number of structural constraints placed on the FBF, such as congressional restraints on the generation and use of FBF income that have prevented it from operating like a true revolving fund. Nevertheless, they believe that the FBF has been a more effective method of financing the maintenance and replacement of assets than was the former process of funding through appropriations alone. In addition to the benefits they provide in smoothing spikes that can result from up-front funding, revolving funds can also help agencies and the Congress better monitor program costs by promoting full cost accounting. Although full funding up-front leads to recognition of the full cost of commitments in the year made, when agencies finance capital through appropriations, the annual capital cost incurred in carrying out a specific program is not apparent in that program’s budget. Revolving funds can ensure through their user charges that the full cost of programs—including capital usage—is borne on an annual basis by those responsible for the program rather than passed on to future users. At an agency level, revolving funds incorporate traditional capital budgeting concepts and can result in charging users for capital consumption without violating up-front funding principles for the federal government as a whole. As GPRA is implemented, full costing will take on even greater importance as managers will need to assess whether their programs are achieving goals in a cost-effective manner. When the budget does not clearly identify all costs associated with a program, including capital usage, agencies and the Congress cannot make fully informed trade-offs among programs because some programs appear cheaper than they are. 
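A user charge of the kind the Corps and IFMS revolving funds levy (recovering depreciation plus an increment for expected price increases, so that reserves accumulate toward replacement cost) can be sketched as follows; the asset cost, useful life, and inflation rate are hypothetical values chosen for illustration.

```python
# Hypothetical asset financed through a revolving fund.
original_cost = 100_000   # purchase price, dollars
useful_life = 10          # years
inflation = 0.03          # expected annual increase in the asset's price

# The fund charges users enough to replace the asset at its expected
# (inflated) future price, not merely to recover its original cost.
replacement_cost = original_cost * (1 + inflation) ** useful_life
annual_charge = replacement_cost / useful_life

# The charge decomposes into straight-line depreciation on the
# original cost plus an inflation increment.
basic_depreciation = original_cost / useful_life
inflation_increment = annual_charge - basic_depreciation

reserves = annual_charge * useful_life
print(f"Annual charge: ${annual_charge:,.2f} "
      f"(depreciation ${basic_depreciation:,.2f} "
      f"+ increment ${inflation_increment:,.2f})")
print(f"Reserves after {useful_life} years: ${reserves:,.2f} "
      f"vs. replacement cost ${replacement_cost:,.2f}")
```

Because the charge is levied annually on the programs that use the asset, the full cost of capital usage appears in each program's operating budget, the full-costing benefit described above, while the government as a whole still funded the original acquisition up front.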
Costs tied directly to capital usage also provide an incentive for agency managers to use capital more efficiently. In some cases this may lead them to reconsider whether they need the same quantity or type of fixed assets as previously thought. For example, as rent charges for work space become a greater burden for agencies (because of stagnant or declining annual budgets), it is reasonable to expect that more agencies will become concerned about their use of space and the resources it diverts from other purposes. Establishing economic incentives for agency managers to make their own trade-offs between capital and operations based on full costs is likely to lead to more efficient decisions about appropriate levels of capital assets. Officials at IFMS and PBS expressed concern over financing constraints and/or underfunded responsibilities that could impede their revolving funds’ ability to operate efficiently. The FBF in particular, has traditionally faced constraints on its ability to generate income. The FBF has also been faced recently with responsibilities that were not anticipated at the Fund’s inception. The IFMS’ full-cost recovery pricing system has covered the costs of maintaining and replacing its fleet, but IFMS officials believe additional new requirements on IFMS may make cost recovery and remaining competitive more difficult in the future. The Energy Policy Act of 1992 requires that by fiscal year 1999, alternatively fueled vehicles must comprise at least 75 percent of the total number of new vehicles acquired by a federal fleet. Although law requires DOE to fund the incremental acquisition costs of alternatively fueled vehicles over their conventionally fueled counterparts, DOE officials indicated to IFMS that DOE had only a portion of the incremental funding needed for fiscal year 1996. Depending on the number of vehicles converted, IFMS officials thought that the remaining cost in fiscal year 1996 could be absorbed through operational efficiencies. 
However, the fund may not be able to accommodate future costs if advances from DOE continue to decline or cease altogether. Although a revolving fund should fully recover its costs through user charges if it is to be self-sustaining, this has not been the case with the FBF. The imbalance between the FBF’s costs and its income lies in part in the inherent structure of the Fund. FBF rent charges to agencies are not necessarily sufficient to cover full costs because they are not based on the actual costs to PBS. In some cases, PBS’ repair and maintenance costs are higher than the average for office buildings because it must maintain some of its office buildings as heritage assets. Since FBF charges agencies for their use of owned and leased space based on market appraisals made every 5 years, actual costs to maintain the space and FBF payments to the private-sector lessor may vary from the rental income FBF collects. Adding to these constraints on PBS’ cost recovery have been caps on rent. During the 1980s, the Congress believed some PBS rental charges were too high and imposed caps on the rents of some agencies. Although only three agencies currently have rent caps, PBS estimated that the caps have caused substantial income losses over the years. Financing office space to satisfy customer needs may also be more difficult because the FBF is not authorized to retain the proceeds from the disposal of property. When PBS property is sold, all disposal proceeds are required by law to be deposited into a land and water conservation fund. The other revolving funds operated by our case studies can retain disposal proceeds and have fewer restrictions on the disposal of assets. For the Corps of Engineers, the disposal proceeds are only a minor source of funding, but for IFMS they represent a substantial portion of operating income. Constraints on income have been exacerbated by demands to expand PBS’ asset base. 
During the 1980s, demands for courthouse construction began to rise significantly. Although PBS responded to early courthouse construction demands by deferring maintenance on other assets, PBS sought and received appropriations for courthouse construction in fiscal year 1991 to supplement the Fund’s rental collections. The FBF has since continued to receive appropriations for construction of courthouses, border stations, and office space. However, PBS estimates that the present level of appropriations funds about half of construction costs. The remainder of the costs is primarily covered by FBF rental collections, which are also used for funding repairs and modernization of the existing assets. Despite their benefits in smoothing out spikes in resource needs, revolving funds are not necessarily appropriate for all agencies or in all circumstances. Our review of the case studies’ revolving funds, as well as previous analysis of specific revolving funds, has led us to draw some conclusions about the characteristics needed for successful revolving funds. First, agencies using a revolving fund need to have demonstrated a sound record of financial management. Financing capital through a revolving fund can entail a lesser degree of congressional control than direct appropriations. Not all agencies may have demonstrated sufficient stewardship of government resources to warrant a reduction in congressional oversight. Good financial management can be even more important if revolving funds rely on charges to other agencies for income and are not subject to competition because, under such circumstances, revolving fund managers may have less incentive to control costs. Sound internal controls and oversight by management are needed to ensure that revolving fund efficiencies are not neglected simply because costs can be passed on to users. 
When external competition that can provide an incentive for cost-consciousness is absent and when fund acquisitions are expensive, revolving funds may need a greater degree of congressional oversight. Second, for a revolving fund to be effective, the agency must be able to identify clearly the appropriate customers to charge and the actual capital cost that each customer incurs. If this is not possible, a revolving fund is probably not practical. For example, officials at the Coast Guard indicated that because of their organizational structure and overlapping missions it would be impractical for them to use a revolving fund. They explained that many Coast Guard assets are used by units in carrying out multiple activities—such as defense operations and law enforcement—so that it is potentially more difficult to assign cost to a specific mission or activity. They also stated that it would be inappropriate to charge some users of capital. Since mission responsibilities often involve carrying out search and rescue, law enforcement, and maritime environmental protection activities, fees attached to those activities could create perverse incentives. Coast Guard officials want to encourage units to use the most appropriate assets for carrying out their missions and not to be inappropriately influenced by cost considerations in what is often an emergency situation. Third, to be successful in the long term, revolving fund managers must know their full costs and have the authority to charge fees that recover the cost of operating and replacing assets. Without replacement cost pricing, the resources of the fund would eventually be depleted by inflation. In addition, the accounting system of the agency must be able to track costs accurately. Not all agencies have adequate systems that allow them to fully allocate the costs associated with running a particular program or activity. 
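The effect of replacement cost pricing can be illustrated with a short sketch. All figures below are hypothetical, and the constant inflation rate is an assumption made only for illustration: a fund that sets fees to recover just an asset's historical cost accumulates less than the asset's replacement cost as prices rise.

```python
# Hypothetical illustration: fees based on historical cost vs. replacement cost.
# All figures are invented for illustration; no actual fund data is used.

def accumulated_fees(annual_fee, years):
    """Total collected over the asset's service life at a flat annual fee."""
    return annual_fee * years

def replacement_cost(historical_cost, inflation_rate, years):
    """Cost to replace the asset after `years` of price inflation."""
    return historical_cost * (1 + inflation_rate) ** years

HISTORICAL_COST = 100_000   # purchase price of the asset
SERVICE_LIFE = 10           # years before replacement
INFLATION = 0.03            # assumed 3 percent annual inflation

# Historical-cost pricing: recover exactly the original purchase price.
fee_historical = HISTORICAL_COST / SERVICE_LIFE
shortfall = (replacement_cost(HISTORICAL_COST, INFLATION, SERVICE_LIFE)
             - accumulated_fees(fee_historical, SERVICE_LIFE))

# Replacement-cost pricing: recover the projected replacement price instead.
fee_replacement = replacement_cost(HISTORICAL_COST, INFLATION, SERVICE_LIFE) / SERVICE_LIFE

print(f"Shortfall under historical-cost pricing: ${shortfall:,.0f}")
print(f"Annual fee needed for replacement-cost pricing: ${fee_replacement:,.0f}")
```

Under these assumed figures, historical-cost fees leave the fund roughly a third short of the replacement price, which is the depletion-by-inflation effect described above.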
Fourth, to be self-sustaining, the revolving fund should be adequately funded initially and should receive additional resources when significant increases in its asset base are immediately required. If fees are established to meet a specific level of capital need, and that level increases, then additional resources must be made available for the fund to remain self-sustaining. The additional resources could come from operational savings, higher fees to users, or an external injection of funding (i.e., an appropriation). For example, IFMS must expand its service level to include more expensive, alternatively fueled vehicles but is hesitant to either delay vehicle replacements or raise rates and risk losing customers. IFMS officials believe that some of the cost can be funded through operating efficiencies but that additional funds will be necessary if the requirement cannot be modified. Likewise, if PBS must increase the size of its inventory to meet customer demand and past collections have not been designed to fund expansion, then appropriations may need to be considered. Existing reserves may be able to fund expansions of the asset base or service level in the short term, but using these reserves would ultimately deprive existing users of the repair and replacement of their own assets. Also, while providing a funding source for asset base expansion, increasing the fees charged to current users may make them pay more than the costs they are responsible for incurring, thus distorting the costs shown in the users’ budgets. Conversely, if demand for the revolving fund’s capital assets declines, resources could be taken out of the revolving fund to be used for other purposes across the government. This is especially the case for a revolving fund that purchases relatively large-scale and heterogeneous assets. 
Fifth, if they are to provide a constant level of service, revolving funds typically need to have the flexibility to retain or dispose of assets based on their economic value and be able to reinvest the proceeds in the fund. If a revolving fund is to operate in a business-like fashion, its managers must be able to determine when it is more efficient to invest in new assets than to retain and operate existing assets. If revolving funds tasked with providing constant levels of services are not able to dispose of under-performing or unnecessary assets and retain the proceeds, capital allocation decisions may be distorted. For example, PBS officials cited an experience where their financial analysis indicated that they should sell a building and use the proceeds to acquire alternative space. Although they could still use the building, in the long-term it would have been a more efficient use of resources to purchase new space. However, because PBS is prevented by law from keeping the sale proceeds, PBS retained the building. Finally, revolving funds, like other funding mechanisms, must operate within an environment of controls if the Congress and OMB are to ensure that resources are well spent and that capital acquisitions reflect the government’s highest priorities. Because revolving fund purchases need not be reviewed by the Congress or OMB, traditional revolving funds may not be appropriate when competition for the fund’s services is lacking and when purchases are relatively large-scale, sporadic, or heterogeneous. Under these conditions, a greater degree of oversight is warranted to ensure that the resources accumulated in the fund are used where most needed governmentwide. Such assets might include buildings and courthouses acquired through the FBF. 
In contrast, revolving funds that compete with private-sector service providers and that make relatively routine purchases of small-scale, homogeneous assets such as vehicles, may warrant relatively high degrees of autonomy because the external factor of competition forces revolving fund managers to control their costs and effectively allocate resources. Another mechanism being used to ameliorate agency problems with up-front funding requirements is USGS’ creation of an investment component within its working capital fund (WCF). The investment component is designed to encourage USGS managers to do better long-range planning for equipment purchases and to enable them to accumulate over time the resources they need to fund capital up-front. In this sense, the WCF investment component operates much like a savings account for a manager at any level to fund capital acquisition. In contrast to a more traditional revolving fund, users of the investment component make voluntary contributions for prospective capital purchases, rather than being charged retrospectively for capital usage. The investment component is a capital financing mechanism that could be useful for other agencies as well. However, expanded use must be accompanied by adequate controls on agency and governmentwide investment component spending to ensure that funds are used as intended and to prevent increases in the deficit. USGS received authority from the Congress to expand its investment component within its WCF to assist in funding laboratory operations, facilities improvements, and replacement of scientific equipment beginning in fiscal year 1995. The investment component was proposed by USGS in response to difficulties experienced in obtaining appropriations for increasingly costly equipment. Over time, USGS had found that an increasing proportion of its annual appropriation was dedicated to fixed operating expenses, such as salaries and rent, with little left for funding long-term capital purchases. 
Furthermore, since USGS’ appropriation was entirely one-year money—expiring at the end of the fiscal year—the agency was not able to accumulate unobligated balances over a number of years to use for occasional, expensive purchases. To use the investment component, USGS managers at any level within the organization develop and submit an investment plan, which must be approved by a delegated authority within the respective division or the agency as a whole. The investment plan specifies the asset to be acquired, the estimated acquisition or replacement cost, the number of years required to fund the acquisition, and the schedule of deposits into the fund (annually, quarterly, or monthly, for example). After the investment plan is approved, the division periodically obligates the planned contribution amount from its annual appropriations and pays it to the investment component of the WCF, where it remains available for obligation. Once in the investment component, the contributions can be saved until a sufficient sum—as specified in the investment plan—has been accumulated to purchase the planned asset. The USGS has imposed internal restrictions on the fund to prevent abuse of the authority. For example, the contributions must be made for at least 2 years prior to the purchase and may not be used for the construction of buildings. Once the plan is approved, contributions to the investment component are held for the specified purpose without fiscal year expiration. Although it has little history thus far, the WCF investment component conceptually is a unique and useful way for individual agencies to plan for and finance capital assets. None of the officials we talked with at USGS, OMB, or the House of Representatives Appropriations Committee, Subcommittee on the Interior, were aware of any other federal organizations using a similar financing mechanism. 
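The plan mechanics described above can be sketched as a simple savings schedule. The asset name, amounts, and schedule below are hypothetical and are not drawn from USGS systems; the sketch shows only the core idea that level deposits accumulate without fiscal year expiration until the purchase target is met.

```python
# Hypothetical sketch of an investment-component plan: periodic deposits
# accumulate, without fiscal year expiration, toward a planned purchase.
# All names and amounts are illustrative only.

from dataclasses import dataclass

@dataclass
class InvestmentPlan:
    asset: str
    target_cost: float      # estimated acquisition or replacement cost
    years: int              # number of years of contributions (USGS requires at least 2)
    deposits_per_year: int  # e.g., 1 = annual, 4 = quarterly, 12 = monthly

    def deposit_amount(self):
        """Level deposit that reaches the target over the plan period."""
        return self.target_cost / (self.years * self.deposits_per_year)

    def balance_after(self, deposits_made):
        """Accumulated balance after a given number of deposits (capped at the target)."""
        total_deposits = self.years * self.deposits_per_year
        return min(deposits_made, total_deposits) * self.deposit_amount()

plan = InvestmentPlan(asset="laboratory instrument", target_cost=240_000,
                      years=3, deposits_per_year=4)

print(f"Quarterly deposit: ${plan.deposit_amount():,.0f}")      # 240,000 / 12 deposits
print(f"Balance after year 2: ${plan.balance_after(8):,.0f}")   # 8 quarterly deposits
```

The balance carries across fiscal years, which is the feature a one-year appropriation lacks; once the balance equals the target in the approved plan, the purchase can be obligated from the fund.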
Nevertheless, the investment component has several benefits and may be a useful tool for other agencies, especially those with annually expiring funds. First, it encourages agencies to use long-range planning to alleviate the effects of funding capital up-front. Managers must anticipate their future capital needs and submit a plan that indicates specifically how they expect to fund them. An investment plan requires the agency to justify spending in advance of receiving the appropriations that will fund contributions. Second, it gives agencies an incentive to make their own trade-offs between operations and capital and to strive for savings in operations. The investment component achieves this by permitting agencies to set aside annual resources for future capital purchases. While agencies may have some incentive to look for savings in operations even without an investment component, the mechanism provides an impetus to make cuts in operations that may not exist otherwise. Third, the investment component helps agencies fund their highest priority asset needs. When agencies do not have sufficient annual resources to make a particular capital purchase, they may be inclined to devote the resources to acquiring other—possibly less critical but less expensive—capital assets rather than see the funds expire at the close of the fiscal year. Finally, the investment component would help make program and operating budgets better reflect the cost of capital usage. The investment component will not be as efficient or accurate at allocating capital costs as a revolving fund because it lacks the direct linkage between capital use and charges. However, because contributions are made from the operating budget, the mechanism does facilitate a more systematic incorporation of capital costs into program expenses. 
Despite the potential benefits from investment components, problems could arise if investment accounts were widely used throughout government without adequate controls. For example, if several agencies obtain investment components and each decides to make large purchases in the same year, total outlays could rise sharply and cause a spike in the deficit. Therefore, OMB would need to manage all investment components to ensure that total investment component outlays do not cause such spikes, even though this may result in deviations from the schedule specified in an agency’s original investment plan. Furthermore, if the Congress permits agencies to use such investment components, it is giving them relatively more control than they currently possess over the use of their appropriations. Investment component control issues are similar to those of revolving funds (discussed previously in this chapter); thus the Congress would need to have similar confidence in the financial management abilities of agency officials before permitting the establishment of an investment component. Once a component is established, managers should prepare investment plans and be held accountable to them to ensure that investment component funds are used as intended. The investment component concept is premised on program managers being able to plan for fixed asset acquisitions by accumulating funds over a period of years and applying them toward a future capital need. USGS officials felt that potential congressional actions to reallocate these funds, such as rescissions and reductions in future appropriations, would create significant disincentives for managers to contribute. Likewise, these officials felt that program managers would be less likely to contribute if top-level management used contributions for purposes other than those in the investment plan. Though a promising tool, the investment component has limitations. 
Agencies already faced with tight operating budgets may have little to contribute to such an account without making difficult trade-offs with operations, potentially including personnel cuts. Although increasing numbers of agencies have been confronted with downsizing in recent years, some appropriations subcommittee staff still question the willingness of agencies to voluntarily trade off personnel for capital assets. Furthermore, capital assets must still be budgeted for in advance of any savings they may generate. Capital acquisitions that could “pay for themselves” over time still could not be funded without the agency first carving out funds from elsewhere to pay for them. In an era in which agencies are already faced with budgets that require significant cuts in operations, it is unknown how much willingness may exist among agency heads to exact even deeper cuts in order to fund capital. Another way that the case studies have dealt with the up-front funding requirement is to take actions that reduce their need to own fixed assets. Two such strategies are contracting out for goods and services and entering cooperative arrangements to share assets. For example, officials from the Corps of Engineers indicated that some functions for which they formerly acquired capital assets—such as producing crushed aggregate—can now be performed by the commercial market at less expense. It is likely that in other agencies as well, government managers have found that increasing specialization among contractors enables agencies to acquire some capital-intensive services more cheaply externally than they can be performed in-house. Contracting out can be useful and cost-effective when asset needs are short-term and nonrecurring. However, agencies still incur expenses to monitor contractor performance, and contracting out can be misused to bypass budget scoring rules for purchases. 
When the latter occurs, the long-term cost of contracting out can be higher than that of directly purchasing the asset. Where practical, USGS has entered into long-term cooperative arrangements with universities and states to share the purchase and use of capital assets that are not needed full-time. Under such arrangements, USGS uses the equipment as needed without bearing the full costs of ownership. Although this arrangement has little fiscal drawback, USGS officials did indicate that some federal requirements for physical tracking of the property are harder to comply with when the assets do not reside at USGS facilities. Purchasing is only one of several ways in which agencies may acquire capital assets. Agencies may also use various forms of leases to meet asset needs. The three primary types of leases are operating leases, capital leases, and lease-purchases. Each represents a different degree of risk and financial commitment borne by the government, and budget scoring rules are designed to reflect these differences. Operating leases offer agencies the greatest flexibility with the least risk and financial commitment. For short-term needs, operating leases can be the most cost-effective means of acquiring capital assets. However, because of resource constraints and more favorable budget scoring rules, some agencies have substituted operating leases for more cost-effective means of meeting long-term needs. A refinement in the definition of operating leases may be needed to ensure consistent application of the up-front funding requirement and better comparisons of financing options. Analyses have shown that ownership of capital assets is generally the most cost-effective method for meeting long-term capital needs. However, differences in budget scoring can sometimes affect an agency’s selection of an acquisition method. 
Budget authority and outlays for purchases, and for lease-purchases in which the government assumes substantially all risk, must be scored up-front, regardless of when the actual outlays occur. Budget authority for capital leases is scored up-front, with outlays scored over the lease period. These scoring conventions were adopted to recognize the full extent of the government’s commitment and to facilitate comparisons of the long-term cost of the various financing methods. Operating leases, in contrast, are intended primarily to meet short-term capital needs. Budget authority and outlays for operating leases are scored over the lease period in an amount equal to the annual lease payments. Because of these budget scoring conventions, however, a long-term operating lease will require considerably less budget authority during the initial years than would a capital lease or a lease-purchase of the same duration. This difference in up-front cost, coupled with resource constraints, has led some agencies to use operating leases to meet long-term needs—even though the long-term cost of such leases is projected to be higher. Officials at PBS indicated that their organization has frequently used operating leases to acquire office space when budget resources were inadequate for purchases. PBS officials have been faced with customer demands for long-term office space that exceed what PBS can purchase with its available budget resources. As a result, PBS has entered into operating leases in order to meet agency demands for space. Although such leases could be used as an interim measure until a purchase is possible, in many cases the leases have become a more expensive, long-term solution to agency space needs. IFMS officials indicated that they have also used operating leases in lieu of purchases when budget resources were insufficient. 
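The scoring conventions described above can be sketched with simplified year-by-year budget authority profiles. The asset cost, term, and level payment below are hypothetical (the sketch ignores interest and other scorekeeping detail); it shows only why an operating lease requires far less budget authority in year 1 than the other methods.

```python
# Hypothetical sketch of the scoring conventions described in the text, for a
# $10 million asset needed for 10 years. Figures are illustrative; actual
# scorekeeping rules contain additional detail (interest, imputed costs, etc.).

TOTAL_COST = 10.0                    # $ millions
YEARS = 10
ANNUAL_PAYMENT = TOTAL_COST / YEARS  # simplified level lease payment

def budget_authority_profile(method):
    """Year-by-year budget authority (BA) under each scoring convention."""
    if method in ("purchase", "lease-purchase"):
        # BA scored up-front in year 1, regardless of when outlays occur.
        return [TOTAL_COST] + [0.0] * (YEARS - 1)
    if method == "capital-lease":
        # BA up-front; outlays (not shown here) spread over the lease period.
        return [TOTAL_COST] + [0.0] * (YEARS - 1)
    if method == "operating-lease":
        # BA and outlays scored annually at the lease payment amount.
        return [ANNUAL_PAYMENT] * YEARS
    raise ValueError(method)

for method in ("purchase", "capital-lease", "operating-lease"):
    profile = budget_authority_profile(method)
    print(f"{method:16} year-1 BA: {profile[0]:5.1f}  total BA: {sum(profile):5.1f}")
```

Total budget authority over the term is the same in this simplified sketch; only its timing differs, which is precisely the feature that makes long-term operating leases attractive to a resource-constrained agency.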
IFMS’ takeover of the management of the Department of Defense’s (DOD) fleet of vehicles placed additional demands on the resources of the IFMS. IFMS determined that sufficient resources were not available to fund replacement of the DOD vehicles and so turned to operating leases as a means to acquire new, more cost-effective vehicles for DOD until funds could be accumulated in the revolving fund for purchases. IFMS officials believe that vehicle purchases would have been more cost-effective but that leases were needed to meet immediate customer needs when budget resources were not available. Some case studies did not consider operating leases to be a viable alternative to ownership because the assets they acquire tend to be somewhat specialized. To the extent that the commercial market for an asset is small, it is less likely that leasing will be feasible. For example, Coast Guard and USGS officials said that leasing ships and some scientific equipment, respectively, was not a viable option for meeting their capital needs. These officials generally indicated that purchases are the most cost-effective method of acquiring capital assets for their organizations. Operating leases can provide an important measure of flexibility to agencies to meet short-term capital needs without incurring the cost and long-term obligation of ownership. For federal office buildings, factors such as governmentwide downsizing, changing conditions in the real estate market, and uncertainty about agency missions all make operating leases a valuable tool for the federal government to manage its asset requirements in the face of uncertainty. PBS has maintained that part of its portfolio should be in the form of leased space in order to preserve a degree of flexibility to respond to changing needs. It is important that operating leases have a budgetary treatment that allows them to be available to meet genuine short-term needs. 
However, deficiencies in the current budget scoring rules have resulted in an over-reliance on operating leases and need to be rectified. Previously, we have noted that applying the principle of up-front full recognition of long-term costs to all options for satisfying long-term space needs—purchases, lease-purchases, or operating leases—is more likely to result in selecting the most cost-effective alternative than applying the current scoring rules. Operating leases were not intended to be used as a substitute for ownership. When operating leases are used to meet long-term needs, the total cost of the project decision—spread over many years as lease payments—is understated in the first year’s budget. When operating leases are used to avoid up-front budget scoring, the agency may be using a financing method that is more costly in the long run. Ideally, budget scoring should be neutral in its effect on decision-making. However, current scoring rules are driving some decisions to use operating leases. For space acquisition, neutrality would be better accomplished by recording in the budget the long-term cost of space regardless of the type of financing. If this were done, an agency’s decision about which financing option to use would be driven by what makes the most sense economically and programmatically, not by what scores most favorably in the budget. PBS officials have suggested that there would be less need to use more expensive operating leases if budget authority for lease-purchases were still scored over the term of the lease, as it was prior to BEA. However, the change in scoring for lease-purchases was necessary to recognize the full commitment of the government and to ensure compliance with the requirement of up-front funding. The budget now recognizes the higher cost typically associated with lease-purchases compared to direct purchases. 
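The principle of comparing options on long-term cost rather than first-year scoring can be illustrated with a present-value sketch. The purchase price, lease payment, term, and discount rate below are invented for illustration; the point is only that an option scoring little budget authority in year 1 can still cost more over its life.

```python
# Hypothetical present-value comparison of a purchase vs. a long-term
# operating lease. Amounts, rates, and terms are illustrative only.

def present_value(cash_flows, discount_rate):
    """Discount a list of year-end cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

DISCOUNT_RATE = 0.05
YEARS = 20

purchase = [12_000_000] + [0] * (YEARS - 1)  # full price paid in year 1
lease = [1_000_000] * YEARS                  # level annual lease payments

pv_purchase = present_value(purchase, DISCOUNT_RATE)
pv_lease = present_value(lease, DISCOUNT_RATE)

print(f"PV of purchase: ${pv_purchase:,.0f}")
print(f"PV of lease:    ${pv_lease:,.0f}")
# Although the lease scores only $1 million of budget authority in year 1,
# under these assumptions its present-value cost exceeds the purchase.
```

Recording the full long-term cost of both options up-front, as the text recommends, would surface this comparison in the budget rather than leaving it hidden behind the year-1 scoring difference.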
Officials at OMB stated that some operating leases currently in use for long-term needs are really more like capital leases because the buildings have been or will likely be leased for the bulk of the asset’s life. They indicated that such leases ought to have budget authority scored up-front. Although it may be difficult for policymakers to know for certain when a capital need will be long-term, some OMB officials believe that a tightening of the definition of an operating lease is warranted to ensure that the budget process leads to better economic decisions. Officials at some case study organizations indicated that they would be better able to meet their capital needs and the requirements of up-front funding if they had additional financing tools available. IFMS officials, for example, believe that authority to borrow from the Treasury against the value of the fleet would help them manage resources more efficiently. Similarly, PBS officials desired authority to borrow against future rental income to finance space acquisition. In contrast, legislation has been enacted that allows the Coast Guard to offer loan guarantees and to enter into limited partnerships with nongovernmental entities in order to finance construction of employee housing without bearing the full cost up-front. Officials at OMB and the appropriations subcommittee staffs expressed concern that allowing agencies to borrow against their assets would pose a threat to governmentwide fiscal control by permitting agencies to create budget authority without receiving appropriations. These officials had mixed opinions about the Coast Guard’s loan guarantee and limited partnership proposals and believe that further information would be needed to evaluate their soundness. We agree with their conclusions. 
An IFMS official stated that current budget rules do not lend themselves to the efficient financial management of business-oriented revolving funds, and that IFMS would like to manage its revolving fund on a “balance-sheet basis” instead. The official stated that limiting the revolving fund’s obligations to those that can be made with the unobligated balances of its budget authority constrains capital spending when balance sheet analysis would suggest that the fund possesses highly liquid resources that could be made available to fund capital acquisition. Managing on a balance sheet basis means that budgetary resources would be re-defined to include the book value of vehicles. Allowing IFMS to manage on a balance sheet basis would be comparable to giving it a line of credit or authority to borrow from the Treasury. This would enable IFMS to purchase vehicles when expanding the fleet, rather than using more costly operating leases. The IFMS official indicated that, in general, authority to borrow would enable them to hold lower cash balances and to manage the fleet in ways that more closely parallel those of private-sector rental car companies. PBS would also like to use borrowing authority to fund capital assets. One PBS official noted that although PBS is often compared with private-sector real estate providers, PBS lacks the financing tools the private sector uses to manage efficiently. For example, private-sector real estate companies can borrow against the value of their long-term leases, but PBS cannot. If PBS could borrow from the Treasury to finance a purchase, PBS officials believe that budgetary resources spent on operating leases could instead be used to repay the mortgage—and at less cost to the government in the long-run. PBS has found that lease-purchases can be more cost-effective in the long-term than operating leases and had used them prior to BEA to finance asset acquisition over time. 
Borrowing from the Treasury would enable PBS to do the same but at lower cost. Permitting agencies to borrow against the value of their assets is, in effect, allowing them to create budget authority, thus diminishing congressional control and oversight. Officials at OMB and appropriations committee staffs felt that such a practice would inhibit control of total federal expenditures and increase government borrowing. Officials also expressed concern about the consequences if an agency were unable to repay a loan from rental collections and were forced to sell agency assets to make repayments. While the sale of a vehicle raised less concern than the sale of a building, officials felt that regardless of the asset in question, the practice would be difficult to control. The Congress could also be forced into making an appropriation in order to compensate for the shortfall in income. With regard to PBS specifically, OMB examiners felt that the resources going into the FBF were adequate to meet PBS’ needs—given government downsizing and the moratorium on new office space construction. They indicated that if there are needs that cannot be met with the available resources—possibly courthouse construction—the agency should request an appropriation, and that request should compete with other budgetary options. If PBS’ request is not funded, it reflects the fact that OMB and the Congress have established higher priorities elsewhere. Borrowing authority should not be used to circumvent the appropriations process. While PBS, unlike the private sector, may not borrow against the value of its assets, it does receive financing through appropriations. An appropriation would be viewed as a gift in the private sector since it does not have to be repaid nor is it required to produce returns to investors. Recently enacted legislation gives the Coast Guard authority to enter into certain financial arrangements with private-sector developers. 
This authority, modeled after similar legislation enacted for DOD, provides a variety of tools for the Coast Guard to draw upon. These new tools include authority to enter into limited partnerships and to offer loan guarantees. Each of these could be used as an inducement for private developers to construct housing in remote locations. By underwriting the cost to the developer, Coast Guard officials believe that housing can be obtained for considerably less than if the Coast Guard were to build it directly. Under the equity partnership arrangement, the Coast Guard would pay up to one-third of the cost rather than the full cost of construction. An early DOD proposal implied that under this arrangement the developer would receive no rental guarantees but would recoup its investment through rent paid by employees and members of the general public who use the facilities. The government would also be repaid its investment through rental charges. Under the loan guarantee program, the Coast Guard would guarantee loans made to a developer if the proceeds are used to acquire or construct certain Coast Guard housing. Coast Guard officials believe that guarantees could be necessary because lenders perceive a risk that the Coast Guard will not remain in an area long enough for the developer’s loan to be repaid. Under both of these methods, Coast Guard officials believe they also save by having private developers provide the housing and by avoiding expenses that would be incurred in complying with construction regulations for federal projects. OMB analyzed the scoring implications of the original DOD proposal in May 1995. This analysis suggested that with equity partnerships, only the government’s equity investment would be scored up-front. It also suggested that only the subsidy cost of the loan guarantee program would be scored up-front.
However, more recent discussions with OMB officials have raised questions about whether such arrangements resemble capital leases, and therefore whether a different scoring would apply. An OMB official also suggested that because Coast Guard housing is often in more remote areas than DOD’s, the authority may be less suitable for the Coast Guard than it is for DOD. Where the Coast Guard is virtually the only user of the property, the arrangement more closely parallels a capital lease than an operating lease. This is because there is no private-sector market for the housing and the Coast Guard is providing financing mechanisms that presume it will occupy the housing for more than 75 percent of its economic life. Both of these are key features of a capital lease. It is clear that more detail would need to be available about any specific agreements before a definitive conclusion can be drawn about the appropriate scoring of these proposals or their economic value to the government. In addition to up-front funding requirements, our case studies identified other features of the budget process and of their accounts that impaired their ability to acquire capital. Uncertainty over future missions and funding levels, account features that affect trade-offs between operating and capital needs, and constraints on the use of proceeds from asset sales may be impediments from an agency’s perspective. However, the Congress needs flexibility to ensure that the government’s overall spending decisions reflect the nation’s current priorities. Our case studies illustrate that a variety of strategies are available to mitigate impediments for agencies without diminishing opportunities for congressional oversight or flexibility to change funding levels. The Congress and the administration must continually assert control over agency planning and funding decisions to ensure that the nation’s priorities are met.
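The capital-versus-operating distinction discussed above can be sketched as a simple screen. This is our illustration, not OMB’s actual scoring procedure, which weighs a fuller set of criteria; the two tests shown (no private-sector market for the asset and expected occupancy beyond 75 percent of its economic life) are the features cited in the discussion.

```python
def looks_like_capital_lease(occupancy_years: float,
                             economic_life_years: float,
                             private_market_exists: bool) -> bool:
    """Rough screen for the two capital-lease features cited in the text:
    the government expects to occupy the asset for more than 75 percent
    of its economic life, and no private-sector market exists for it.
    (Actual budget scoring applies additional criteria.)"""
    long_occupancy = occupancy_years > 0.75 * economic_life_years
    return long_occupancy and not private_market_exists

# Remote Coast Guard housing: sole user, expected to occupy 25 years of a
# 30-year economic life -- this screens as a capital lease.
print(looks_like_capital_lease(25, 30, private_market_exists=False))  # True
```

A lease failing either test would score as an operating lease under this simplified screen, which is why remote, single-user housing is harder to structure as an operating lease than housing in an active rental market.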
Changes in missions and funding uncertainty are inevitable and justifiable if the Congress is to respond to the nation’s priorities. However, such changes make planning and conducting cost-effective capital acquisitions more difficult for case study managers. Our case studies used mechanisms discussed previously, such as revolving funds and budgeting for stand-alone stages, as well as reprogramming authority, to respond to changes in their political and fiscal environments while preserving Congress’ ability to direct such changes and oversee agency responses. The Congress cannot guarantee steady annual funding streams (beyond that provided for stand-alone stages) if it is to be responsive to changing priorities and resource levels, but the prospect of mission or funding changes can increase the difficulty associated with planning and managing multiyear or risky capital purchases. For example, the Corps can successfully plan cost-effective construction projects only by assuming future funding levels. However, if planned funding fails to materialize, the Corps may have to deviate from these plans, and the project may become more expensive than estimated. Uncertainty over future responsibilities and funding can affect less expensive capital acquisitions with shorter completion times too. For instance, USGS officials speculated that managers may not feel comfortable committing to future WCF contributions for equipment purchases when they cannot predict how much of their future budgets these contributions will absorb. USGS officials also suggested that managers may be reluctant to contribute to the WCF if they believe the Secretary of the Interior might use contributions to meet other priorities. Funding delays or shortfalls can also affect agencies’ abilities to design effective and efficient fixed asset procurement. Although such delays may be warranted by the emergence of higher priorities, the cost of the postponed project is likely to increase. 
For example, the Coast Guard structures its acquisition strategies to assure contractors of minimum levels of production that will keep costs low. In their response to OMB Bulletin 94-08, Coast Guard officials wrote that funding that is insufficient to support acquisition strategies, as well as rescissions, can cause contractor shutdowns and make designs obsolete, adding to projected costs. For example, the response says that, when acquiring the HH-60 helicopter, the Coast Guard paid a premium of $1 million to $2 million per aircraft because funding was not provided to purchase a number of aircraft that would enable the contractor’s production line to operate efficiently. FDA officials said they have been reluctant to fund repairs and maintenance on some current work space because of the agency’s planned consolidation into fewer locations. They also stated that FDA will incur expensive repairs if the existing space continues to be used. As noted previously, revolving funds can provide a steady and secure stream of funding and encourage long-term planning for capital acquisitions while allowing opportunity for some congressional oversight. For example, by recovering depreciation and an inflation increment from users over an asset’s useful life, the Corps’ revolving fund helps ensure that funds will be available to replace the asset when needed and that program budgets absorb the cost of capital. Consequently, managers must plan what and when acquisitions will be made in order to maintain a self-sustaining revolving fund. However, the Corps’ appropriations subcommittees exercise oversight responsibilities by approving every revolving-fund, fixed-asset acquisition of $700,000 or more and implicitly approving all acquisitions through an annual target on revolving fund obligations for capital assets. When agencies experience changes in mission or funding needs, reprogramming can be used to move funds between projects.
Because funds are appropriated for specific purposes, the Congress wants to know when substantial deviations from the intended use of funds are made or when needs no longer exist. Therefore, the Congress may place limits on the amount of reprogramming that can be done without its prior approval. In certain situations, these limits may be necessary if the Congress is to provide effective oversight. Reprogramming can be an effective management tool if used as intended by the Congress. Reprogramming authority allows funds to flow to new priorities or can help complete projects when actual costs exceed original estimates. For example, the Corps revolving fund has used reprogramming authority to accommodate fluctuations between anticipated and actual bids of contractors on fixed asset acquisitions. Up to 10 percent of the funds within the Corps’ fixed asset categories can be diverted from one acquisition to another without prior approval by the Corps’ appropriations subcommittees. When reprogramming requires the subcommittees’ approval, informal relationships between Corps officials and congressional staff help the Corps receive a quick response to reprogramming requests. The Coast Guard has also taken advantage of reprogramming authority to respond to variances between estimated and actual costs for construction projects. Nevertheless, Coast Guard officials feel they are constrained in addressing some new and changing priorities because of limits on their reprogramming authority. (The Coast Guard needs its appropriations subcommittees’ approval to reprogram more than the lesser of $1 million or 15 percent of the total amount appropriated for a project and cannot reprogram between categories of appropriations in the Acquisition, Construction, and Improvements account (AC&I).) Officials at Coast Guard and NOAA expressed concern about the time involved in seeking reprogramming authority. 
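The Coast Guard approval threshold described above, the lesser of $1 million or 15 percent of a project’s total appropriation, can be expressed as a simple check. The dollar figures come from the text; the function itself is our illustration.

```python
def needs_subcommittee_approval(amount_to_reprogram: float,
                                project_appropriation: float) -> bool:
    """Return True if a Coast Guard reprogramming would require prior
    approval from the appropriations subcommittees.

    Approval is needed when the amount exceeds the lesser of $1 million
    or 15 percent of the project's total appropriation."""
    threshold = min(1_000_000, 0.15 * project_appropriation)
    return amount_to_reprogram > threshold

# For a $4 million project, 15 percent ($600,000) is the binding limit:
print(needs_subcommittee_approval(700_000, 4_000_000))   # True
# For a $20 million project, the $1 million cap binds instead:
print(needs_subcommittee_approval(900_000, 20_000_000))  # False
```

Note that the rule binds differently depending on project size: for projects under roughly $6.7 million the 15-percent test governs, while larger projects hit the $1 million cap first.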
Some of the time involved in reprogramming is due to obtaining approval within the agency, and it is unclear to what extent agencies inhibit use of reprogramming by designing cumbersome, internal procedures for requesting the authority. Agencies can also attempt to manage funding uncertainty by dividing multiyear capital projects into stand-alone stages that can be acquired and budgeted for separately. For example, Coast Guard acquisitions are sometimes structured as a base-year contract for a limited quantity of items with options to buy between a minimum and maximum quantity in future years. This structure permits the contractor to produce economically while acknowledging the inherent uncertainty of future funding levels. This acquisition strategy does not ensure that multiyear acquisitions will be completed as planned but attempts to balance agency desires for certainty with the Congress’ responsibility to allocate resources in a changing environment. With this strategy, the Congress indicates an initial agreement to the total purchase but still has the prerogative to fund less than the minimum quantity. Various features of an account—its congressional and executive review structures, its purpose, and the period for which its funds are available—can affect an agency’s ability to justify and make effective capital purchases. Each can influence how lawmakers view the trade-offs between types of capital spending or between capital and operations spending. Where certain account features seemed to discourage what case studies perceived to be prudent capital decisions, case studies sought other features, such as longer periods of funding availability and separate appropriations accounts for capital. Although certain account features may facilitate justifying or executing fixed asset purchases, case study officials stated that some types of asset purchases tend to be more difficult to support regardless of an account’s features. 
As a result, case studies have developed strategies unrelated to account features, such as more comprehensive budget justifications, to better explain capital needs. Congressional committee jurisdictions and executive organizational budget review structures have developed over time to fulfill a variety of needs and purposes. When the two differ, an agency can face different sets of competitors at each stage of review. For example, FDA faces two different sets of competitors in the budget process. OMB includes FDA’s budget within the spending cap applied to FDA’s parent agency, the Department of Health and Human Services (HHS), even though FDA is not funded by the same appropriations subcommittee as HHS. As a result, during the administration’s budget formulation, FDA competes against other HHS programs which are not reviewed by FDA’s appropriations subcommittee. The difference in executive and congressional review structures might result in a proposed capital project being eliminated under one set of competitors when it might have survived among another set. Some capital expenditures can be more difficult to justify when funded from an account whose primary purpose differs from that of the capital spending request, such as a salaries and expense account that funds mostly operational expenditures. Most capital spending across the government occurs from accounts whose primary purpose is to fund capital assets. However, where dual-purpose accounts exist, they can distort the cost of capital in the budget year relative to other expenditures or affect perceptions of the capital spending’s acceptability. Dual-purpose accounts can also result in operating expenditures obscuring capital needs in some instances. Capital projects funded in accounts composed largely of operating activities may seem more expensive than capital projects funded in other types of accounts in the budget year.
This occurs because, when scoring outlays, accounts that contain mostly salaries and operating expenses are assigned a first-year spend-out rate closer to 100 percent if capital expenditures have historically been a relatively small or sporadic component of the account’s spending. Conversely, accounts that have traditionally funded mostly capital expenditures receive a low, first-year spend-out rate that reflects the typical multiyear pattern of construction cash flows. For example, the Coast Guard’s Operating Expenses account has a first-year spend-out rate of 80 percent; the AC&I account has a first-year spend-out rate of 17 percent. When outlay constraints are tight and capital is a relatively small or nonrecurring expense, capital expenditures funded in operating accounts may yield higher first-year outlay estimates than capital expenditures in capital accounts and, therefore, may be less likely to be funded. On the other hand, the use of predominantly capital accounts with lower first-year spend-out rates can protect new construction when budgetary cuts are being made. A new $100 million construction project makes fewer outlays in the first year, and thus can produce fewer outlay savings in that year, than a $100 million operating account. Therefore, a much larger amount of new construction budget authority would have to be cut to achieve a given amount of outlay savings than if operating funds were cut. Accordingly, when outlay savings are needed, capital accounts may have an advantage over operating accounts. Spend-out rates may also potentially affect the trade-offs between different types of capital expenditure when they are funded out of the same accounts but outlay at different rates. For example, PBS funds all capital expenditures from the same account, but each type of expenditure has a different outlay rate.
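The first-year outlay arithmetic above can be made concrete with a short sketch. The 80 percent and 17 percent spend-out rates and the $100 million comparison come from the text; the functions themselves are our illustration.

```python
def first_year_outlays(budget_authority: float, spendout_rate: float) -> float:
    """Estimated first-year outlays implied by an account's spend-out rate."""
    return budget_authority * spendout_rate

def budget_authority_cut(outlay_savings: float, spendout_rate: float) -> float:
    """Budget authority that must be cut to yield a given first-year outlay saving."""
    return outlay_savings / spendout_rate

# $100 million spent from the Coast Guard's Operating Expenses account
# (80 percent rate) versus its AC&I capital account (17 percent rate):
print(first_year_outlays(100e6, 0.80))  # $80 million of outlays in year one
print(first_year_outlays(100e6, 0.17))  # $17 million of outlays in year one

# To produce $10 million in first-year outlay savings, far more capital
# budget authority must be cut than operating budget authority:
print(budget_authority_cut(10e6, 0.80))  # $12.5 million of operating BA
print(budget_authority_cut(10e6, 0.17))  # roughly $58.8 million of capital BA
```

The asymmetry in the last two lines is the report’s point: because capital accounts spend out slowly, cutting them yields little first-year outlay relief, which can shelter construction from cuts driven by outlay targets.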
Purchases of existing buildings have a 100 percent first-year outlay rate, repairs and alterations have a 20 percent rate, and new construction a 3 percent rate. The remaining outlays for repairs, alterations, and construction will be scored in subsequent years. While many factors, including future years’ outlays, affect how capital is acquired, outlay scoring would appear to make new construction considerably more attractive than buying an existing building. Though market conditions may make the purchase of existing buildings more economical than constructing new ones, the outlays of the former will be higher in the budget year. Likewise, repairs and alterations can initially appear more expensive than new construction. Extensive budget justifications showing the most effective use of capital are particularly important in such cases. The purpose of the account may also affect perceptions of the acceptability of capital expenditures. A congressional staff member explained that recent Treasury secretaries may have been reluctant to request funding to repair the Treasury building. The staff member opined that because such repairs would traditionally be funded from the Office of the Secretary’s discretionary budget account, the secretaries may have believed they would be criticized for increasing their office budgets. To make the purpose of the funding more readily apparent and to achieve a lower first-year spend-out rate for the repairs, the subcommittee created a separate account for Treasury repairs and maintenance in Treasury’s fiscal year 1996 appropriations act. Separate repairs and maintenance accounts were also created for the White House and the National Archives. Placing operating and capital expenses in a single account may help simplify oversight and can encourage agencies to take the initiative in making trade-offs between capital and operating expenditures. 
However, such dual-purpose accounts can hinder agencies’ capital requests when operating expenses are large enough to obscure capital needs. For example, USGS justifies the budget for its Surveys, Investigations, and Research account by program. Program line items generally represent USGS activities, such as water resources investigations, rather than the types of items USGS would like to fund, such as fixed assets. USGS officials believe this budget structure hides the increasing cost of scientific equipment by combining these expenditures with large program operating costs. Although combining capital and operating expenses in one account may hide some capital needs, agencies have other means to illuminate them. Budget justifications can be used to highlight capital needs and costs of alternatives if capital is not visible in the account structure. To help emphasize capital needs, USGS created a separate “digital mapping” modernization line item in its budget justification. In another instance, USGS explained to the Congress that leasing a mainframe computer would cost over 20 percent more than purchasing. Other case study officials feel separate capital accounts are needed to protect or raise the visibility of capital. The Coast Guard stated that its dedicated capital account has helped mitigate a crowding out of fixed asset acquisitions and has focused attention on capital. OMB proposed that PBS’ construction and acquisitions be placed in an account separate from the FBF to highlight the magnitude of these needs and to prevent them from crowding out repairs and alterations. However, a separate appropriations account for agency capital may inhibit collection and knowledge of the total costs of each of an agency’s programs. If capital appropriations are not charged back to managers’ budgets, capital may seem inexpensive and, thus, be used inefficiently. 
Segregating capital into a separate appropriation account may also discourage trade-offs between related capital and operating spending. However, such trade-offs can be promoted by the use of separate revolving funds for capital assets. Rather than relying on appropriations, revolving funds charge program managers for their use of capital assets, as discussed in chapter 3. Some agencies are able to justify acquisitions but may have difficulty executing them when funds expire before projects can be completed. Multiyear and no-year funding help agencies accommodate capital’s longer acquisition cycle. For example, Coast Guard and Corps construction projects generally need multiyear appropriations because their acquisition cycles can last several years. No-year funding is commonly provided through revolving funds. Through charges to users, revolving funds convert annual or multiyear appropriations into no-year funding that an agency can accumulate for large-scale acquisitions. All of our case studies had the opportunity to fund capital through multiyear appropriations or a revolving fund. However, even with multiyear funding, the period of availability may not always be appropriate. For example, the Congress and the Coast Guard have had difficulty agreeing on the period of fund availability that is long enough to complete the agency’s projects and short enough to discourage delays. The Congress has been fine-tuning the Coast Guard’s fund availability over the last several years. For fiscal year 1992, the Congress shortened the availability of shore, other equipment, and aircraft funds from 5 to 3 years to encourage quicker completion of projects. The House of Representatives Committee on Appropriations, Subcommittee on Transportation, reasoned that Coast Guard’s funding availability should be patterned after an agency that makes similar acquisitions, DOD, especially since DOD’s acquisitions are generally more complex. 
However, on some occasions in the past, Coast Guard officials have found it difficult to obligate funding for shore facilities within 3 years. Because shore projects are sometimes linked to vessel projects, which have 5-year availability, vessel design changes could delay the obligation of shore funds. If a vessel project were delayed too long, funding for completion of the related shore facility could expire. In cases where the timing of one project affects another, it is important for the affected agency to work with its appropriations subcommittee to ensure that funds are available during the period needed. In addition, agencies with one-year appropriations cannot annually set aside and accumulate funds needed to make expensive fixed asset acquisitions. Prior to creating a WCF investment component, USGS had to fund all capital acquisitions with annually expiring appropriations. USGS had no ability to spread the cost of an expensive purchase over a number of years by saving some funds each year. Without a significant increase in appropriations, only relatively inexpensive purchases could be made. The Congress can maintain control over no- and multiyear funding through a variety of means. For example, the Congress encourages timely completion of projects and exercises control over the Coast Guard’s multiyear appropriations by requiring quarterly reports of progress on major acquisitions and by sometimes limiting funding of projects to stand-alone stages. Recent legislation may also help the Congress oversee the use of no- and multiyear funding governmentwide. The Federal Acquisition Streamlining Act requires that executive agency heads (1) set cost, performance, and schedule goals for major acquisition programs, (2) monitor the programs to ensure they are achieving, on average, 90 percent of the established goals, and (3) take corrective actions, including termination, on programs that do not remain within the permitted tolerances.
FASA also requires OMB to report to the Congress on agencies’ progress in meeting these cost, schedule, and performance goals. Regardless of any account features that affect capital—congressional or executive review structures, purpose, or period of availability—case study officials felt capital expenditures with less visible benefits are inherently more difficult to justify. Explaining the costs and benefits of less tangible assets is difficult, and the Congress may have more difficulty understanding the explanation. The Coast Guard and NOAA indicated that needs for visible, safety-related assets are easier to articulate than needs for information technology or research projects. Congressional staff generally agreed but noted that agencies sometimes poorly explain the need for information technology. Congressional staff acknowledged that spending for assets with visible and tangible benefits, such as new construction, may be favored over less visible assets, such as major modernization or repairs. However, some staff also perceived agencies as being unwilling to cut personnel costs to free funds for capital in general. Agency problems in justifying assets with administrative or intangible benefits emphasize the importance of adequate budget support for all capital asset acquisitions. Such support should include risk and cost-benefit analyses of alternative acquisition methods and show scenarios of long-run spending under various operating and capital spending levels. Inherently risky or intangible assets may require the agency to provide additional documentation or presentations to their appropriations subcommittees. Agencies and the Congress tend to take different sides on the question of whether agencies should retain proceeds from the sale of their assets. Officials at our case studies feel the ability to keep proceeds can provide the incentive needed to dispose of properties that are no longer needed or costly to maintain. 
Therefore, they would like to reinvest disposal proceeds in maintenance or acquisition of new assets. Some in the Congress are concerned that agencies might use asset sales as a means of skirting the appropriations process. Despite these concerns, the Congress allows some agencies, especially those with revolving funds, to retain asset sale proceeds. Our case studies illustrate that allowing agencies to retain disposal proceeds may be warranted under limited circumstances. The Congress has selectively determined which organizations or funds can keep disposal proceeds. Some revolving funds, such as those of the Corps and IFMS, are permitted to retain asset sale proceeds; but some, such as that of PBS, are not. Where assets have been acquired through appropriations, such as at the Coast Guard, agencies have usually not been permitted to keep sales proceeds. Whether they have revolving funds or receive appropriations, our case studies cite the inability to retain disposal proceeds as an impediment to capital acquisition and a disincentive for asset disposal. PBS officials cite the inability to obtain and keep proceeds from the sale of GSA properties as one factor that keeps the FBF from being self-sufficient. Any proceeds from such asset sales must be deposited into the Land and Water Conservation Fund. PBS officials indicated that this can create a disincentive to dispose of less cost-effective properties. The other revolving funds operated by our case studies can retain disposal proceeds and have fewer restrictions on disposal of assets. Although the Corps considers disposal proceeds a minor source of funding, IFMS relies heavily upon proceeds from the sale of vehicles to sustain operations and keep rates competitive with the private sector. Similarly, Coast Guard officials were supportive of recently enacted legislation that allows the agency to keep proceeds from the sale of housing and reinvest them in maintenance or new housing.
Coast Guard officials say the agency’s employees have difficulty finding affordable, local housing to rent in remote or resort areas and, therefore, the Coast Guard often needs to construct housing for them. The Coast Guard would like to be able to enhance its ability to meet new construction and repair needs by disposing of less important or less cost-effective properties and investing the proceeds in higher priority areas. Currently, the Coast Guard generally cannot dispose of one property in order to invest in another unless specifically provided by law. When housing property has been disposed of, proceeds have been returned to the Treasury. Recently enacted legislation establishes a Housing Improvement Fund for the Coast Guard. Appropriations and proceeds from the sale or lease of Coast Guard property or facilities would be deposited into the fund. The Coast Guard would be authorized to use the fund for acquiring housing to the extent provided in appropriations acts. If the Coast Guard is expected to maintain a constant level of housing, this authority appears appropriate because the Congress retains control and oversight, and proceeds can be used to reduce future appropriations requests. The Congress permits most agencies with revolving funds to keep proceeds from the disposal of assets but generally does not allow agencies that finance capital from appropriations to retain disposal proceeds. This dichotomy occurs because revolving funds are established for the business-type activities of the federal government and must retain some business-like tools if they are to be self-sustaining. Prohibiting a revolving fund from retaining disposal proceeds may impede the fund’s ability to cover all of its costs and encourage fund managers to seek additional sources of financing, such as appropriations or increased user charges. 
In contrast, agencies that acquire capital with appropriated funds do not retain disposal proceeds under most circumstances because they are expected to request appropriations for regular maintenance and replacement of assets. Under some conditions, revolving funds may not need to retain proceeds from the sale of assets. If a fund no longer needs to replace some assets, because of agency downsizing, for example, the proceeds may be more appropriately returned to the Treasury to reduce federal borrowing or to fund other needs instead of being spent by the fund. If the proceeds are relatively large, the Congress may wish to weigh the needs of the fund with the needs of other activities that could benefit from additional funding. In July 1994, OMB began an effort to identify issues related to planning and budgeting for fixed assets. This effort was spurred, in part, by National Performance Review (NPR) recommendations aimed at improving fixed-asset planning and budgeting. OMB requested information regarding agencies’ fixed-asset needs and concerns and used that information to assess governmentwide and agency-specific planning and budgeting practices. Responses to OMB’s request, which varied in completeness, revealed that agencies were using a variety of practices to plan and budget for fixed assets. The responses also provided OMB with insights into issues of concern, such as up-front funding. Up-front funding became the focus of OMB’s follow-up effort in 1995. As a result, the President proposed, for fiscal year 1997, full funding for several new and ongoing capital projects that otherwise would have been incrementally funded. 
For the fiscal year 1998 budget, OMB is requiring that agencies request full up-front funding for all capital acquisitions and that agencies show how their capital plans relate to the goals and plans of three performance-related initiatives—GPRA, the Federal Acquisition Streamlining Act of 1994 (FASA), and the Information Technology Management Reform Act of 1996 (ITMRA). OMB issued Bulletin 94-08, “Planning and Budgeting for the Acquisition of Fixed Assets” in July 1994 as an initiative to improve the acquisition of fixed assets. The Bulletin emphasized the importance of effective fixed-asset acquisitions in an era of declining resources. Restructuring and downsizing pressures may tempt agencies to forego or neglect fixed-asset acquisitions; but, certain purchases, such as information technology, may be critical in enabling agencies to do more with less. OMB also acknowledged that certain aspects of the budget process may exacerbate these tendencies. Recognizing many of the financing issues raised by our case studies, the Bulletin suggested that one-year funding may not allow sufficient time to complete the acquisition process, that one-time, large increases in appropriations requests for asset acquisitions (lumpiness) may make capital spending relatively less attractive, and that combining spending for capital and operating expenses in one account may crowd out fixed-asset purchases. The Bulletin emphasized that agency planning and budgeting, as well as OMB’s review process, must be improved. As a first step toward making such improvements, the Bulletin required agencies to prepare and justify 5-year spending plans for the acquisition of fixed assets and to conduct a review of funding mechanisms for fixed-asset purchases. The Bulletin stated that the 5-year plans would be used to develop the fiscal year 1996 President’s budget and to discuss fixed-asset acquisitions in the budget. 
Agency review of funding mechanisms was intended to assess the adequacy of current funding mechanisms for fixed assets and to consider whether the full cost of fixed-asset acquisitions was being recognized in budget requests. Agencies were asked to consider expanding the use of multiyear appropriations, asset acquisition accounts (either revolving fund or appropriation accounts), and other mechanisms that might alleviate funding difficulties. OMB received data from most agencies expected to respond to the Bulletin, but the completeness of the responses varied. OMB officials expected 14 executive branch agencies would respond to the Bulletin on the basis of previously reported spending on fixed assets. Of these 14, 4 did not respond. Conversely, OMB received responses from three agencies not expected to respond. All of our case studies responded to the Bulletin, but the content of their submissions varied. The Corps’ and USGS’ responses were limited because neither agency had many fixed-asset purchases that met the Bulletin’s reporting threshold. The Coast Guard and PBS used budget justifications and other previously prepared documents to support their 5-year plans and fulfill the Bulletin’s request for a description of the planning process. Of the 13 agencies responding to the Bulletin, only the Department of Veterans Affairs (VA) and the Coast Guard extensively discussed their evaluation of particular funding mechanisms for fixed asset purchases. VA’s response stated that “significant savings to the government could be realized if the type of acquisition was not determined prior to preparation of the budget.” Noting that economic conditions can change in the minimum of 3 years between budget preparation and appropriation, VA explained that the acquisition method initially selected may not be economically viable or ideal at the time of purchase. 
To address this situation, VA managers discussed creating a single real property acquisition account where space need and budget authority need are identified in the budget prospectus and the particular acquisition strategy is determined upon execution of the purchase. The Coast Guard discussed its ability to mitigate crowding out of fixed assets and its concern over the length of its fund availability. Funding both capital projects and the personnel needed to implement those projects as separate appropriation categories within a single account protects fixed-asset categories from competing with each other or with non-capital expenditures. By forecasting and ranking long-term capital needs, the Coast Guard’s capital investment plan allows the agency to control the frequency with which large spikes in appropriations are needed. Funding spikes are also managed by dividing acquisitions into stand-alone stages or components that can be budgeted for separately and over a period of years. However, the Coast Guard stated that the 1- and 3-year availability of capital personnel and shore funding, respectively, was inadequate to accommodate mission changes. Officials of case study organizations indicated that they made no significant changes in their capital budgeting practices as a result of the Bulletin. These officials also did not perceive any differences in the way OMB viewed their budget requests as a result of the Bulletin responses. However, PBS officials felt the Bulletin was a constructive step in acknowledging their concerns over scoring inconsistencies and encouraging their efforts to focus on multiyear financial planning and the type of space being acquired. The Bulletin also prompted PBS to begin to focus on the outlay impact of their capital acquisitions. Officials from our case studies generally felt the Bulletin response was easy to prepare because some fixed-asset data were being reported to OMB or the Congress in other formats. 
Officials from the Corps of Engineers, the Coast Guard, and USGS stated that the 5-year spending plans contained data that OMB or the Congress had previously seen in other reports. Therefore, these officials easily prepared Bulletin responses but thought the requirements were already being met through other submissions to OMB or Congress. For example, USGS had already provided detailed justification materials on its two purchases that met the Bulletin’s reporting threshold under other OMB mandates. An OMB official who helped develop the Bulletin acknowledged that the comprehensiveness of Bulletin responses varied but felt that the responses were useful in identifying issues for further consideration. This official speculated that the content and completeness of agency submissions may have been affected by the short time frame agencies had to respond and by the fact that agencies were being asked to supply fixed-asset data for the first time. Concerned with balancing its need for information and the agencies’ burden in supplying the information, OMB accommodated nonresponses through subsequent data requests by its program examiners. These requests and the formal Bulletin responses supported a narrative summary and 3-year table of “Fixed Asset Acquisitions” in the President’s fiscal year 1996 budget. The responses also supported the first-ever OMB Director’s review of fixed assets. Director’s reviews, at which the Director of OMB discusses and decides upon recommendations made by OMB examiners, are held on a limited number of topics each year. These discussions are significant because they can shape the content and presentation of the President’s budget. The Director’s review of fixed assets identified problems in planning and budgeting for fixed assets as well as mitigating strategies. The review found that some agencies lacked an integrated planning and budgeting process for fixed assets. 
For example, some agencies did not reflect operational changes that would occur from information technology acquisition in their long-range plans and budgets. Some agencies planned and budgeted for the acquisition of assets but did not fully plan and budget for related maintenance. The review also found that agencies were using a variety of account structures and strategies to justify fixed-asset acquisitions. Multiyear funding was widely used, especially for construction-related projects. Revolving funds were also widely used, although OMB did not receive any new requests for such funds. Some agencies tried to overcome difficulty in justifying large spending increases for capital by segregating all capital funding into one account to smooth annual changes in outlays and prevent the crowding-out of capital. Other agencies found that such accounts were not needed; spending increases for capital had been obtained when justified. However, the primary focus of the OMB Director’s review was up-front funding. Bulletin responses indicated that some capital spending was not fully funded. Specifically, capital projects of the Corps of Engineers, NASA, DOE, and the Bureau of Reclamation were incrementally funded. Some congressional staff acknowledged that such projects have traditionally been incrementally funded and indicated satisfaction with this practice. Until 1995, OMB explicitly permitted water resource projects to be incrementally funded. However, OMB is concerned that inconsistent scoring of fixed assets may unfairly bias some acquisitions and that incremental funding may understate the cost of acquisitions. In June 1995, OMB replaced Bulletin 94-08 with Bulletin 95-03. The two bulletins were nearly identical except that Bulletin 95-03 broadened the definition of fixed assets and added two reporting requirements. 
The definition of fixed assets was expanded to conform with the Federal Accounting Standards Advisory Board’s (FASAB) recommended definition of general property, plant, and equipment. In addition to assets meeting FASAB’s definition, space exploration facilities and equipment and all DOE facilities were deemed fixed assets for purposes of the Bulletin. As a result, agencies were to consider nearly all construction, major rehabilitation, and purchases of fixed assets owned by the federal government in completing the Bulletin’s reporting requirements. Bulletin 95-03 required agencies to provide information on the progress of acquisitions of $20 million or more and requested agencies to identify separable, stand-alone stages of fixed asset acquisitions. Information on acquisition progress was to be used to assess agencies’ progress in meeting the cost and schedule goals of their acquisitions as required by the FASA. Information on stages of fixed-asset acquisitions was to be used for identifying those separable, stand-alone phases of an acquisition that should be fully funded up-front. Bulletin 95-03 suggested what constituted separable, stand-alone phases for buildings and information technology, but asked agencies to identify such stages for other assets. Only eight agencies formally responded to all aspects of Bulletin 95-03. An OMB official attributed the low response partly to the lack of fiscal year 1996 appropriations for many agencies at the time submissions were due. However, the official noted that, as in 1994, OMB program examiners sought fixed-asset data from agencies when discussing overall budget requests. Therefore, OMB felt it had sufficient data to hold another Director’s review of fixed assets. This second-year review focused primarily on the extent to which agencies were requesting full up-front funding for capital projects and how to encourage such requests. 
Although most agencies were requesting full funding for capital projects, the review identified some large capital projects that were not fully funded and prompted OMB officials to encourage full up-front funding when discussing budget requests with agencies. OMB also determined that the discretionary spending caps on budget authority could accommodate full funding of some capital projects that would otherwise be incrementally funded. Full funding of these projects requires additional budget authority in the budget year but generally does not require additional outlays in the budget year. Because the sum of the President’s discretionary spending proposals was less than the discretionary spending caps, OMB was able to request $1.4 billion in budget authority in the President’s fiscal year 1997 budget to fully fund capital projects at the DOE and NASA. In addition, OMB presented budget schedules showing the cost to fully fund ongoing and new capital projects at the Corps of Engineers and the Bureau of Reclamation. Although full funding was not requested for these agencies’ capital projects, the schedules indicated the cost of fully funding ongoing and new projects for these agencies would be about $23 billion in fiscal year 1997, and OMB stated that efforts would be made to fully fund all new projects in the fiscal year 1998 budget. OMB officials felt that responses to Bulletins 94-08 and 95-03 helped them move from information gathering to the development of guidance regarding the implementation of full funding. To guide agencies in submitting their fiscal year 1998 budgets and to raise the visibility of its fixed-asset effort among agencies, OMB replaced these bulletins with a new Part 3 to OMB Circular A-11. Like the previous bulletins, the new Part 3 requires agencies to submit 5-year spending plans for major fixed-asset acquisitions and encourages agencies to consider the use of flexible funding mechanisms. 
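The budget authority/outlay distinction noted above lends itself to a brief numeric sketch. All dollar figures and the spendout pattern below are hypothetical, chosen only to illustrate the mechanics, not drawn from this report:

```python
# Hypothetical illustration of why full up-front funding raises budget
# authority (BA) in the budget year but generally not outlays: outlays
# follow the pace of the work, while BA reflects the funding approach.
# All figures are invented.

project_outlays = [60.0, 150.0, 90.0]   # spending follows the work schedule

full_ba = [300.0, 0.0, 0.0]             # full funding: all BA in year 1
incremental_ba = [100.0, 100.0, 100.0]  # incremental: BA provided annually

# The government commits the same $300 million either way...
assert sum(full_ba) == sum(incremental_ba) == sum(project_outlays)

# ...and year-1 outlays are identical, so only the caps on BA, not the
# budget-year deficit, must make room for full funding.
print("Year-1 BA: full =", full_ba[0], "incremental =", incremental_ba[0])
print("Year-1 outlays under either approach:", project_outlays[0])
```

In this sketch, headroom under the discretionary BA caps is all that is needed to switch an incrementally funded project to full funding, which parallels OMB's finding that the caps could accommodate full funding of some projects without additional budget-year outlays.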
In addition, it requires agencies to request full up-front funding for stand-alone stages of all ongoing and new fixed-asset acquisitions and outlines broad principles for planning and monitoring such acquisitions. Part 3 also attempts to streamline reporting requirements for three performance-related initiatives—FASA, ITMRA, and GPRA. OMB officials believe that FASA, ITMRA, and GPRA share the objective of its fixed-asset reviews—to improve fixed-asset planning and budgeting. FASA requires that executive agency heads (1) set cost, performance, and schedule goals for major acquisition programs, (2) monitor the programs to ensure they are achieving, on average, 90 percent of the established goals, and (3) take corrective actions, including termination, on programs that do not remain within the permitted tolerances. FASA also requires OMB to report to the Congress on agencies’ progress in meeting these cost, schedule, and performance goals. ITMRA requires agency heads to establish goals for improving the efficiency and effectiveness of agency operations through effective use of information technology and to acquire information technology systems in successive acquisitions of interoperable increments. Under ITMRA, when the President submits the budget to the Congress, the OMB Director is to submit a report to the Congress on the net program performance benefits achieved as a result of agencies’ major information systems projects and on how the benefits of such projects relate to agencies’ goals. Under GPRA, agencies must develop, no later than the end of fiscal year 1997, strategic plans that cover a period of at least 5 years and include the agency’s mission statement; identify the agency’s long-term strategic goals; and describe how the agency intends to achieve those goals through its activities and through its human, capital, information, and other resources. 
GPRA also requires each agency to submit to OMB, beginning for fiscal year 1999, an annual performance plan. In essence, the annual performance plan is to contain the annual performance goals the agency will use to gauge its progress toward accomplishing its strategic goals and identify the performance measures the agency will use to assess its progress. In issuing Part 3, OMB sought to centralize its information requests to fulfill FASA and ITMRA reporting requirements and to ensure that fixed-asset acquisition plans support the plans and goals developed for these initiatives and GPRA. Because GPRA’s planning requirements are not yet in effect and have not been fully implemented, the new Part 3 of Circular A-11 requires agencies to describe how ongoing or proposed capital acquisitions relate to the agency’s mission and goals being defined under GPRA. It outlines broad principles for linking long-range planning and budgeting for fixed assets to the strategic and annual performance plans agencies develop for GPRA. For example, OMB advises agencies to develop long-range fixed-asset plans by ranking long-term goals and considering the most efficient and effective means of achieving those goals within budgetary constraints. Part 3 also urges agencies to monitor whether fixed-asset acquisitions are helping achieve their goals. While capital spending is important to efficient long-term government operations, a goal of the budget process should be to assist the Congress in allocating resources efficiently by ensuring that various spending options can be compared impartially—not necessarily to increase capital spending. The requirement of full up-front funding is an essential tool in helping the Congress make trade-offs among various spending alternatives. However, in an environment of constrained budgetary resources, agencies need tools that can help facilitate these trade-offs and that enable them to accommodate up-front funding. 
Furthermore, to successfully implement GPRA’s requirement for program performance measures, managers will also need to know the full costs of their programs—including capital usage. Some have recommended that the government adopt a full-scale capital budget, but this raises major budget control issues and may not be necessary to address agency-identified impediments to capital spending. Rather, our case studies demonstrate that more modest tools, such as revolving funds, investment components, and budgeting for stand-alone stages, can help accommodate up-front funding without raising the congressional or fiscal control issues of a separate capital budget. Though each of the strategies has limitations, when accompanied by good financial management and appropriate congressional oversight, they can be useful in facilitating effective capital acquisition within the current unified budget context. In addition, one strategy, using a revolving fund, can be effective in helping to make managers aware of the full cost of their programs. The budget process must balance several sometimes conflicting goals to facilitate effective trade-offs among various spending options. First, it is important that the budget process reveal the entire cost of operating particular programs—including the cost of capital assets used by the program. Knowledge of full program costs is especially significant as agencies and the Congress begin to implement GPRA’s requirements for performance measurement and budgeting. For example, if both capital and operating costs are not attributed to programs over time, programs may appear deceptively inexpensive. In addition, the cost of replacing assets is borne entirely by future agency managers and Congresses that may not have been responsible for asset consumption. Second, the budget process ought to enable lawmakers to compare the full, long-term costs of various spending alternatives. 
Thus, long-term commitments, such as purchases or lease-purchases, are scored up-front in the budget. Third, the Congress needs to be assured that agencies are spending funds as directed by law and be able to control total federal spending. Fourth, agencies need the flexibility and incentives to make economic decisions regarding capital acquisition and usage. Full up-front funding is one of the tools that has been important to facilitating fiscal control and comparisons of the long-term costs of spending alternatives. An essential part of prudent capital planning must be an adherence to full up-front funding. When full up-front funding is not practiced, the Congress risks committing the government to capital acquisitions without determining whether the project is affordable over the long-term. Incremental funding also compels future Congresses to fund a project in order to prevent wasting resources previously appropriated. As budgetary constraints continue, incremental funding may lock the Congress into future spending patterns and reduce flexibility to respond to new needs. In the budget process, fully funded projects may be disadvantaged in competition with incrementally funded projects—even when the fully funded projects actually cost less in the long-run. However, full up-front funding can impede agencies’ ability to economically acquire capital in an environment of resource constraints. Full up-front funding of relatively expensive capital acquisitions can consume a large share of an agency’s annual budget, thereby forcing today’s decision-makers to pay all at once for projects with long-lived benefits. While various capital budgeting proposals have been advanced to address this, the proposals themselves have raised significant concern because of their potential diminution of fiscal accountability and control. 
Consequently, agencies need financing tools that can provide the fiscal control of up-front funding and can enable them to make prudent capital decisions within the current unified budget framework. Our case studies provide some examples of tools that can encourage effective capital decisions. Several use revolving funds to help accumulate resources for capital replacement and to help incorporate capital costs into program budgets. This will become increasingly important as GPRA’s implementation requires managers to know the full annual cost of their programs and to evaluate program performance based on that full cost. Because revolving funds charge users for the cost of capital, managers have an incentive to regularly assess their need for and use of assets. By providing managers with a predictable stream of funding, revolving funds also encourage long-range capital planning. Our work indicates that revolving funds are most effective when (1) agencies have a sound record of financial management, (2) costs can be tracked to users, (3) replacement cost is recovered, (4) appropriations are available to fund significant or immediate expansions of the fund’s asset base, (5) proceeds from the disposal of fund assets are retained by the fund if the fund is expected to provide a constant level of service, and (6) the fund is used to finance small-scale, ongoing capital needs. Our case studies also indicate that revolving funds can provide varying degrees of congressional control. IFMS has few restrictions on the type of vehicles it can purchase; in contrast, the Congress approves every large purchase by the Corps’ and PBS’ revolving funds. Oversight by the Congress is important to ensuring that agency acquisitions are well-planned and justified and that the agency’s overall level of capital spending is appropriate given other competing capital and operating needs across the government. 
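The revolving-fund mechanics described above—charging users for capital so that replacement cost is recovered—can be sketched with a simple straight-line charge. The figures and the rate-setting method are hypothetical, not any case-study agency's actual practice:

```python
# Hypothetical sketch of how a revolving fund folds capital costs into
# program budgets: recover an asset's *replacement* cost (not its lower
# historical cost) over its service life, plus operating costs.
# Figures and method are illustrative only.

def annual_user_charge(replacement_cost, service_life_years, annual_operating_cost):
    """Straight-line recovery of replacement cost plus operations."""
    capital_charge = replacement_cost / service_life_years
    return capital_charge + annual_operating_cost

# A vehicle bought for $20,000 whose replacement will cost $25,000 after
# a 5-year life, with $3,000 per year in operating costs:
charge = annual_user_charge(25_000, 5, 3_000)
print(charge)  # 8000.0 per year billed to the using program

# Over the asset's life, the fund accumulates enough to replace it:
assert annual_user_charge(25_000, 5, 0) * 5 == 25_000
```

Because the using program sees the full annual charge in its operating budget, managers have a continuing incentive to weigh whether the asset is still worth its cost—the behavior the report attributes to revolving funds.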
An investment component within a working capital fund generates many of the same benefits as revolving funds. In addition, an investment component may encourage agency managers to fund their voluntary contributions by making tradeoffs between capital and operational spending. Although the investment component is a recent development and used by only one of our case studies, it seems especially helpful for agencies that would otherwise fund capital with annually expiring funds. USGS’ investment component operates with few restrictions apart from prohibitions against building construction and using funds within 2 years of their placement in the investment component. However, expanding the use of an investment component to other agencies may require other limitations. For example, if several agencies obtain investment components and each decides to make large purchases in the same year, total outlays could rise sharply and cause a spike in the deficit. Therefore, OMB would need to manage all investment components to ensure total investment component outlays do not cause such spikes. The Congress must also be aware that an investment component may encourage agencies to build unobligated balances and that agencies would need to be held accountable to their investment plans. In addition to using revolving funds or an investment component, some case studies budget for stand-alone stages of capital acquisitions and use reprogramming authority. Budgeting for stand-alone stages makes capital acquisition affordable by limiting the budget authority needed at one time. It may also increase opportunities for oversight and permit adjustment of capital funding levels when other needs emerge. This tool can be used when parts of an acquisition can be useful without the whole being completed. 
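Budgeting for stand-alone stages, as described above, amounts to fully funding each separable, independently useful increment rather than the entire acquisition at once. The example below uses an information system acquired in interoperable increments, in the spirit of ITMRA; the stage names and costs are invented:

```python
# Hypothetical sketch of budgeting for stand-alone stages. Each stage is
# fully funded in the year it begins, so no stage is left partially
# funded, yet the peak one-year budget authority (BA) request is smaller
# than funding the whole acquisition at once. Stages and costs invented.

stages = [("increment 1", 40.0), ("increment 2", 35.0), ("increment 3", 25.0)]

whole_project_ba = [sum(cost for _, cost in stages), 0.0, 0.0]
staged_ba = [cost for _, cost in stages]  # one fully funded stage per year

assert sum(whole_project_ba) == sum(staged_ba)  # same total commitment
assert max(staged_ba) < max(whole_project_ba)   # smaller annual BA spike
print(whole_project_ba)  # [100.0, 0.0, 0.0]
print(staged_ba)         # [40.0, 35.0, 25.0]
```

Unlike incremental funding, each year's request here covers a complete, usable increment, so a future Congress that declines to fund the next stage is not left with a stranded, partially funded asset.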
If used as intended, reprogramming authority also helps agencies respond when changes in funding or mission leave inadequate funding to complete a capital acquisition or create new capital needs. Congressional control is maintained by limiting the amount of such authority. Multiyear and no-year funding help agencies accommodate capital’s longer acquisition cycle. All of our case studies had the opportunity to fund capital through multiyear appropriations or a revolving fund. However, agencies and the Congress must work together to find a period of fund availability that is long enough to complete the agency’s projects and short enough to discourage delays. The Congress can maintain control over no- and multiyear funding through individual agency reporting and FASA requirements. The strategies used by our case studies may not be all-inclusive of those available to federal agencies but are indicative of the kinds of tools agencies find useful. Some of these mechanisms, such as revolving funds and investment components, share to varying degrees common characteristics that help agencies make effective capital acquisitions. For example, they enable agencies to accumulate resources without fiscal year limitations in order to finance capital needs. They promote full costing of programs and activities by including costs related to capital usage in operating budgets. They provide a degree of predictability to funding levels that aids in long-range planning. In addition to considering the provision of tools with these characteristics, the Congress and OMB should continue to encourage agencies to improve capital planning. Three recent legislative initiatives—GPRA, FASA, and ITMRA—seek to improve agency planning for programs and capital acquisitions. OMB’s bulletins and guidance on fixed-asset planning and budgeting have been valuable contributions toward promoting agency capital planning. 
Also, given the governmentwide trend in downsizing, agencies may need to consider alternatives to ownership of capital assets. For example, agencies may purchase the use of assets through service contracts with private-sector organizations or other agencies. In other instances, agencies may need to explore creative ways of leveraging resources with the private sector, such as limited partnerships and loan guarantees, in order to meet their specific asset requirements. While agencies are concerned that the budget process facilitate capital acquisitions, it should be understood that agencies must ensure that capital projects are properly selected and well-managed. Flexible financing mechanisms and up-front funding can help to improve the chances that agencies can fully fund capital projects and will select financing methods that are most economical for the government. However, to ensure that funds are well used, it is imperative that agencies have a sound process for selecting which capital projects to fund and that they manage those projects well. We have shown that many information technology projects undertaken by agencies have been poorly managed and have wasted federal resources. Agencies could benefit from viewing capital projects—especially information technology—as investments that require explicit decision criteria and performance measures that assess risks, costs, and benefits. Long-range risks, costs, and benefits of various capital spending alternatives should be presented in budget justifications to the Congress. None of the budget tools discussed can be a substitute for good cost-benefit analysis and well-managed project implementation. 
GAO recommends that the Director of the Office of Management and Budget continue OMB’s top-level focus on fixed-asset acquisitions, including working with agencies and the Congress to promote flexible budgetary mechanisms that help agencies accommodate the consistent application of up-front funding requirements while maintaining opportunities for appropriate congressional oversight and control. As OMB continues to integrate GPRA requirements into the budget process, GAO recommends that the Director of the Office of Management and Budget ensure that agencies’ capital plans flow from and are based upon their strategic and annual performance plans. In addition, OMB should continue its efforts to ensure that cost, schedule, and performance goals are monitored as required by FASA. Although requiring that budget authority for the full cost of acquisitions be provided before an acquisition is made allows the Congress to control capital spending at the time a commitment is made, it also presents challenges. Because the entire cost of these relatively expensive acquisitions must be absorbed in the annual budget of an agency or program, fixed assets may seem prohibitively expensive despite their long-term benefits. This report describes some strategies that a number of agencies have used to manage this dilemma. The Congress should consider enabling agencies to use more flexible budgeting mechanisms that accommodate up-front funding over the longer term while providing appropriate oversight and control. For agencies having proven financial management and capital planning capabilities and relatively small and ongoing capital needs, these techniques could include revolving funds and investment components. 
Such techniques enable agencies to accumulate resources over a period of years in order to finance certain capital needs, promote full costing of programs and activities by including costs related to capital usage in program budgets, and provide a degree of funding predictability to aid in long-range planning. As GPRA moves toward full implementation, these and other tools may take on increasing importance in helping managers and the Congress to identify program costs and to more efficiently manage capital assets. Officials from our case studies and OMB agreed with this report’s conclusions and recommendations. They also provided technical corrections which have been incorporated in this report where appropriate. In commenting on a draft of this report, OMB and GSA officials raised issues which required clarification and elaboration in some sections of the report. OMB officials agreed with the report’s support for up-front funding of capital assets but expressed concern that the use of intragovernmental revolving funds to fund capital acquisitions in some circumstances would undermine the up-front funding principle and reduce budgetary control. OMB proposed that a revolving fund could be used to fund relatively large, sporadic, or heterogeneous purchases if the revolving fund borrowed from Treasury and charged users to recover the principal and interest payments. This would facilitate congressional and executive review of such purchases while allocating capital costs to users. However, unless a relatively constant amount of capital spending is undertaken by the fund each year, such a revolving fund would cause a spike in budget authority each time an asset is purchased. Therefore, to clarify that revolving funds are not always appropriate for making capital acquisitions, references were added throughout the report to indicate their appropriateness for relatively small and ongoing capital needs. 
GSA officials expressed a desire for some discussion of proposed changes in scoring operating leases. Reference to previous GAO testimony on this matter was added to chapter 3. GSA officials also expressed their belief that congressional control could be maintained if the FBF retained proceeds from the disposal of PBS properties. The officials suggested that, because all funds deposited in the FBF must now be appropriated before use, the Congress would have an opportunity to determine how disposal proceeds should be used. This report provides observations on circumstances which affect whether agencies should retain proceeds, such as the need to provide a constant level of services. It was not intended to address whether such circumstances exist in any specific agency. Each agency’s situation would need to be assessed individually to select the appropriate financing mechanism and to determine how to handle disposal proceeds. Therefore, the report was not altered to address this comment.
Pursuant to a congressional request, GAO reviewed how the Army Corps of Engineers, the Coast Guard, the General Services Administration's Interagency Fleet Management System and Public Building Service, and the U.S. Geological Survey plan and budget for fixed assets, focusing on: (1) these agencies' perception of how the budget process affects their capital acquisitions; (2) whether there are funding mechanisms that might be helpful in planning and budgeting for fixed assets; and (3) the responses to the Office of Management and Budget's (OMB) Bulletin 94-08 on planning and budgeting for the acquisition of fixed assets. GAO found that: (1) the up-front funding requirement for the full cost of acquisitions allows Congress to control capital spending at the time a commitment is made and to better understand the future economic impact of its decisions; (2) officials at most of the agencies reviewed see up-front funding as problematic, since it requires the full cost of an asset to be absorbed in an agency's or program's annual budget, despite the fact that benefits may accrue over many years; (3) when combined with discretionary spending caps on agency and program budgets, the up-front funding requirement can make capital acquisitions seem prohibitively expensive; (4) a full-scale capital budget would raise major budget control issues and may not be necessary to address agency-identified impediments to capital spending; (5) several strategies can reduce the impact of the full funding requirement on agency budgets and help agencies accommodate the consistent application of up-front funding within the existing budget structure; (6) these strategies include budgeting for stand-alone stages of capital acquisitions, and using a revolving fund or an investment component in a working capital fund; (7) Congress has authorized agencies to accumulate budget authority for capital purchases over time; (8) some agencies have sought accounts dedicated to capital acquisitions, while 
others have sought additional authority to retain proceeds from capital asset sales; (9) some of these same problems and strategies surfaced as a result of an OMB effort to improve agencies' planning and budgeting for fixed assets; (10) the OMB review identified the full extent to which capital projects were not fully funded up front and led to OMB requesting $1.4 billion in fiscal year (FY) 1997 to fully fund some of these capital projects; and (11) new budget preparation instructions for FY 1998 require agencies to request full up-front funding for stand-alone stages of all ongoing and new fixed-asset acquisitions.
RDA directs the heads of all executive departments and agencies of the government, when considering areas in which to locate, to establish and maintain departmental policies and procedures giving first priority to the location of new offices and other facilities in rural areas. Any move by an agency to new office space in another location would be considered a new office or facility covered by RDA. Two primary executive orders on federal facility location decisions are Executive Order 12072, Federal Space Management, dated August 16, 1978; and Executive Order 13006, Locating Federal Facilities on Historic Properties, dated May 21, 1996. Executive Order 12072 specifies that when the agency mission and program requirements call for federal facilities to be located in urban areas, agencies must give first consideration to locating in a central business area and adjacent areas of similar character. Executive Order 13006 requires the federal government to utilize and maintain, wherever operationally appropriate and economically prudent, historic properties and districts, especially those located in the central business area. In 1990, we reviewed whether federal agencies give rural areas first priority in location decisions as required by RDA and whether any changes in federal location policies were warranted. We reported that RDA had not been an important factor in federal facility location decisions. In fiscal year 1989, about 12 percent of federal civilian workers were located in nonmetropolitan statistical areas. Agency officials cited mission requirements, the need to be in areas where the populations they serve are located, political considerations, and budget pressures as reasons why urban areas received more facilities than rural areas. Those agencies that did locate in rural areas said it was more because they served rural populations than because they were following the requirements of RDA. 
We also reported that a growing number of private sector corporations were moving to suburban and rural settings to take advantage of incentives offered by localities to attract jobs and the ability to separate functions resulting from changes in telecommunications technology. We concluded that there were multiple laws and regulations guiding federal agencies in selecting facility locations, but they did not always provide for consideration of the best financial interest of the government as a factor in the decision-making process. We recommended that GSA develop a more consistent and cost-conscious governmentwide location policy that would require agencies, in meeting their needs, to maximize competition and select sites that offer the best overall value considering such factors as real estate and labor costs. In 2001, we performed follow-up work on our 1990 report, including identifying what functions lend themselves to being located in rural areas. We reported that since our 1990 study, federal agencies had continued to locate for the most part in higher-cost urban areas. The percentage of federal employees located in nonmetropolitan statistical areas in 2000 remained virtually unchanged from 1989, at about 12 percent. Eight of the 13 cabinet agencies we surveyed had no formal RDA policy, and there was little evidence that agencies considered RDA’s requirements when locating new federal facilities. Further, GSA had not developed a cost-conscious, governmentwide location policy as we recommended in 1990, and the definition of rural used in RDA was unclear. We reported in 2001 that agencies chose urban areas for most (72 percent) of the 115 federal sites acquired from fiscal year 1998 through fiscal year 2000. Agencies said they selected urban areas primarily because of the need to be near agency clients and related government and private sector facilities to accomplish their missions. 
The agencies that selected rural areas said they did so because of lower real estate costs. Agencies that relocated operations tended to relocate within the same areas where they were originally located, which were mainly urban areas; newly established locations were almost equally divided between urban and rural areas. Private sector companies surveyed said they select urban areas over rural areas largely because of the need to be near a skilled labor force. Agencies said the benefits of locating in urban areas were efficiency in agency performance as a result of the ability to share existing facilities, close proximity to other agency facilities and employees, and accessibility to public transportation. Agencies that chose rural sites said that benefits included close proximity to agency support facilities, improved building and data security, and better access to major transportation arteries, such as interstate highways. Barriers reported for urban sites included the lack of building security and expansion space. For rural areas, barriers included the lack of public transportation, location far from other agency facilities, and insufficient infrastructure for high-speed telecommunications. The functions that were located predominantly at urban sites during 1998 through 2000 were loans/grants/benefits administration processing, inspection and auditing, and health and medical services. The functions that were located predominantly in rural areas in that period were research and development, supply and storage, automated data processing, and finance and accounting. Some functions, such as law enforcement, were placed in both urban and rural areas, although this particular function was located more often at urban sites. For our 2001 study, we contracted with a private sector consultant, John D. Dorchester, Jr., of The Dorchester Group, L.L.C., to assist us in a number of tasks. One task was to identify functions the private sector might locate in rural areas. 
The consultant identified the following functions:

Accounting
Account representative
Appraisal/market research
Clerical/secretarial
Data processing
Distribution/warehousing
Education/training
Enforcement and quality control
Field service operations
Human resources and social services
Information technologies services
Legal support
Logistical support
Manufacturing and assembly offices
Operations centers
Printing and publishing
Records archiving
Repairs and servicing
Scientific studies and research and development
Technical functions and support
Telemarketing, order processing, and communications

We also asked our consultant to identify the benefits and challenges associated with rural areas for selected functions. (See table 1.) Our July 2001 report suggested that Congress consider enacting legislation to (1) require agencies to consider real estate, labor, and other operational costs and local incentives when making a location decision; and (2) clarify the meaning of “rural area” in RDA. We also recommended that GSA revise its guidance to agencies to require agencies making location decisions to consider real estate, labor, and other costs and local incentives. In addition, we recommended that GSA require agencies subject to its authority to provide a written statement that they had given first priority to locating in a rural area and to justify their decision if they did not select a rural area. We also recommended that GSA define “rural area” until Congress amended RDA to define the term. Subsequent to our report, GSA took action on our recommendations; these actions are described in greater detail below. 
The Fiscal Year 2002 Treasury and General Government Appropriations Act, Public Law 107-67, required the inspectors general (IG) of departments and agencies to submit to the appropriations committees a report detailing what policies and procedures are in place requiring them to give first priority to the location of new offices and other facilities in rural areas, as directed by RDA. These reports were due in May 2002. A similar requirement was included in the Consolidated Appropriations Resolution for Fiscal Year 2003, Public Law 108-7. However, because the IGs had until August 20, 2003, to report on this, we did not have the opportunity to review those reports required by Public Law 108-7 for this testimony. GSA’s May 2, 2002, response to the Public Law 107-67 requirement described the policies that GSA had in place to give first priority to the location of new offices and other facilities in rural areas, as well as what actions GSA had taken in response to our July 2001 recommendations. GSA took the following actions:

The Federal Management Regulation, section 102-83.30, was revised, effective December 13, 2002, to require federal agencies to consider real estate, labor, and other operational costs and applicable incentives, in addition to mission and program requirements, when locating space.

The Public Buildings Service Customer Guide to Real Property was revised to require agencies, when requesting space from GSA, to provide GSA with a written statement affirming that they have given first priority to locating in a rural area as required by RDA.

The Federal Management Regulation, section 102-83.55, effective December 13, 2002, was revised to define “rural area” as a city, town, or unincorporated area that has a population of 50,000 inhabitants or fewer, other than an urban area immediately adjacent to a city, town, or unincorporated area that has a population in excess of 50,000 inhabitants. 
GSA published a recommendation in the Federal Register on January 21, 2003, that federal agencies with their own statutory authority to acquire real property use the above definition of rural area and demonstrate compliance with RDA by including a written statement in their files affirming that they have given first priority to the location of new offices and other federal facilities in rural areas. These actions responded to all of our July 2001 recommendations with the exception of one. We had recommended that GSA require agencies, when selecting a new facility location, to provide a written statement that they had given first priority to locating in a rural area. If a rural area was not selected, agencies were to provide a justification for the decision. GSA’s new guidance does not require agencies not selecting a rural area to justify their decision. We also reviewed the IG reports detailing the policies and procedures in place regarding giving first priority to rural areas as required by Public Law 107-67 for the Departments of Energy, the Interior, Justice, Transportation, and Veterans Affairs. According to GSA data, these agencies, along with the Department of Defense and the United States Postal Service, have the largest amount of owned and leased building square footage in the federal government. We excluded sites acquired by the Defense Department because it has so much vacant space available at its bases nationally that it has no choice but to give priority consideration to its existing vacant space when locating new or existing operations. We excluded Postal Service sites because the Postal Service advised us it had little or no discretion in deciding where to locate most of its facilities in that they needed to be in specific locations to serve customers or near airports. In addition, the Postal Service is exempt from federal laws relating to contracts and property and it has authority to acquire space independently of GSA. 
The IG reports for the five departments said that only two departments had written policies regarding RDA, and only one of these two had issued procedures. However, the departments said that in spite of not having written policies or procedures, they had located many of their facilities in rural areas. The Energy IG reported that Energy had no specific policies or procedures, but it reported that a preponderance of the department’s activities are located in remote parts of the United States. The Interior IG reported that the Department of the Interior and the U.S. Geological Survey, 1 of 35 bureaus and offices in the Department of the Interior, had policies regarding RDA. However, neither the department nor any of the bureaus and offices had procedures to ensure compliance with the policies. The IG reported that of the 270 locations established in the last 5 years, 197 (73 percent) were located in rural areas. The IG said that the decision to place facilities in rural areas was influenced by Interior’s mission rather than by the requirements of RDA. The Justice IG said Justice had no specific policy or procedures on RDA, but department bureaus, offices, boards, and divisions were instructed to implement all applicable federal regulations. The Justice IG cited the GSA regulation requiring agencies to give first priority to the location of new offices and other facilities in rural areas. The IG said the department relies upon GSA for most of its space needs and that GSA is responsible for compliance with RDA. Further, the IG said the locations of the department’s facilities are ultimately determined by mission and operational requirements, which predominantly require locations in major metropolitan areas. For example, U.S. Attorneys Offices and the U.S. Marshals Service need to be located near federal courthouses to accomplish their missions. The Bureau of Prisons is located in rural areas to decrease land costs and increase security. 
The Immigration and Naturalization Service is stationed in both urban and rural areas along the borders of the United States. The Federal Bureau of Investigation and the Drug Enforcement Administration are law enforcement agencies, and their missions and operational requirements determine the location of facilities. The IG also pointed out that the Federal Bureau of Investigation’s data center is located in a rural part of West Virginia. Of the agencies we reviewed, the Department of Transportation had the most complete policy on RDA, in that Transportation has procedures that require a discussion of the considerations given to rural areas and an explanation if a rural location is not selected. However, the Transportation IG said the department does not provide any guidance on decision criteria or factors to be considered, such as cost-benefit analysis, access to public transportation, or effects of relocation on the workforce. Of 33 site location decisions made from October 1997 through February 2002, the Transportation IG found that 24 had no documentation in the files to indicate compliance with RDA. According to the Veterans Affairs IG, the department had no written policy or procedures regarding RDA. The IG said priority is given to locating new Veterans Health Administration medical care facilities in locations convenient to veteran patients and to collocating Veterans Benefits Administration regional offices on Veterans Affairs medical center grounds. Telework could be used to allow federal workers who live in rural areas to work in or near their homes, at least on a part-time basis. For over a decade, telework, also called telecommuting or flexiplace, has gained popularity because it offers the potential to benefit employers, including the federal government, by reducing traffic congestion and pollution, improving the recruitment and retention of employees, increasing productivity, and reducing the need for office space. 
Employees can benefit from reduced commuting time; lower costs for transportation, parking, food, and clothing; and a better balance of work and family demands, which could improve morale and quality of life. Other benefits might include removing barriers for those with disabilities who want to be part of the workforce and helping agencies maintain continuity of operations during emergencies. Congress has enacted legislation that has promoted the use of telework in several ways, including authorizing GSA telework centers, requiring each agency to consider using alternate workplace arrangements when considering whether to acquire space for use by employees, requiring each agency to establish a policy under which eligible employees may participate in telecommuting to the maximum extent possible, and encouraging the deployment of high-speed Internet access in rural areas. Congress has provided both GSA and OPM with lead roles and shared responsibilities for advancing telework in the federal government. Under the telework centers program, GSA supports 15 centers located in the Washington, D.C., metropolitan area. These centers make alternative office environments available to federal employees to perform their work at a site closer to their homes. According to a recent OPM report, federal agencies reported in November 2002 that about 90,000 employees, or about 5 percent of the workforce, were teleworking, compared with about 74,500, or 4.2 percent, reported in 2001. OPM reported that about 625,300 employees, or 35 percent of the federal workforce, were eligible to telework in 2002, and 68.5 percent of the total eligible federal workforce had been offered the opportunity to telework. In 2002, 14.4 percent of eligible employees teleworked. 
OPM reported that the rise in the number of teleworkers was due to a number of factors, including intensified efforts by agencies to encourage telework and a decline in management resistance to telework after training and education efforts. OPM did not report on the number of federal workers residing in rural areas who were able to telework. We did not verify the accuracy of the OPM data. OPM reported a change in the ranking of major barriers to telework from an April 2001 survey of agencies to the November 2002 survey. As shown in table 2, security became the main barrier in 2002, replacing management resistance, which had been the main barrier in 2001. In July 2003 we reported on the federal government’s progress in implementing telework programs. We found that although OPM and GSA offer services and resources to encourage telework in the government, they have not fully coordinated their efforts and have had difficulty in resolving their conflicting views on telework-related matters. As a result, agencies have not always received consistent, inclusive, unambiguous support and guidance related to telework. We recommended that OPM and GSA improve the coordination of their efforts to provide federal agencies with enhanced support and guidance related to telework and to assist agencies in implementing 25 key practices we identified. After we discussed the issues created by the lack of coordination between GSA and OPM, a GSA official indicated that GSA and OPM would commit to improved coordination. The 25 key practices that federal agencies should implement in developing telework programs, which we identified by reviewing telework-related literature and guidelines, are listed in table 3. We found that the four agencies we reviewed for that report, the Departments of Education and Veterans Affairs, GSA, and OPM, had implemented 7 of the 25 practices and had generally implemented the 5 practices relating to technology. 
Nevertheless, technological issues, such as lack of access to high-speed Internet connections, could have a detrimental effect on the ability of some federal workers in rural areas to take advantage of telework. CRS reported this year on the ability of users to take advantage of high-speed, or broadband, Internet access. CRS reported that although many, but not all, offices and businesses now have broadband Internet access, a remaining challenge is providing broadband over “the last mile” to consumers in their homes. Congress has required the Federal Communications Commission (FCC) to determine whether advanced telecommunications capability is being deployed to all Americans in a reasonable and timely fashion and, if not, to take immediate action to accelerate deployment by removing barriers to infrastructure investment and by promoting competition in the telecommunications market. In August 2000, FCC concluded that advanced telecommunications capability was being deployed in a reasonable and timely fashion overall, although rural, minority, low-income, inner city, tribal, and U.S. territory consumers were particularly vulnerable to not receiving service in a timely fashion. In February 2002, FCC concluded that the deployment of advanced telecommunications capability to all Americans was reasonable and timely and investment in infrastructure for most markets remained strong, even though the pace of investment trends had slowed. According to CRS, about 85 percent of households have access to broadband. CRS also reported that the President’s Council of Advisors on Science and Technology concluded in December 2002 that although government should not intervene in the telecommunications marketplace, it should apply existing policies and promote government broadband applications and telework, among other actions. 
CRS also noted that much broadband legislation introduced in the 107th Congress sought to provide tax credits, grants, and/or loans for broadband deployment, primarily in rural and/or low-income areas. It also noted that Public Law 107-171, the Farm Security and Rural Investment Act of 2002, authorized a loan and loan guarantee program to entities for facilities and equipment providing broadband service in eligible rural communities. The purpose of this legislation is to accelerate broadband deployment in rural areas. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information on this testimony, please contact Bernard L. Ungar on (202) 512-2834 or at ungarb@gao.gov. Key contributions to this testimony were made by John Baldwin, Frederick Lyles, Susan Michal-Smith, and Bill Dowdal. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The location of an organization's facilities has far-reaching and long-lasting impacts on its operational costs and ability to attract and retain workers. The Rural Development Act of 1972 requires federal agencies to give first priority to locating new offices and other facilities in rural areas. Rural areas generally have lower real estate and labor costs, but agency missions often require locations in urban areas. Telework, also called telecommuting or flexiplace, is a tool that allows employees to work at home or at a work location other than a traditional office. Benefits of telework include reducing traffic congestion, improving the recruitment and retention of workers, and reducing the need for office space. Telework could allow federal workers who live in rural areas to work in or near their homes, at least some of the time. This testimony summarizes and updates work GAO has previously done on the progress in and barriers to the federal government's efforts to locate its operations and workers, when possible, in rural areas. Even though federal agencies have been required since 1972 to develop policies and procedures to give priority to locating new offices and other facilities in rural areas, this requirement has not been an important factor in location decisions. In September 1990 we reported that there were multiple laws and regulations to guide federal agencies in selecting facility locations, but they did not always provide for consideration of the best financial interest of the government as a factor in the decision-making process. In July 2001 we reported that many agencies had not issued policies and procedures to give rural areas priority when considering the location of new facilities. Only about 12 percent of federal workers were located in nonmetropolitan statistical areas, a percentage that remained unchanged from 1989 to 2000. 
Agencies said the need to be near clients, primarily in urban areas, dictated the location of most operations in urban areas. In spite of not having policies to give priority to rural areas, agencies sometimes locate their operations in rural areas to serve clients in those areas. Also, some functions, such as research and development, supply and storage, automated data processing, and finance and accounting, can be located in rural areas. Rural areas can offer lower real estate costs, improved security, reduced parking and traffic congestion problems, and better access to major transportation arteries. Potential barriers to locating in rural areas include the lack of public transportation, lack of available labor, location far from some other agency facilities, and sometimes insufficient infrastructure for high-speed telecommunications. In our July 2001 report, we made several recommendations to the General Services Administration and Congress to improve location decisionmaking. Congress and the General Services Administration subsequently took action to stress the requirements of the Rural Development Act. Congress has promoted telework in several ways, including authorizing of telework centers in the Washington, D.C., area, requiring agencies to establish a policy under which employees may participate in telecommuting to the maximum extent possible, and encouraging the development of high-speed Internet access in rural areas. However, only about 5 percent of the federal workforce is currently teleworking. In our July 2003 report, we recommended that the General Services Administration and the Office of Personnel Management improve their coordination and provide agencies with more consistent guidance on telework and assist agencies in implementing key practices we identified. The agencies generally agreed with our recommendations and committed to implement them. In addition, the Congressional Research Service reported in July 2003 that about 85 percent of U.S. 
households have broadband access, although rural, minority, low-income, inner city, tribal, and U.S. territory consumers are particularly vulnerable to not receiving this service. Technological barriers, such as the lack of access to high-speed Internet connections, could have a detrimental effect on the ability of some federal workers in rural areas to take advantage of telework.
In July 2002 President Bush issued the National Strategy for Homeland Security. The strategy set forth overall objectives to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that occur. The strategy further identified a plan to strengthen homeland security through the cooperation and partnering of federal, state, local, and private sector organizations on an array of functions. It also specified a number of federal departments, as well as nonfederal organizations, that have important roles in securing the homeland, with DHS having key responsibilities in implementing established homeland security mission areas. This strategy was updated and reissued in October 2007. In November 2002 the Homeland Security Act of 2002 was enacted into law, creating DHS. The act defined the department’s missions to include preventing terrorist attacks within the United States; reducing U.S. vulnerability to terrorism; and minimizing the damage and assisting in the recovery from attacks that occur within the United States. The act further specified major responsibilities for the department, including the analysis of information and protection of infrastructure; development of countermeasures against chemical, biological, radiological, nuclear, and other emerging terrorist threats; securing U.S. borders and transportation systems; and organizing emergency preparedness and response efforts. DHS began operations in March 2003. Its establishment represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. We have evaluated many of DHS’s management functions and programs since the department’s establishment and have issued over 400 related products. 
In particular, in August 2007 we reported on the progress DHS had made since its inception in implementing its management and mission functions. We also reported on broad themes that have underpinned DHS’s implementation efforts, such as agency transformation, strategic planning, and risk management. Over the past 5 years, we have made approximately 900 recommendations to DHS on ways to improve operations and address key themes, such as developing performance measures, setting milestones for key programs, and implementing internal controls to help ensure program effectiveness. DHS has implemented some of these recommendations, taken actions to address others, and taken other steps to strengthen its mission activities and facilitate management integration. DHS has made progress in implementing its management and mission functions in the areas of acquisition, financial, human capital, information technology, and real property management; border security; immigration enforcement; immigration services; aviation, surface transportation, and maritime security; emergency preparedness and response; critical infrastructure and key resources protection; and science and technology. Overall, DHS made more progress in implementing its mission functions than its management functions, reflecting an initial focus on implementing efforts to secure the homeland. DHS has had to undertake these critical missions while also working to transform itself into a fully functioning cabinet department—a difficult undertaking for any organization and one that can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. As DHS continues to mature as an organization, we have reported that it will be important that it work to strengthen its management areas, since the effectiveness of these functions will ultimately affect its ability to fulfill its mission to protect the homeland. Acquisition Management. 
DHS’s acquisition management efforts include managing the use of contracts to acquire goods and services needed to fulfill or support the agency’s missions, such as information systems, new technologies, aircraft, ships, and professional services. Overall, DHS has made progress in implementing a strategic sourcing program to increase the effectiveness of its buying power and in creating a small business program. However, DHS’s progress toward creating a unified acquisition organization has been hampered by various policy decisions. In September 2007 we reported on continued acquisition oversight issues at DHS, identifying that the department had not fully ensured proper oversight of its contractors providing services closely supporting inherently governmental functions. For example, we found that DHS program officials did not assess the risk that government decisions may be influenced by, rather than independent from, contractor judgments. Federal acquisition policy requires enhanced oversight of contractors providing professional and management support services that can affect government decision making, support or influence policy development, or affect program management. However, most of the DHS program officials and contracting officers we spoke with were unaware of this requirement and, in general, did not believe that their professional and management support service contracts required enhanced oversight. We made several recommendations to DHS to address these issues, including that DHS establish strategic-level guidance for determining the appropriate mix of government and contractor employees to meet mission needs; assess program office staff and expertise necessary to provide sufficient oversight of selected contractor services; and review contracts for selected services as part of the acquisition oversight program. Financial Management. 
DHS’s financial management efforts include consolidating or integrating component agencies’ financial management systems. In general, since its establishment, DHS has been unable to obtain an unqualified or “clean” audit opinion on its financial statements. For fiscal year 2007, the independent auditor issued a disclaimer on DHS’s financial statements and identified eight significant deficiencies in DHS’s internal controls over financial reporting, seven of which were so serious that they qualified as material weaknesses. DHS has taken steps to prepare corrective action plans for its internal control weaknesses by, for example, developing and issuing a departmentwide strategic plan for the corrective action plan process and holding workshops on corrective action plans. Until these weaknesses are resolved, DHS will not be in a position to provide reliable, timely, and useful financial data to support day-to-day decision making. Human Capital Management. DHS’s key human capital management areas include pay, performance management, classification, labor relations, adverse actions, employee appeals, and diversity management. Congress provided DHS with significant flexibility to design a modern human capital management system, and in October 2004 DHS issued its human capital strategic plan. DHS and the Office of Personnel Management jointly released the final regulations on DHS’s new human capital system in February 2005. Although DHS intended to implement the new personnel system in the summer of 2005, court decisions enjoined the department from implementing certain labor management portions of the system. DHS has since taken actions to implement its human capital system. In July 2005 DHS issued its first departmental training plan, and in April 2007, it issued its Fiscal Year 2007 and 2008 Human Capital Operational Plan. 
However, more work remains for DHS to fully implement its human capital system, including developing a market-based and performance-oriented pay system. Information Technology Management. Key information technology management controls for DHS include establishing a corporate process for informed decision making by senior leadership about competing information technology investment options; applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems; establishing a comprehensive, departmentwide information security program to protect information and systems; having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; and centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer. DHS has undertaken efforts to establish and institutionalize the range of information technology management controls and capabilities noted above that our research and past work have shown are fundamental to any organization’s ability to use technology effectively to transform itself and accomplish mission goals. However, the department has significantly more to do before each of its management controls and capabilities is fully in place and is integral to how each system investment is managed. For example, in September 2007 we reported on our assessment of DHS’s information technology human capital plan. We found that DHS’s plan was largely consistent with federal guidance and associated best practices. In particular, the plan fully addressed 15 and partially addressed 12 of 27 practices set forth in the Office of Personnel Management’s human capital framework. However, we reported that DHS’s overall progress in implementing the plan had been limited. We recommended, among other things, that roles and responsibilities for implementing the information technology human capital plan and all supporting plans be clearly defined and understood.
Moreover, DHS has not fully implemented a comprehensive information security program. While it has taken actions to ensure that its certification and accreditation activities are completed, the department has not shown the extent to which it has strengthened incident detection, analysis, and reporting and testing activities. Real Property Management. DHS’s responsibilities for real property management are specified in Executive Order 13327, “Federal Real Property Asset Management,” and include the establishment of a Senior Real Property Officer, development of an asset inventory, and development and implementation of an asset management plan and performance measures. In June 2006, the Office of Management and Budget (OMB) upgraded DHS’s Real Property Asset Management Score from red to yellow after DHS developed an Asset Management Plan, developed a generally complete real property data inventory, submitted this inventory for inclusion in the governmentwide real property inventory database, and established performance measures consistent with Federal Real Property Council standards. DHS also designated a Senior Real Property Officer. Border Security. DHS’s border security mission includes detecting and preventing terrorists and terrorist weapons from entering the United States; facilitating the orderly and efficient flow of legitimate trade and travel; interdicting illegal drugs and other contraband; apprehending individuals who are attempting to enter the United States illegally; inspecting inbound and outbound people, vehicles, and cargo; and enforcing laws of the United States at the border. DHS has made some progress in, for example, refining the screening of foreign visitors to the United States and providing training and personnel necessary to fulfill border security missions. 
In particular, as of December 2006 DHS had a pre-entry screening capability in place in overseas visa issuance offices and an entry identification capability at 115 airports, 14 seaports, and 154 of 170 land ports of entry. Furthermore, in November 2007 we reported on traveler inspections at ports of entry and found that U.S. Customs and Border Protection (CBP) had some success in identifying inadmissible aliens and other violators. However, we also identified weaknesses in CBP’s operations at ports of entry and have reported on challenges DHS faced in implementing its comprehensive border protection system, called SBInet, and in leveraging technology, personnel, and information to secure the border. For example, in our November 2007 report on traveler inspections, we identified weaknesses in CBP’s operations, including not verifying the nationality and admissibility of each traveler, which could increase the potential that terrorists and inadmissible travelers could enter the United States. In July 2007, CBP issued detailed procedures for conducting inspections, including requiring field office managers to assess compliance with these procedures. However, CBP had not established internal controls to ensure that field office managers share their assessments with CBP headquarters to help ensure that the new procedures were consistently implemented across all ports of entry and reduced the risk of failed traveler inspections. We recommended that DHS implement internal controls to help ensure that field office directors communicate to agency management the results of their monitoring and assessment efforts and formalize a performance measure for the traveler inspection program that identifies CBP’s effectiveness in apprehending inadmissible aliens and other violators. Immigration Enforcement. 
DHS’s immigration enforcement mission includes apprehending, detaining, and removing criminal and illegal aliens; disrupting and dismantling organized smuggling of humans and contraband as well as human trafficking; investigating and prosecuting those who engage in benefit and document fraud; blocking and removing employers’ access to undocumented workers; and enforcing compliance with programs to monitor visitors. Over the past several years, DHS has strengthened some aspects of immigration enforcement. For example, since fiscal year 2004 U.S. Immigration and Customs Enforcement (ICE) has reported increases in the number of criminal arrests and indictments for worksite enforcement violations. ICE also has begun to introduce principles of risk management into the allocation of its investigative resources. However, ICE has faced challenges in ensuring the removal of criminal aliens from the United States. The agency has also lacked outcome-based performance goals and measures for some of its programs, making it difficult for the agency and others to fully determine whether its programs are achieving their desired outcomes. Immigration Services. In 2007, we reported on USCIS’s transformation efforts, noting that USCIS’s transformation plans partially or fully addressed most key practices for organizational transformations. For example, USCIS had taken initial steps in addressing problems identified during past efforts to modernize by establishing a Transformation Program Office that reports directly to the USCIS Deputy Director to ensure leadership commitment; dedicating people and resources to the transformation; establishing a mission, vision, and integrated strategic goals; focusing on a key set of priorities and defining core values; and involving employees. However, we found that more attention was needed in the areas of performance management, strategic human capital management, communications, and information technology management.
We recommended that DHS document specific performance measures and targets, increase focus on strategic human capital management, complete a comprehensive communications strategy, and continue developing sufficient information technology management practices. Aviation Security. DHS’s aviation security mission includes strengthening airport security; providing and training a screening workforce; prescreening passengers against terrorist watch lists; and screening passengers, baggage, and cargo. Since the Transportation Security Administration (TSA) was established in 2001, it has focused much of its effort on aviation security and has developed and implemented a variety of programs and procedures to secure commercial aviation. For example, TSA has undertaken efforts to strengthen airport security; hire and train a screening workforce; prescreen passengers against terrorist watch lists; and screen passengers, baggage, and cargo. TSA has implemented these efforts in part to meet numerous mandates for strengthening aviation security placed on the agency following the September 11, 2001, terrorist attacks. However, DHS has faced challenges in developing and implementing a program to match domestic airline passenger information against terrorist watch lists; fielding needed technologies to screen airline passengers for explosives; and fully integrating risk-based decision making into some of its programs. In November 2007, we reported that TSA continued to face challenges in preventing unauthorized items from being taken through airport checkpoints. Our independent testing identified that while in most cases transportation security officers appeared to follow TSA’s procedures and used technology appropriately, weaknesses and other vulnerabilities existed in TSA’s screening procedures. Surface Transportation Security. 
DHS’s surface transportation security mission includes establishing security standards and conducting assessments and inspections of surface transportation modes, including passenger and freight rail, mass transit, highways, commercial vehicles, and pipelines. Although TSA initially focused much of its effort and resources on meeting legislative mandates to strengthen commercial aviation security after September 11, 2001, TSA has more recently placed additional focus on securing surface modes of transportation, including establishing security standards and conducting assessments and inspections of surface transportation modes such as passenger and freight rail. However, more work remains for DHS in developing and issuing security standards for all surface transportation modes and in more fully defining the roles and missions of its inspectors in enforcing security requirements. Maritime Security. DHS’s maritime security responsibilities include port and vessel security, maritime intelligence, and maritime supply chain security. DHS has developed national and regional plans for maritime security and response and a national plan for recovery, and it has ensured the completion of vulnerability assessments and security plans for port facilities and vessels. DHS has also developed programs for collecting information on incoming ships and working with the private sector to improve and validate supply chain security. However, DHS has faced challenges in implementing certain maritime security responsibilities including, for example, a program to control access to port secure areas and to screen incoming cargo for radiation. In October 2007, we testified on DHS’s overall maritime security efforts as they related to the Security and Accountability for Every (SAFE) Port Act of 2006. 
In that testimony we noted that DHS had improved security efforts by establishing committees to share information with local port stakeholders and taking steps to establish interagency operations centers to monitor port activities, conducting operations such as harbor patrols and vessel escorts, writing port-level plans to prevent and respond to terrorist attacks, testing such plans through exercises, and assessing security at foreign ports. We further reported that DHS had strengthened the security of cargo containers through enhancements to its system for identifying high-risk cargo and expanding partnerships with other countries to screen containers before they are shipped to the United States. However, we reported on challenges faced by DHS in its cargo security efforts, such as CBP’s requirement to test and implement a new program to screen 100 percent of all incoming containers overseas—a departure from its existing risk-based programs. Among our recommendations were that DHS develop strategic plans, better plan the use of its human capital, establish performance measures, and otherwise improve program operations. Emergency Preparedness and Response. DHS’s emergency management mission, now primarily consolidated in the Federal Emergency Management Agency (FEMA), includes prevention, mitigation, preparedness for, response to, and immediate recovery from major disasters and emergencies of all types, whether the result of nature or acts of man. The goal is to minimize damage from major disasters and emergencies by working with other federal agencies, state and local governments, nongovernment organizations, and the private sector to plan, equip, train, and practice needed skills and capabilities to build a national, coordinated system of emergency management. 
The Post-Katrina Emergency Management Reform Act of 2006 specifies a number of responsibilities for FEMA and DHS in the area of emergency preparedness and response designed to address many of the problems identified in the various assessments of the preparation for and response to Hurricane Katrina. It addresses such issues as roles and responsibilities, operational planning, capabilities assessments, and exercises to test needed capabilities. DHS has taken some actions intended to improve readiness and response based on our work and the work of congressional committees and the Administration. For example, in January 2008 DHS issued a revised National Response Framework intended to further clarify federal roles and responsibilities and relationships among federal, state, and local governments and responders, among others. However, these revisions have not yet been tested. DHS has also made structural changes in response to the Post-Katrina Emergency Management Reform Act that, among other things, are designed to strengthen FEMA. DHS has also announced a number of other actions to improve readiness and response. However, until states and first responders have an opportunity to train and practice under some of these changes, it is unclear what impact, if any, they will have on strengthening DHS’s emergency preparedness and response capabilities. Critical Infrastructure and Key Resources Protection. DHS’s responsibilities in this area include developing partnerships with stakeholders and information sharing and warning capabilities, and identifying and reducing threats and vulnerabilities. DHS has developed a national plan for critical infrastructure and key resources protection and undertaken efforts to develop partnerships and to coordinate with other federal, state, local, and private sector stakeholders. DHS has also made progress in identifying and assessing critical infrastructure threats and vulnerabilities. For example, in July and October 2007 we reported on critical infrastructure sectors’ sector-specific plans.
We reported that although nine of the sector-specific plans we reviewed generally met National Infrastructure Protection Plan requirements and DHS’s sector-specific plan guidance, eight plans did not address incentives the sectors would use to encourage owners to conduct risk assessments, and some plans were more comprehensive than others when discussing their physical, human, and cyber assets, systems, and functions. We recommended that DHS better (1) define its critical infrastructure information needs and (2) explain how the information will be used to attract more users. We also reported that the extent to which the sectors addressed aspects of cyber security in their sector-specific plans varied and that none of the plans fully addressed all 30 cyber security-related criteria. DHS officials said that the variance in the plans can primarily be attributed to the levels of maturity and cultures of the sectors, with the more mature sectors (those with preexisting relationships and a history of working together) generally having more comprehensive and complete plans than more newly established sectors without similar prior relationships. Regarding cyber security, we recommended a September 2008 deadline for sector-specific agency plans to fully address cyber-related criteria. Although DHS has made progress in these areas, it has faced challenges in sharing information and warnings on attacks, threats, and vulnerabilities and in providing and coordinating incident response and recovery planning efforts. For example, we identified a number of challenges to DHS’s Homeland Security Information Network, including its coordination with state and local information sharing initiatives. Science and Technology. DHS’s science and technology efforts include coordinating the federal government’s civilian efforts to identify and develop countermeasures to chemical, biological, radiological, nuclear, and other emerging terrorist threats.
DHS has taken steps to coordinate and share homeland security technologies with federal, state, local, and private sector entities. However, DHS has faced challenges in assessing threats and vulnerabilities and developing countermeasures to address those threats. With regard to nuclear detection capabilities, in September 2007 we reported on DHS’s testing of next generation radiation detection equipment. In particular, we reported that the Domestic Nuclear Detection Office (DNDO) used biased test methods that enhanced the performance of the next generation equipment and that, in general, the tests did not constitute an objective and rigorous assessment of this equipment. We recommended that DNDO delay any purchase of this equipment until all tests have been completed, evaluated, and validated. Our work has identified cross-cutting issues that have hindered DHS’s progress in its management and mission areas. We have reported that while it is important that DHS continue to work to strengthen each of its core management and mission functions, it is equally important that these key issues be addressed from a comprehensive, departmentwide perspective to help ensure that the department has the structure and processes in place to effectively address the threats and vulnerabilities that face the nation. These issues are: (1) transforming and integrating DHS’s management functions; (2) engaging in effective strategic and transition planning efforts and establishing baseline performance goals and measures; (3) applying and improving a risk management approach for implementing missions and making resource allocation decisions; (4) sharing information with key stakeholders; and (5) coordinating and partnering with federal, state, local, and private sector entities. In addition, accountability and transparency are critical to the department effectively integrating its management functions and implementing its mission responsibilities.
DHS has faced an enormous management challenge in its transformation efforts as it works to integrate 22 component agencies. Each component agency brought differing missions, cultures, systems, and procedures that the new department had to efficiently and effectively integrate into a single, functioning unit. At the same time it has weathered these growing pains, DHS has had to fulfill its various homeland security and other missions. DHS has developed a strategic plan, is working to integrate some management functions, and has continued to form necessary partnerships to achieve mission success. Nevertheless, in 2007 we reported that DHS’s implementation and transformation remained high-risk because DHS had not yet developed a comprehensive management integration strategy and its management systems and functions, especially those related to acquisition, financial, human capital, and information management, were not yet fully integrated and wholly operational. We identified that this array of management and programmatic challenges continued to limit DHS’s ability to carry out its roles under the National Strategy for Homeland Security in an effective, risk-based way. We have recommended, among other things, that agencies on the high-risk list produce a corrective action plan that defines the root causes of identified problems, identifies effective solutions to those problems, and provides for substantially completing corrective measures in the near term. Such a plan should include performance metrics and milestones, as well as mechanisms to monitor progress. OMB has stressed to agencies the need for corrective action plans for individual high-risk areas to include specific goals and milestones. We have said that such a concerted effort is essential and that our experience has shown that perseverance is critical to resolving high-risk issues.
In the spring of 2006, DHS provided us with a draft corrective action plan that did not contain key elements we have identified as necessary for an effective corrective action plan, including specific actions to address identified objectives. As of February 2008, DHS had not yet completed a corrective action plan. According to DHS, the department plans to use its revised strategic plan, which is at OMB for final review, as the basis for its corrective action plan. The significant challenges DHS has experienced in integrating its disparate organizational cultures and multiple management processes and systems make it an appropriate candidate for a Chief Operating Officer/Chief Management Officer (COO/CMO) as a second deputy position or alternatively as a principal undersecretary for management position. Designating the Undersecretary for Management at DHS as the CMO at an Executive Level II is a step in the right direction, but this change does not go far enough. A COO/CMO for DHS with a limited term that does not transition across administrations will not help to ensure the continuity of focus and attention needed to protect the security of our nation. A COO/CMO at the appropriate organizational level at DHS, with a term appointment, would provide the elevated senior leadership and concerted and long-term attention required to marshal its transformation efforts. As part of its transformation efforts, it will be especially important for the department to effectively manage the approaching transition between administrations and sustain its transformation through this transition period. Due to its mission’s criticality and the increased risk of terror attacks during changes in administration as witnessed in the United States and other countries, it is important that DHS take steps to help ensure a smooth transition to new leadership. 
According to the Homeland Security Act of 2002, as amended, DHS is required to develop a transition and succession plan to guide the transition of management functions to a new Administration by December 2008. DHS is working to develop and implement plans and initiatives for managing the transition. Moreover, the Homeland Security Advisory Council issued a report in January 2008 on the pending transition, making recommendations in the broad categories of threat awareness, leadership, congressional oversight/action, policy, operations, succession, and training. DHS is taking action to address some challenges of the approaching transition period, including filling some leadership positions traditionally held by political appointees with career professionals. The department is also undertaking training and cross-training of senior career personnel that would address the council’s concerns for leadership and operational continuity. However, some other Homeland Security Advisory Council recommendations, such as building a consensus among current DHS officers regarding priority policy issues, could prove more difficult for DHS to implement, particularly in light of the need to clarify roles and responsibilities across the department and its ongoing transformation efforts. Strategic planning is one of the critical factors necessary for the success of new organizations. This is particularly true for DHS, given the breadth of its responsibility and the need to clearly identify how stakeholders’ responsibilities and activities align to address homeland security efforts. However, DHS has not always implemented effective strategic planning efforts and has not yet fully developed performance measures or put into place structures to help ensure that the agency is managing for results. DHS has developed performance goals and measures for some of its programs and reports on these goals and measures in its Annual Performance Report.
However, some of DHS’s components have not developed adequate outcome-based performance measures or comprehensive plans to monitor, assess, and independently evaluate the effectiveness of their plans and performance. Since the issuance of our August 2007 report, DHS has begun to develop performance goals and measures for some areas in an effort to strengthen its ability to measure its progress in key management and mission areas. We commend DHS’s efforts to measure its progress in these areas and have agreed to work with the department to provide input to help strengthen established measures. DHS cannot afford to protect everything against all possible threats. As a result, the department must make choices about how to allocate its resources to most effectively manage risk. Risk management has been widely supported by the President and Congress as a management approach for homeland security, and the Secretary of Homeland Security has made it the centerpiece of departmental policy. A risk management approach can help DHS make decisions more systematically and is consistent with the National Strategy for Homeland Security and DHS’s strategic plan, which have both called for the use of risk-based decisions to prioritize DHS’s resource investments regarding homeland security-related programs. DHS and several of its component agencies have taken steps toward integrating risk-based principles into their decision-making processes. On a component agency level, the Coast Guard, for example, has developed security plans for seaports, facilities, and vessels based on risk assessments. TSA has also incorporated risk-based decision making into a number of its programs, such as programs for securing air cargo, but has not yet completed these efforts. In 2007, we also convened a forum of experts that discussed the application of risk management to homeland security and suggested a number of ways to use risk communication practices to better educate and inform the public.
The participants also proposed a number of steps that could be taken in the near future to strengthen risk management practices and to stimulate public discussion and awareness of risk management concepts. We are working with the department to share ideas raised at the forum to assist it as it works to strengthen its risk-based efforts. We will be issuing a summary of this forum in the coming months. In 2005, we designated information sharing for homeland security as high-risk and continued that designation in 2007. In doing so, we reported that the nation had not implemented a set of governmentwide policies and processes for sharing terrorism-related information but had issued a strategy on how it would put in place the overall framework, policies, and architecture for sharing with all critical partners, actions that we and others have recommended. The Intelligence Reform and Terrorism Prevention Act of 2004, as amended, requires that the President create an “information sharing environment” to facilitate the sharing of terrorism-related information, yet this environment remains in the planning stage. An implementation plan for the environment, which was released in November 2006, defines key tasks and milestones for developing the information sharing environment, including identifying barriers and ways to resolve them, as we recommended. We have noted that completing the information sharing environment is a complex task that will take multiple years and long-term administration and congressional support and oversight and will pose cultural, operational, and technical challenges that will require a coordinated response. DHS has taken some steps to implement its information sharing responsibilities and support other information sharing efforts. For example, states and localities are creating their own information fusion centers, some with DHS support.
In October 2007 we reported that some state and local fusion centers had DHS personnel assigned to them; access to DHS’s unclassified information networks or systems, such as the Homeland Security Information Network; and support from DHS grant programs. However, some state and local fusion centers reported challenges to accessing DHS’s information systems and identified issues in understanding and using federal grant funds. To improve efforts to create a national network of fusion centers, we recommended that the federal government determine and articulate its role in, and whether it expects to provide resources to, fusion centers over the long term to help ensure their sustainability. To secure the nation, DHS realizes it must form effective and sustained partnerships among its component agencies and with a range of other entities, including other federal agencies, state and local governments, private and nonprofit sectors, and international partners. The National Strategy for Homeland Security recognizes the importance of partnerships as the foundation for establishing a shared responsibility for homeland security among stakeholders. We have reported on difficulties faced by DHS in its coordination efforts. For example, in September 2005 we reported that TSA did not effectively involve private sector stakeholders in its decision-making process for developing security standards for passenger rail assets. We recommended that DHS develop security standards that reflect industry best practices and can be measured, monitored, and enforced by TSA rail inspectors and, if appropriate, rail asset owners. DHS agreed with these recommendations. DHS has worked to strengthen partnerships and has undertaken a number of coordination efforts with public and private-sector entities. 
These include, for example, partnering with the Department of Transportation to strengthen the security of surface modes of transportation, airlines to improve aviation passenger and cargo screening, the maritime shipping industry to facilitate containerized cargo inspection, and the chemical industry to enhance critical infrastructure protection at such facilities. In addition, FEMA has worked with other federal, state, and local entities to improve planning for disaster response and recovery. Although DHS has taken action to strengthen partnerships and improve coordination, we found that more work remains to support the leveraging of resources and the effective implementation of its homeland security responsibilities. Accountability and transparency are critical to the department effectively integrating its management functions and implementing its mission responsibilities. We have reported that it is important that DHS make its management and operational decisions transparent enough so that Congress can be sure that it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually. We have encountered delays at DHS in obtaining access to needed information, which has impacted our ability to conduct our work in a timely manner. Since we highlighted this issue last year to this subcommittee, our access to information at DHS has improved. For example, TSA has worked with us to improve its process for providing us with access to documentation. DHS also provided us with access to its national level preparedness exercise. However, we continue to experience some delays in obtaining information from DHS, and we continue to believe that DHS needs to make systematic changes to its policies and procedures for how DHS officials are to interact with GAO.
We appreciate the Subcommittee’s assistance in helping us seek improved access to DHS information and support the provision in the Consolidated Appropriations Act, 2008, that restricts a portion of DHS’s funding until DHS reports on revisions to its guidance for working with GAO and the DHS IG. We look forward to collaborating with the department on proposed revisions to its GAO guidance. Next month DHS will be 5 years old, a key milestone for the department. Since its establishment, DHS has had to undertake actions to secure the border and the transportation sector and defend against, prepare for, and respond to threats and disasters while simultaneously working to transform itself into a fully functioning cabinet department. Such a transformation is a difficult undertaking for any organization and can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. Nevertheless, DHS’s 5-year anniversary provides an opportunity for the department to review how it has matured as an organization. As part of our broad range of work reviewing DHS management and mission programs, we will continue to assess in the coming months DHS’s progress in addressing high-risk issues. In particular, we will continue to assess the progress made by the department in its transformation and information sharing efforts. Further, as DHS continues to evolve and transform, we will review its progress and performance and provide information to Congress and the public on its efforts. This concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee Members may have. For further information about this testimony, please contact Norman J. Rabkin, Managing Director, Homeland Security and Justice, at 202-512-8777 or rabkinn@gao.gov. Other key contributors to this statement were Jason Barnosky, Cathleen A. 
Berrick, Kathryn Bolduc, Anthony Cheesebrough, Rebecca Gambler, Kathryn Godfrey, Christopher Keisling, Thomas Lombardi, Octavia Parks, and Sue Ramanathan. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) began operations in March 2003 with missions that include preventing terrorist attacks from occurring within the United States, reducing U.S. vulnerability to terrorism, minimizing damages from attacks that occur, and helping the nation recover from any attacks. GAO has reported that the implementation and transformation of DHS is an enormous management challenge and that the size, complexity, and importance of the effort make the challenge especially daunting and critical to the nation's security. GAO's prior work on mergers and acquisitions found that successful transformations of large organizations, even those faced with less strenuous reorganizations than DHS, can take at least 5 to 7 years to achieve. This testimony is based on GAO's August 2007 report evaluating DHS's progress between March 2003 and July 2007, selected reports issued since July 2007, and our institutional knowledge of homeland security issues. Since its establishment, DHS has made progress in implementing its management and mission functions in the areas of acquisition, financial, human capital, information technology, and real property management; border security; immigration enforcement and services; aviation, surface transportation, and maritime security; emergency preparedness and response; critical infrastructure protection; and science and technology. In general, DHS has made more progress in its mission areas than in its management areas, reflecting an initial focus on protecting the homeland. While DHS has made progress in implementing its functions in each management and mission area, we identified challenges remaining in each of these areas. 
These challenges include providing appropriate oversight for contractors; improving financial management and controls; implementing a performance-based human capital management system; implementing information technology management controls; balancing trade facilitation and border security; improving enforcement of immigration laws; enhancing transportation security; and effectively coordinating the mitigation and response to all hazards. Key issues that have affected DHS's implementation efforts are agency transformation, strategic planning and results management, risk management, information sharing, partnerships and coordination, and accountability and transparency. For example, GAO designated DHS's implementation and transformation as high-risk. While DHS has made progress in transforming its component agencies into a fully functioning department, it has not yet addressed key elements of the transformation process, such as developing a comprehensive transformation strategy. The Homeland Security Act of 2002, as amended, requires DHS to develop a transition and succession plan to guide the transition of management functions to a new Administration; DHS is working to develop and implement its approach for managing the transition. DHS has begun to develop performance goals and measures in some areas in an effort to strengthen its ability to measure its progress in key areas. We commend DHS's efforts and have agreed to work with the department to provide input to help strengthen established measures. DHS also has not yet fully adopted and applied a risk management approach in implementing its mission functions. Although some DHS components have taken steps to do so, this approach has not yet been implemented departmentwide. DHS's 5-year anniversary provides an opportunity for the department to review how it has matured as an organization. 
As part of our broad range of work reviewing DHS's management and mission programs, GAO will continue to assess DHS's progress in addressing high-risk issues. In particular, GAO will continue to assess the progress made by the department in its transformation and information sharing efforts.
Since 1824, the Corps has been responsible for maintaining a safe, reliable, and economically efficient navigation system in the United States. This system currently comprises more than 12,000 miles of inland and intracoastal waterways and about 180 ports handling at least 250,000 tons of cargo per year. The accumulation of sediment in waterways—known as shoaling—reduces their navigable depth and, without dredging, may result in restrictions on vessels passing through the waterways. These restrictions often apply to the vessels’ draft—the distance between the surface of the water and the bottom of the hull—which determines, in part, the minimum depth of water in which a vessel can safely navigate. Draft restrictions may result in delays and added costs as ships may need to off-load some of their cargo to reduce their draft, wait until high tide or until waterways are dredged, or sail into another port. For example, according to a 2011 Corps study, 1 foot of shoaling in the lower Mississippi River could result in $2.8 billion worth of cargo being disrupted annually. To minimize such risks to navigation, the Corps removed an annual average of about 229 million cubic yards of material from U.S. waterways from fiscal year 2003 through fiscal year 2012, at an average annual cost of about $1.1 billion, according to the Corps. Even with these efforts, draft restrictions have regularly been in place on major waterways throughout the United States in the past several years, according to Corps documents and officials. The Corps contracts with industry to perform most dredging, including work done by hopper dredges. According to the Corps, of the approximately $11 billion it spent for dredging from fiscal year 2003 through fiscal year 2012, about $2.37 billion was for hopper dredging. Of that, industry hopper dredges accounted for about $1.8 billion, and Corps hopper dredges accounted for about $570 million. 
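The spending figures above can be cross-checked with simple arithmetic. The sketch below is illustrative only, using the rounded dollar amounts reported in the text:

```python
# Reported Corps dredging spending, fiscal years 2003 through 2012
# (rounded figures from the text; billions of dollars).
total_dredging = 11.0    # all dredging
hopper_total = 2.37      # all hopper dredging
industry_hopper = 1.8    # performed by industry hopper dredges
corps_hopper = 0.57      # performed by Corps hopper dredges

# The industry and Corps components should sum to the hopper total.
assert abs((industry_hopper + corps_hopper) - hopper_total) < 0.005

# Hopper dredging was roughly a fifth of all dredging spending,
# and industry dredges accounted for roughly three-quarters of it.
hopper_share = hopper_total / total_dredging      # about 0.22
industry_share = industry_hopper / hopper_total   # about 0.76
```

This simple decomposition is consistent with the report's observation that industry performs the large majority of hopper dredging work.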
Corps spending on hopper dredging has more than doubled since fiscal year 2003, while the amount of material removed by hopper dredges has increased only slightly over that period, according to Corps data. Specifically, as shown in figure 2, the Corps spent nearly $170 million for Corps and industry hopper dredges to remove around 66 million cubic yards of material in fiscal year 2003. By fiscal year 2012, Corps spending on Corps and industry hopper dredging had increased to about $370 million, while the amount of material removed increased to nearly 72 million cubic yards. This growth in spending reflects costs for hopper dredging that, according to Corps documents, have increased because of rising costs for fuel and steel, among other factors. Hopper dredging today is generally performed in three regions of the United States—the East Coast, Gulf Coast, and West Coast—and each region has at least one Corps hopper dredge that typically operates in it: the McFarland on the East Coast, the Wheeler on the Gulf Coast, and the Essayons and Yaquina on the West Coast. On the East and Gulf Coasts, the majority of the hopper dredging workload is carried out by industry dredges, while on the West Coast, Corps dredges remove more than half of the dredged material. Various factors can influence and complicate hopper dredging in each region. For example, on the East Coast, much of the hopper dredging must be performed during certain months of the year because environmental restrictions related to endangered sea turtles and other species prohibit dredging while those species are present. On the West Coast, the Corps must factor in the time and expense of moving industry dredges through the Panama Canal if the only available industry hopper dredges are on the East or Gulf Coasts. The sizes and capabilities of specific hopper dredges—and, therefore, the projects for which they are suited—vary. 
For instance, shallow ports and harbors cannot be dredged by vessels with deep drafts in many cases. The Corps uses the Yaquina, which is a small dredge with a draft of around 15 feet when its hopper is fully loaded, for dredging small and shallow ports along the California, Oregon, and Washington coasts. In contrast, the Corps uses the Wheeler, which is a large dredge with a draft of nearly 30 feet when its hopper is fully loaded, for deeper navigation channels such as those in the lower Mississippi River. See appendix II for a list of Corps and industry hopper dredges and their characteristics. As noted, several pieces of legislation were enacted that sought to increase the role of industry in hopper dredging by placing restrictions on the use of the Corps’ hopper dredges. More specifically, in 1978, legislation directed the Corps to contract out much of its hopper dredging work to industry and reduce the Corps’ fleet to the minimum necessary to insure the capability of the federal government and industry together to carry out projects for the improvement of rivers and harbors. The Energy and Water Development Appropriations Act for fiscal year 1993, and subsequent appropriations acts in the early 1990s, required the Corps to offer for competitive bidding at least 7.5 million cubic yards of hopper dredging work previously performed by the federal fleet. The Corps addressed this requirement by reducing the use of each of its four dredges from about 230 workdays per year to about 180 workdays per year. The Water Resources Development Act of 1996 then required the Corps to take the Wheeler out of active status and place it into ready reserve. The Corps implemented this requirement beginning in fiscal year 1998 by generally limiting the Wheeler to working 55 days a year plus any urgent or emergency work. 
More recently, the Water Resources Development Act of 2007 required that the Corps place the McFarland in ready reserve and limited the use of the vessel to 70 working days per year in the Delaware River and Bay, plus any urgent and emergency work. See table 1 for the statutory restrictions in place on the use of the Corps’ hopper dredges and how they have changed since fiscal year 2003. The Corps follows a process—known as the raise the flag procedure—for activating its ready reserve dredges to respond to urgent or emergency dredging needs. The Corps defines an urgent need for dredging as a time-sensitive situation that may require prompt action for providing a safe navigation channel, and an emergency as a situation that would result in an unacceptable hazard to life, a significant loss of property, or an immediate, unforeseen, and significant economic hardship if corrective action is not undertaken within a time period less than the normal contract procurement process. The raise the flag procedure includes a series of steps intended to allow industry the opportunity to respond to urgent or emergency dredging needs before the Corps uses its own dredges. The Corps district office with an urgent or emergency dredging need notifies the Corps division office overseeing it of the dredging need, and district and division staff review ongoing hopper dredging work under existing Corps contracts to see if any industry hopper dredges could be made available. If no industry hopper dredges could be made available, the offices notify Corps headquarters. The Corps’ Director of Civil Works may then decide whether to use one of the Corps’ ready reserve hopper dredges or make additional efforts to procure an industry dredge, such as by releasing a dredge from an existing contract. The Corps contracts for most of the hopper dredging work by soliciting competitive bids from industry. 
To determine the reasonableness of contractor bids, the Corps develops a government cost estimate for its hopper dredging solicitations. Government cost estimates are developed using information on the costs of owning and operating hopper dredges—including acquisition, fuel, and shipyard costs—along with information on the project for which the dredging is needed—including the amount and type of material to be removed, and the distance from the dredging site to the placement site. In soliciting bids from contractors, the Corps most commonly uses a sealed-bid process, through which it generally awards the contract to the lowest bidder with a bid that is no more than 25 percent above the government cost estimate. If the Corps does not receive any bids, or if all bids exceed the government cost estimate by more than 25 percent, the Corps may pursue a number of options, including (1) negotiating with bidders to get the bid within an awardable range of the cost estimate; (2) reviewing the cost estimate and revising it based on additional information, as appropriate; or (3) performing the work itself, such as through its raise the flag procedure. The costs to own and operate hopper dredges include costs such as payroll for the crews, fuel, repairs, and depreciation. Hopper dredging requires large capital outlays—a modern hopper dredge comparable in size to the Wheeler, for instance, would cost around $100 million to build, according to Corps and industry estimates—and related costs such as depreciation and replacement of engines or other major equipment can represent a relatively large portion of the dredges’ total costs. The Corps and industry incur much of the costs for their hopper dredges—such as paying a crew and keeping engines and other systems in ready working condition—regardless of how much the dredges are used. The Corps uses two funding sources from its annual civil works appropriation to pay for its hopper dredges. 
First, for the ready reserve vessels McFarland and Wheeler, funds are provided to cover the dredges’ costs while they are idle in ready reserve. Second, the Corps pays for the use of its dredges with project funds based on a daily rate it establishes for its dredges. According to Corps officials, the Corps sets a daily rate specific to each of its hopper dredges at least annually, based on factors such as the costs of owning and operating the dredge, and the amount of work the dredge is expected to perform. As the Corps uses its hopper dredges for projects, the Corps uses funds allocated for those specific projects to pay its dredges, based on the number of days its dredges work and the dredges’ daily rate. In response to our 2003 recommendation to obtain and analyze baseline data needed to determine the appropriate use of its hopper dredge fleet, the Corps established a tracking log as part of its raise the flag procedure to record and review the urgent or emergency work its hopper dredges carry out, but it does not consistently collect certain solicitation information that we recommended. Having a means to track urgent or emergency dredging work helps the Corps ensure it is documenting and evaluating when and under what circumstances it will use its ready reserve dredges. According to Corps officials, the Corps established a tracking log in 2007 to systematically track information on the circumstances when urgent or emergency hopper dredging may be needed, and specifically when Corps’ dredges would be used to meet those needs. Corps district offices that are faced with critical hopper dredging needs submit information on their plans to address the needs to their division and Corps headquarters for review and approval. The Corps’ decision-making process for determining whether to use its ready reserve vessels is also documented via its tracking log. 
For example, in January 2013, a hopper dredge was needed to perform work along the North Carolina coast because certain areas had become severely shoaled and were impeding safe navigation. One industry bid was received to perform the work, but it exceeded the government cost estimate by more than 25 percent. After determining its cost estimate was reasonable, the Corps negotiated with the industry bidder in an attempt to get the bid within an awardable range of the Corps’ cost estimate, but the parties were unable to come to an agreement. As a result, the Corps initiated its raise the flag procedure because of the urgent nature of the situation. Because no other industry contractors were available immediately to respond, the Corps used the McFarland to perform the dredging and documented its decision-making process in its tracking log. We also recommended that the Corps obtain and analyze other data that could be useful in determining the appropriate use of the Corps’ hopper dredges, including data on solicitations that receive no bids or where all the bids received exceeded the Corps’ cost estimate by more than 25 percent. Corps officials we spoke with said that they are aware when a no-bid or high-bid situation occurs, particularly when they use a Corps dredge through their raise the flag procedure because of such a situation. But by tracking and analyzing no-bid and high-bid solicitation data, the Corps may be better positioned to identify gaps in industry’s ability to fulfill certain dredging needs—such as during certain times of the year, in particular geographic areas, or for particular types of projects—and avoid or address any gaps identified. In 2004, the Corps took steps to address our recommendation by modifying data fields in its dredging database, the Corps’ database for maintaining dredging information on each of its dredging projects, to collect data on no-bid and high-bid solicitations. 
We found, however, that data for these solicitations were not consistently entered into the database across the Corps district offices responsible for entering it. In our review of the Corps’ dredging database, we found that only one district office entered data on no-bid and high-bid solicitations. Corps officials from several district offices told us that entering information into the database is tedious and time-consuming. They also indicated that they do not enter information for all data fields because the officials primarily use information from the database for planning and scheduling future dredging work, not for reviewing data on past solicitations or solicitations that did not result in an awarded contract, which would include no-bid and high-bid solicitations. Corps headquarters officials we spoke with recognized that tracking and analyzing data on no-bid and high-bid solicitations is important and could serve as a useful decision-making tool in planning future hopper dredging work. However, they have not provided written direction to the district offices to help ensure data on these solicitations are consistently entered into the database. According to officials we spoke with, they have not done so because of other higher-priority action items. The officials added that they have made efforts to ensure district offices consistently enter accurate and complete data into the dredging database, such as emphasizing this activity during periodic meetings with district offices. These outreach efforts have been targeted at entering data into the dredging database as a whole, however, and have not focused specifically on the importance of the data field for tracking no-bid or high-bid solicitations, according to the officials. Federal internal control standards state that management should develop written policies and procedures that staff are to follow as intended. 
Without complete data on no-bid and high-bid solicitations, the Corps may be missing opportunities to plan future hopper dredging work that identifies and addresses potential gaps in industry’s ability to fulfill certain dredging needs based on this solicitation information. In response to our recommendation that the Corps assess the data and procedures it uses to develop the cost estimates prepared when contracting dredging work to the hopper dredging industry, the Corps took several actions to improve its cost estimates, but some of the information it relies on remains outdated, such as dredge equipment cost information dating back to the late 1980s. In 2004, and again in 2008, the Corps took actions to evaluate and update certain cost data used in its cost estimates. In 2004, the Corps prepared an internal document that summarized the steps it took to analyze, evaluate, and update certain cost data used in its cost estimates. For example, according to the document, the Corps examined repair and maintenance costs for industry hopper dredges and updated some data for dredge engines. In 2008, the Corps partnered with the Dredging Contractors of America (DCA)—a national association for the dredging industry—to update industry cost data. Corps documentation related to the effort indicated that the Corps learned important information through discussions with industry, and a senior Corps cost-estimating official that we spoke with said that, on the basis of these discussions, the Corps updated the training it provides to Corps staff on preparing hopper dredge cost estimates. Some of the data the Corps uses in preparing its hopper dredging cost estimates, however, remain outdated despite the Corps’ attempts to update the information. Specifically, the Corps has not obtained updated technical data on industry hopper dredge equipment or labor rates but instead is relying on outdated information, some of which dates back to the late 1980s. 
During efforts to update the Corps’ cost-estimating data in 2008, the Corps prepared a survey to collect industry dredge equipment information from the five dredging companies that owned hopper dredges. In cooperation with the Corps, DCA sent the survey to the companies. In the August 2008 letter accompanying the survey, the dredging association stated that “much of the cost basis the Corps uses for industry dredges is old data and limited due to lack of industry input” and noted that the Corps’ ability to obtain the data would be mutually beneficial to the companies and the Corps. Among other things, data the survey sought to collect included costs of dredge acquisition, capital improvements, and certain types of repairs. Efforts to obtain these data were unsuccessful, however, due in part to industry’s concerns about sharing business-sensitive data with the Corps. Industry representatives from one hopper dredging company we spoke with explained that they were concerned that cost data provided to the Corps might become accessible to their competitors and therefore the data were not provided. A senior Corps cost-estimating official we spoke with told us that the Corps limits the release of cost data used in preparing cost estimates within the Corps and that updated industry cost data would assist the Corps in preparing its cost estimates for hopper dredge work. The official also stated that other efforts could be made to obtain updated cost data, including performing a Corps-wide study to evaluate information from each Corps district office with hopper dredging contracts or reviewing contract audits. The Corps, however, has no plans for conducting such a study. In conducting a study, the Corps could assess the most effective and efficient approach for obtaining updated cost data, including examining whether and to what extent it would base its study approach on a review of contracts or contract audits, working directly with industry, or other approaches. 
Federal internal control standards state the need for federal agencies to establish plans to help ensure goals and objectives can be met. A written plan would assist the Corps in obtaining updated cost data and following sound cost estimating practices, as described in our 2009 cost estimating and assessment guide, which is a compilation of cost-estimating best practices drawn from across government and industry. Obtaining reliable and up-to-date data are important for developing sound cost estimates, and the Corps’ cost estimate credibility may suffer if technical data are not updated and maintained, as noted in our cost estimating guide. In response to our 2003 recommendation that the Corps prepare a comprehensive analysis of the costs and benefits of existing and proposed restrictions on the use of the Corps’ hopper dredge fleet, the Corps prepared an analysis of its fleet for a 2005 report to Congress. In its report, the Corps analyzed a number of options for operating its hopper dredges and made a recommendation to Congress for adjusting its fleet based on costs and benefits outlined in its analysis. The Corps recommended an option that it said would, among other things, ensure there was a viable reserve capability ready to respond to unforeseen requirements and ensure the timely accomplishment and reasonable cost for federal projects requiring hopper dredges. Under the option it recommended, the Corps would have (1) increased the Essayons’s dredging by about 35 days, and kept the Yaquina’s dredging days the same; (2) continued to keep the Wheeler in ready reserve; and (3) retired the McFarland. The Water Resources Development Act of 2007 did not specifically address these recommendations, but instead placed the McFarland in ready reserve and removed the then-existing restrictions on the Essayons and Yaquina. 
Since 2003, statutory restrictions on the use of the Corps’ hopper dredges have resulted in additional costs, but it is unclear whether the restrictions have affected competition in the hopper dredging industry. Restrictions effectively limiting the number of days that Corps dredges can work have resulted in additional costs to the Corps, such as costs to maintain the ready reserve vessels while idle. On the other hand, the restrictions help ensure the Corps’ ability to respond to urgent and emergency dredging needs when industry dredges may be unavailable. The extent to which restrictions on the use of the Corps’ hopper dredges have affected competition in the dredging industry—as measured by the number of companies with hopper dredges and the number of bidders and winning bid prices for Corps projects—is unclear, based on our analysis of data on industry bids per Corps solicitation and other factors. Since 2003, statutory restrictions on the use of the four Corps’ hopper dredges—in particular, the Wheeler and the McFarland—have resulted in additional costs to the Corps. First, the vessels have needed annual funding to maintain them in ready reserve because, given their limited use, the Corps is unable to recoup their costs with revenues from dredging work. The Corps incurs many of the costs for its hopper dredges—such as paying a crew and keeping engines and other systems in ready working condition—regardless of how much the dredges are used. For instance, placing the McFarland in ready reserve resulted in a substantial decrease in its dredging work (as measured in days worked and amount of material removed) but a relatively small decrease in its operating costs. As shown in table 2, the average annual cubic yards of material removed by the McFarland declined by 60 percent, while its average annual operating costs declined by 16 percent. 
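The declines cited from table 2 are simple percentage changes in annual averages. A minimal sketch of the calculation follows; the before-and-after values below are hypothetical, chosen only to reproduce the reported 60 and 16 percent declines (the actual figures are in table 2 of the underlying report):

```python
def pct_decline(before, after):
    """Percentage decline from one period's annual average to the next."""
    return (before - after) / before * 100

# Hypothetical averages, chosen to match the reported declines.
yards_before, yards_after = 3.0, 1.2    # million cubic yards per year
cost_before, cost_after = 12.5, 10.5    # million dollars per year

assert round(pct_decline(yards_before, yards_after)) == 60
assert round(pct_decline(cost_before, cost_after)) == 16
```

The comparison illustrates the report's point: output fell far more sharply than operating costs, because much of a hopper dredge's cost is incurred whether or not it works.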
Annual funding needed to maintain the Wheeler and the McFarland in ready reserve, which is provided through the Corps’ civil works appropriation, has increased since 2003. Specifically, in fiscal year 2003, ready reserve funding for the Wheeler was $7.6 million, and it increased to $13.6 million in fiscal year 2012. In addition, the McFarland has received ready reserve funding of over $11 million each fiscal year since it was placed in ready reserve, resulting in total ready reserve funding for the vessels of over $25 million in fiscal year 2012 (see fig. 3). Second, the ready reserve restrictions have contributed to increases in the daily rate the Corps charges projects for use of the Wheeler’s service, and future increases in the McFarland’s daily rate may also be needed if it experiences unanticipated cost increases. Increases in daily rates may result in either increasing costs, fewer cubic yards of material removed, or both, for the projects that use the Wheeler and McFarland—primarily projects in the Delaware River and the Mississippi River mouth, respectively. Officials from Corps headquarters and district offices responsible for the ready reserve hopper dredges told us they set the dredges’ daily rates in part based on how many days they expect the dredges to work in the coming year and that, in the case of the Wheeler, the limited dredging days since being placed in ready reserve have contributed to higher daily rates. For instance, the Wheeler’s daily rate has increased from $75,000 in fiscal year 2003 to $140,000 in fiscal year 2012, and the Corps expects a rate of $165,000 during fiscal year 2014. Furthermore, although costs for industry hopper dredge work have also increased, officials from a Corps district office that historically used the Wheeler told us that they would now be reluctant to use the vessel instead of an industry hopper dredge because of its high daily rate. 
In the case of the McFarland, the Corps has increased the vessel’s daily rate from $94,000 in fiscal year 2009 (the last full fiscal year before it was placed in ready reserve) to $100,000 in fiscal year 2012, and officials said they planned to increase and then maintain the daily rate at $110,000 for the next several fiscal years. If there are unanticipated increases in costs for the McFarland, however, such as an unexpected increase in repair costs, Corps officials said they would likely have to increase the vessel’s daily rate to cover such costs. As the officials explained, they set the McFarland’s daily rate with an expectation that the vessel will work 70 days because the ready reserve restrictions do not allow them to increase the number of days the McFarland can work. Therefore, raising the vessel’s daily rate would be the Corps’ primary option to cover an increase in costs. On the West Coast, restrictions on the number of days the Corps’ hopper dredges Essayons and Yaquina could work had led to inefficiencies in completing their work before those restrictions were lifted by the Water Resources Development Act of 2007, according to Corps officials. Before the 2007 act, the Essayons and the Yaquina were restricted to working about 180 workdays annually and, for several years, they reached their operating limits and, therefore, had to return to port before the projects they were working on were finished. The dredges were then sent back to complete the projects once the new fiscal year began, which was in October when weather conditions had begun to deteriorate. As a result, the Corps incurred additional transit and payroll costs while returning to complete the projects. Since the restrictions on these dredges were removed under the 2007 act, Corps officials said they have not had to interrupt ongoing work due to operating limits on the dredges and have had greater flexibility regarding when to perform work. 
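The rate-setting logic described above for the ready reserve dredges, in which a capped number of working days leaves the daily rate as the only lever for recovering costs, can be sketched as follows. The formula and the annual cost figure are assumptions for illustration; the Corps' actual rate-setting method weighs several factors and is not reproduced here.

```python
def daily_rate(annual_costs, expected_days):
    # Assumed simplification: spread recoverable annual costs evenly
    # over the days the dredge is expected to work.
    return annual_costs / expected_days

# With the McFarland capped at 70 working days per year, a cost increase
# cannot be absorbed by working more days; only the rate can move.
# The $7.7 million annual cost figure is hypothetical, chosen to yield
# the planned $110,000 daily rate mentioned in the text.
rate = daily_rate(annual_costs=7_700_000, expected_days=70)
assert rate == 110_000

# An unanticipated $700,000 repair bill pushes the rate up $10,000 per day.
rate_after_repair = daily_rate(7_700_000 + 700_000, 70)
assert rate_after_repair == 120_000
```

The same arithmetic explains why the Wheeler's limited dredging days contributed to its rising daily rate: fewer expected working days means each day must carry a larger share of the vessel's fixed costs.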
The ready reserve restrictions on the Wheeler and McFarland help ensure that they are available to the Corps for responding to urgent and emergency dredging needs, especially in the regions where the dredges are stationed. Demand for hopper dredging often varies substantially from year to year, and month to month, due in part to severe weather events such as hurricanes and floods, other events such as the Deepwater Horizon oil spill in 2010, or environmental restrictions that limit dredging work to certain months of the year. This variability has resulted in periods of high demand during which the Corps has used its ready reserve hopper dredges to respond to urgent or emergency dredging needs when industry hopper dredges were not available. As the Corps noted in its 2005 report to Congress, having the Wheeler in ready reserve is important to ensure that the vessel is available when unforeseen dredging needs occur, while more fully utilizing the Wheeler could limit the Corps’ capability to respond to peak workload demands. Specifically, the Corps has used the Wheeler to respond to urgent or emergency dredging needs 15 times during fiscal years 2003 through 2012. In these cases, according to Corps documents, industry dredges were unavailable to immediately respond to time-sensitive dredging needs at the mouth of the Mississippi River, and the Corps was able to quickly move the Wheeler to the site and conduct the work. Similarly, local pilots and a local port authority we spoke with told us that the McFarland has been critical in addressing dredging needs on the Delaware River and Bay, where the vessel is stationed in ready reserve. Since its placement in ready reserve at the end of 2009, the Corps has used the McFarland to respond to urgent or emergency needs 4 times. 
Industry representatives from most dredging companies we spoke with agreed that there is a need for Corps hopper dredges, specifically those placed in ready reserve, to respond to urgent or emergency situations when industry hopper dredges are unavailable. Since 2003, the extent to which restrictions on the use of the Corps’ hopper dredges have affected competition in the dredging industry—as measured by the number of companies with hopper dredges and the number of bidders and winning bid prices for Corps projects—is unclear. A possible benefit of restrictions on the amount of work performed by the Corps’ hopper dredges is that the increased demand for industry hopper dredging services could encourage existing firms to add dredging capacity or new firms to enter the market, which could promote competition, raising the number of bidders and lowering winning bid prices for hopper dredging contracts. In addition, according to dredging industry representatives we spoke with, the more industry dredges can be utilized instead of Corps dredges, the lower the contract prices will be because contractors can spread their costs over more days of operation. However, on the basis of our analysis of (1) the dredging industry, (2) the number of bidders and bid prices for Corps dredging contracts, and (3) other factors that may have affected the level of competition for hopper dredging contracts, it is unclear whether or to what extent the restrictions on the Corps’ hopper dredges may have increased the level of competition in the hopper dredging industry. First, since 2003, the number of companies with hopper dredges in the United States has not changed, although the number of industry hopper dredges and the total size of these dredges have decreased. Specifically, at the end of 2013, five companies operated one or more hopper dredges. The same number of companies operated hopper dredges in 2003. 
Of the five companies we reported on in 2003, two sold their hopper dredges and exited the hopper dredging market, two new companies that had not been in the market acquired hopper dredges, and three companies remained the same. Since 2003, the total number of industry vessels decreased from 16 to 13, and the total capacity of these vessels, as measured in cubic yards, decreased by 16 percent. The decrease from 16 to 13 vessels resulted from one company relocating four of its U.S. hopper dredges overseas to perform dredging work primarily in the Middle East, while another company built a new hopper dredge for the U.S. market. In addition, as of January 2014, one company had begun building a new hopper dredge that it expects will be completed in late 2014 or early 2015, and another company announced plans to build a new hopper dredge that it expects will be completed in 2015. If no companies remove existing hopper dredges from the U.S. market, these two dredges, if built as planned, would increase total industry capacity to 13 percent above 2003 levels. According to industry representatives with whom we spoke, dredging companies consider restrictions on the Corps’ hopper dredges in deciding whether to acquire or build a new hopper dredge, but they also consider other factors, such as anticipated funding levels by the Corps, as well as nonfederal work. Second, we did not find evidence of increased competition based on the number of bidders and winning bid prices for Corps hopper dredging projects since 2003. Economic principles suggest that an increase in the number of competitive bidders in the market should lead to lower prices. The correlation between the number of companies competing for hopper dredging contracts and the winning bid prices for those contracts is demonstrated by the Corps’ historical data. As shown in figure 4, in
years where there were more industry bids per Corps solicitation, the average winning industry bid, as a percentage of the Corps’ cost estimate, was generally lower, consistent with economic principles. Moreover, available Corps data related to the placement of the McFarland in ready reserve do not show evidence of increased competition in the dredging industry. Specifically, as shown in table 3, after the McFarland was placed in ready reserve, average winning bid prices increased for East Coast maintenance projects (i.e., projects the McFarland might undertake if use of the vessel were not restricted), and the average number of bids for those same projects decreased slightly. Third, other factors aside from the ready reserve restrictions may have affected the level of competition in the dredging industry since 2003. Examples of such factors include the following: Environmental restrictions. Multiple Corps officials and industry representatives told us that environmental restrictions related to endangered sea turtles and other species—which prohibit dredging during the time of year that those species are present—have contributed to fewer bidders for hopper dredging projects, particularly on parts of the East Coast. For instance, because of environmental restrictions, navigation dredging in fiscal year 2014 is limited to December 15, 2013, through March 31, 2014, in much of the Corps’ South Atlantic Division, during which time there are 48 potential Corps dredging projects planned, according to a 2013 Corps planning document. Corps officials attributed the absence of awardable bids for several recent East Coast hopper dredging solicitations to the unavailability of industry hopper dredges when the projects were scheduled to occur—during the period of high demand for hopper dredges caused by environmental restrictions. In addition, they expressed concern that similar shortages of bids could occur in the future. Coordination among Corps district offices. 
Increased coordination in scheduling hopper dredging projects across Corps district offices has helped distribute projects more evenly over time so that more companies had hopper dredges available with which to bid on projects, according to Corps officials. In contrast, when a large number of projects occur at the same time, dredging companies may not have enough dredges available to bid on all projects, thereby reducing the number of bidders for the projects. According to Corps officials we spoke with, increased regional coordination and sharing of up-to-date information on upcoming dredging needs across district offices has helped the Corps to better inform industry of planned work and align the scheduling of projects with the availability of industry dredges. In particular, Corps officials said increased coordination helped the Corps avoid scheduling too many projects simultaneously during a period of increased demand for hopper dredging work following Hurricane Sandy and a Gulf Coast rebuilding effort to protect against the coastal impacts of oil spills. Demand for nonfederal hopper dredging work. Corps officials and industry representatives also told us that demand for hopper dredging work from states, private sources, and foreign governments has reduced the number of industry hopper dredges available for Corps projects. For instance, following the Deepwater Horizon oil spill in 2010, there was an increase in private and state funding for hopper dredge work to construct barrier islands to protect the coastline from the effects of the oil spill. Demand for hopper dredges for this work affected the dredges’ availability for Corps navigation projects, according to Corps documents and officials, and industry representatives. 
In addition, representatives from one company said that, in part, because of increasing demand for hopper dredges from foreign governments—specifically in the Middle East—the company relocated several hopper dredges overseas, removing them from the U.S. market. Differences in hopper dredge capabilities. Because there are important variations in the size and capabilities of hopper dredges, the requirements of specific dredging projects can result in a limited number of dredges that may be able to effectively compete for a particular dredging project. For instance, the state of California requires hopper dredges to use reduced-emissions engines, in accordance with state air quality regulations. Of the 13 industry hopper dredges, only 3 have such engines, according to a Corps official. Similarly, according to Corps documents, a hopper dredge working at the mouth of the Columbia River in Oregon must be able to dredge against strong currents and endure large waves—capabilities that less than half of the industry fleet possesses, according to a Corps official. Other requirements, such as the depth of the waterway being dredged, or whether the material removed needs to be pumped onto the shore, can also limit which dredges can effectively compete for and carry out the work. Key challenges the Corps faces in managing its hopper dredge fleet are (1) ensuring the fiscal sustainability of its hopper dredges and (2) making decisions about the future of its hopper fleet composition, including the utilization of its existing fleet, changes to its existing fleet—including repairs, and the replacement or retirement of any vessels—and the utilization of any new replacement vessels. The Corps faces challenges in ensuring the fiscal sustainability of its hopper dredges. 
In a 2012 study the Corps conducted on the fiscal condition of its hopper dredges, it identified increasing ownership and operating costs for its four hopper dredges, among other things, as a cause for concern and stated that the dredges would become unaffordable unless actions were taken. For instance, the Corps’ study projected that, in fiscal year 2012, the Corps’ total end of fiscal year account balance for its four hopper dredges would exceed their funding levels by over $15 million, and that fiscal problems would continue for the four hopper dredges through fiscal year 2016. The Corps stated in the study that it was concerned that project funding, which the Corps’ hopper dredges depend on to varying degrees, was not increasing and, in some cases, was decreasing. The Corps’ 2012 study identified several actions to take to operate all of its hopper dredges with a positive account balance by the end of fiscal year 2015. For example, based on the study, a corresponding July 2012 implementation memorandum, and our discussions with Corps officials, the Corps increased the daily rates all four of the Corps’ hopper dredges charge to projects that use the dredges, beginning in fiscal year 2012; increased funding in fiscal years 2013 and 2014 budgets for projects that use its hopper dredges to compensate for the vessels’ corresponding increases in daily rates; and formed a team to conduct a hopper dredge operating cost review including, among other things, an evaluation of the affordability of two hopper dredges, the Wheeler and the Yaquina, by June 30, 2014. A Corps official told us that the August 2013 grounding accident the Essayons experienced while dredging made the vessel inoperable for about a month while it underwent repairs. In the case of the Wheeler, a Corps official estimated that the delays in replacing the Wheeler’s engines caused the vessel to remain out of operation at least 4 months more than the Corps initially planned.
In addition, during this time, a cruise vessel broke free from its moorings during a storm and collided with the Wheeler when it was in the repair yard, which further delayed the Wheeler’s return to work, according to the Corps. Corps officials said that, although the grounding accident increased the Essayons’ deficit by about $2 million, increasing the vessel’s daily rate in fiscal year 2014 and dredging work in fiscal years 2014 and 2015 would give the vessel a positive account balance. Corps officials acknowledged that the Wheeler’s situation was more precarious because it ended fiscal year 2013 with a deficit of over $5 million more than projected in the Corps’ 2012 study, given the engine replacement delay. To get the Wheeler to a positive account balance by the end of fiscal year 2015, Corps officials said that they anticipated increasing the Wheeler’s daily rate and potential dredging activity to more than 70 days under ready reserve in fiscal year 2014. Corps officials said they believe they have some flexibility with the number of days the vessel can dredge since there is not a set amount specified in statute. Corps officials stated they are not planning further actions beyond those identified in the 2012 study at this time, but they acknowledged that additional measures, such as pursuing a permanent increase in the number of days that the Wheeler may dredge each year under ready reserve, might be warranted if the vessel’s fiscal situation does not improve by the end of fiscal year 2014. The Corps also faces challenges in making decisions about the future composition of its hopper dredge fleet. Some of the factors that make it difficult for the Corps to determine what composition of its fleet would best allow it to conduct dredging activities in the manner most economical and advantageous to the United States include the following: Aging Corps’ fleet.
The aging of the Corps’ hopper fleet, contrasted with the millions of dollars the Corps has invested to upgrade the vessels, has made it challenging for the Corps to determine the long-term sustainability of its hopper dredges. Three of the Corps’ four hopper dredges—Essayons, Wheeler, and Yaquina—have been in service for at least 30 years, and the McFarland has been in service over 45 years. According to Corps documentation, the Corps plans a 50-year investment life for its hopper dredges and, based on historical records, major repairs are typically needed when a dredge is about 30 years old. Since 2009, the Corps has invested millions of dollars in replacing and upgrading needed equipment on its four hopper dredges. For example, among other things, the Essayons, Wheeler, and Yaquina all had their engines replaced within the last 5 years, allowing them to meet higher air quality emission standards. Similarly, the McFarland’s electrical systems were replaced in fiscal year 2011, which increased the vessel’s efficiency, since many of the systems were original equipment. According to Corps documents and officials, overall, all four hopper dredges are in good operating condition, but given the age of the vessels, the Corps has recognized the need to assess future repair or replacement options for its hopper dredges. Effects on industry. Because the Corps relies on both its own dredges and industry dredges to complete hopper dredging work, it needs to factor in both fleets in making future decisions about the composition of its own fleet. As of March 2014, the 13 hopper dredges in the U.S. industry fleet had been in service for an average of about 27 years, though information on the extent to which these vessels have been maintained, upgraded, or may be close to going out of service has not been shared by industry with the Corps.
During a discussion with industry representatives, however, the representatives said that the hopper dredging industry is driven by competition, and they maintain their dredges to be as efficient as possible to improve their competitiveness in the market. Corps officials from several district offices we spoke with said that, because of the increasing use of industry hopper dredges for nonfederal beach nourishment projects, as well as anticipated increases in federal hopper dredging projects, industry’s availability to respond to the nation’s navigation dredging needs may be stretched. These officials said that, as a result, maintaining the Corps’ current fleet composition, and perhaps increasing the use of some of the vessels, may be warranted. In contrast, most of the industry representatives we spoke with said they believe that industry has the ability to handle any increases in dredging projects, and the Corps’ fleet should be further restricted or even reduced. These representatives stated that if the Corps increased its hopper dredge capability, then industry’s portion of the overall dredging work would be reduced, possibly leading companies to increase prices to cover their operating costs or potentially relocate their hopper dredges overseas. Funding uncertainties. Variability regarding federal funding for dredging also poses challenges to the Corps’ plans for its fleet. While funding for hopper dredging has increased since fiscal year 2003 and was about $370 million in fiscal year 2012, Corps officials and stakeholders we spoke with said that, at recent funding levels, there were substantial unmet hopper dredging needs, such as providing dredging for small ports and harbors. (The Corps’ 2011 Minimum Fleet Capital Investment Report encompassed all 10 dredges in the Corps’ minimum dredge fleet, which includes the 4 hopper dredges reviewed in this report and 6 other dredges of different types that are generally used for different dredging projects.)
Among the actions identified in the Corps’ 2011 study was conducting a life-cycle cost analysis to support funding plans for future dredging needs, which would include a cost comparison to either (1) use and then replace the vessels or (2) repair and sustain the vessels. The 2011 study developed options based on three funding scenarios—increased, sustained, or decreased—and, as stated in the study and the Corps’ implementation memorandum, the Corps selected the option associated with sustained funding levels as the best course of action. Should increased funding become available for dredging, a Corps official we spoke with said the Corps may need to adjust its planned course of action. The official said that the 2011 study could provide the Corps with direction for adjusting its actions. For example, as noted in the study under the increased funding scenario, the Corps could continue with its planned fleet improvements instead of deferring them under the sustained option. Hopper dredges play a vital role in keeping the nation’s ports, harbors, and other waterways open for commerce. Over the past several decades, the Corps has increasingly relied on industry to carry out hopper dredging work, but it has also maintained its own minimum fleet of four hopper dredges, in part to ensure its ability to respond to critical dredging needs during periods of high demand. The Corps is faced with the task of balancing the hopper dredging work it contracts out to industry and maintaining the viability of its own fleet. The Corps has recognized the need to make changes to manage its hopper dredge fleet in a fiscally sustainable manner and has taken several actions to do so, including assessing the need to potentially modify the composition of its fleet.
Since our 2003 report, the Corps has also made progress in addressing our recommendations to improve the information it maintains to manage its hopper dredging program, including modifying data fields in its dredging database to track solicitations that receive no bids or where all the bids received exceeded the Corps’ cost estimate by more than 25 percent. However, because Corps district offices are not consistently populating the database with these solicitation data, the Corps does not have accurate or complete information that may help it identify potential gaps in industry’s ability to fulfill certain dredging needs, which could inform its plans for future hopper dredging work. Additionally, the Corps made attempts to update the industry cost data it uses to prepare its cost estimates for hopper dredging contracts. Yet, some of the data it relies on remain outdated, and the Corps has no plans to update the information, such as through a Corps-wide study. Until the Corps has a plan for obtaining and then consistently updating reliable cost data, the Corps’ ability to ensure the soundness of its cost estimates may suffer. We recommend that the Secretary of Defense direct the Corps of Engineers to take the following two actions: To ensure the Corps of Engineers has the information it needs to analyze and make informed decisions regarding future hopper dredging work, provide written direction to its district offices on the importance of and need to accurately and consistently populate the data fields in its dredging database that track solicitations that receive no bids or where all the bids received exceeded the Corps’ cost estimate by more than 25 percent. 
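The two conditions this recommendation asks district offices to record can be expressed as a simple screen over solicitation results. This is a minimal sketch of that decision rule only; the function name and record format are hypothetical illustrations, not the Corps’ actual database schema.

```python
# Sketch of the screening rule behind the recommended data fields: flag
# solicitations that drew no bids, or where every bid received exceeded
# the Corps' cost estimate by more than 25 percent. The record format
# here is hypothetical, for illustration only.

def flag_solicitation(cost_estimate, bids):
    """Return a flag string if the solicitation meets either tracked condition."""
    if not bids:
        return "no bids"
    if all(bid > 1.25 * cost_estimate for bid in bids):
        return "all bids exceed estimate by >25%"
    return None  # at least one bid was within 25% of the estimate

print(flag_solicitation(1_000_000, []))                      # no bids
print(flag_solicitation(1_000_000, [1_300_000, 1_400_000]))  # all bids exceed estimate by >25%
print(flag_solicitation(1_000_000, [1_200_000, 1_400_000]))  # None
```

Consistently recording which solicitations trip either condition is what would let the Corps spot gaps in industry’s ability to fulfill certain dredging needs.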
To assist the Corps in preparing sound and credible cost estimates for soliciting bids for hopper dredge work by industry, develop a written plan for conducting a study to obtain and periodically update data on hopper dredging costs for its cost estimates, including reliable data on industry hopper dredge equipment and labor rates. We provided a draft of this report to the Department of Defense and the Dredging Contractors of America (DCA) for review and comment. In its written comments, reprinted in appendix III, the Department of Defense concurred with our recommendations and stated that (1) the Corps will issue a letter to the district offices reinforcing the need to provide accurate and timely information in the Corps’ dredging database, including information for solicitations that receive no bids or where all the bids received exceeded the Corps’ cost estimate by more than 25 percent, and (2) the Corps will develop a written plan as resources allow. The Corps also provided technical comments that we incorporated, as appropriate. DCA provided written comments, which are summarized below and reprinted in appendix IV along with our responses. DCA neither agreed nor disagreed with our recommendations but disagreed with several statements in our report and raised objections to certain aspects of our scope and methodology. We disagree with DCA’s comments, as discussed below. Specifically, in its comments, DCA disagreed with our statement that a direct and valid comparison of work performed by industry to work performed by the Corps is not possible and stated that a third-party consultant performed an analysis of the Corps and industry hopper dredges performing similar work. According to DCA’s comments, the industry hopper dredges can work for significantly less than Corps dredges.
As we state in our report, we believe that a number of factors prohibit a direct and valid comparison of the Corps’ and industry’s costs of performing hopper dredge work, including limits to the number of days some Corps’ dredges may operate and differences between dredging projects, such as the type of material dredged. In providing its estimates of cost savings for industry dredging, DCA did not provide information indicating how or whether it took such factors into account or to enable us to evaluate the reasonableness of its estimates. DCA also questioned how, if one of the fundamental conclusions of our study is that the Corps has not made sufficient progress to improve the accuracy of its cost estimates, we could use those same government cost estimates to make industry competitiveness inferences. We concluded, however, that it is unclear whether statutory restrictions have affected competition in the hopper dredging industry. In reaching that conclusion, we analyzed a number of factors—including the number of companies with hopper dredges, the number of bidders and winning bid prices for Corps projects, and other factors such as environmental restrictions, the demand for nonfederal hopper dredging work, and differences in hopper dredge capabilities. We agree that obtaining reliable and up-to-date data are important for developing sound cost estimates, and our report recommends that the Corps develop a written plan for conducting a study to obtain and periodically update data on hopper dredging costs for its cost estimates. DCA disagreed with our discussion on the capacity of the industry hopper dredge fleet, stating specifically that one industry dredge, the Long Island, should not have been included in our analysis because it had not been used for maintenance dredging and had not been used on a project for quite a few years. 
For our report, we did not limit our analysis to particular types of hopper dredging projects, such as maintenance projects, and we compared industry’s total capacity today with what we reported in 2003, which we believe is a valid comparison. Moreover, in its comments on our 2003 report on hopper dredging, DCA included the Long Island in its list of industry dredges to support its point that industry hopper dredging capacity had increased in the decade leading up to 2003. As a result, we continue to believe it was appropriate to include the Long Island as a part of our analysis. DCA stated that our analysis of how the Corps manages its hopper dredges was not comprehensive or objective and questioned why we did not examine options for retiring or further reducing the use of the Corps’ dredges. DCA suggested that such an examination should take place and would be in line with the congressional intent of increasing the use of private industry dredges. However, DCA quotes selectively from the main statute that governs the Corps' hopper dredging activities. While those portions of the law read in isolation could suggest that the Corps should take further steps to privatize its hopper dredge work, other provisions of the same law either (1) give the Corps broad discretion to implement its hopper dredge responsibilities or (2) directly restrict the Corps' ability to reduce or eliminate Corps dredges. It was not the purpose of our report to examine policy options for carrying out the Corps’ hopper dredge work, including those not presently authorized under statute. We did examine and discuss actions the Corps has taken or plans to take in managing its hopper dredges, which include, among other things, conducting a hopper dredge operating cost review and evaluating retirement or replacement options. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Secretary of Defense; the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers; the appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report examines (1) the actions the Corps has taken to address our 2003 recommendations for improving the information needed to manage its hopper dredging program and develop cost estimates for industry contracts; (2) the effects since 2003, if any, of the statutory restrictions placed on the use of the Corps’ hopper dredges; and (3) key challenges, if any, the Corps faces in managing its hopper dredge fleet. To conduct our work, we reviewed Pub. L. No. 95-269, which established the Corps’ minimum fleet; the Water Resources Development Acts of 1996 and 2007; and other laws, regulations, and Corps policy and guidance governing the Corps’ use of hopper dredges. We interviewed officials from Corps headquarters, division offices, and the 9 Corps district offices with the largest hopper dredging workload during fiscal year 2003 through fiscal year 2012 (out of a total of 17 district offices that contracted with industry for hopper dredging work during the time period): Galveston, Jacksonville, Mobile, New Orleans, New York, Philadelphia, Portland, San Francisco, and Seattle. We also visited the Corps’ four hopper dredges and one industry hopper dredge for informational tours of these vessels to gain a better understanding of their physical characteristics and operations.
We interviewed representatives from the national association for the dredging industry, the Dredging Contractors of America, and the five dredging companies that own and operate hopper dredges—Cashman Dredging, Dutra Group, Great Lakes Dredge & Dock Company, Manson Construction Co., and Weeks Marine, Inc. We also interviewed other stakeholders involved in hopper dredging, including a national pilots’ association and a national port authority association, and local pilots’ associations and port authorities from the areas where Corps hopper dredges are stationed—New Orleans, LA; Philadelphia, PA; and Portland, OR. We focused our review on the 10-year period between fiscal year 2003—when we conducted our previous review of the Corps’ hopper dredges—and fiscal year 2012—the most recent year for which Corps information on hopper dredging was readily available. In addition, we focused our review on the four hopper dredges in the Corps’ minimum dredge fleet during the period of our review—the Essayons, McFarland, Wheeler, and Yaquina—and did not include other dredge types. We analyzed data from the Corps’ dredging database, including the numbers of bids and bid prices for the contracts. To assess the reliability of the data, we interviewed officials from the Corps’ Navigation Data Center who maintain the database, as well as officials from nine Corps district offices who are responsible for entering and updating data on their district offices’ dredging activities. We reviewed documentation related to the database, such as the user’s guide and data dictionary, and electronically tested the data for missing or erroneous values and, in several cases, obtained updated or corrected data from the Corps. We determined the data we used on the type and location of the dredging work, the type of contract, and the number of industry bids and bid prices for sealed-bid solicitations were sufficiently reliable for our purposes.
We also analyzed financial data on the Corps’ hopper dredges, including their operating and ownership costs, and income from ready reserve funding. To assess the reliability of the Corps’ financial data, we interviewed Corps officials who maintain these data, compared the data to other sources of information on the Corps’ hopper dredges, and obtained clarifying information from the Corps for certain items such as ready reserve funding. We determined the data were sufficiently reliable for our purposes. We obtained and reviewed information from the five dredging companies that own and operate hopper dredges, including information on their hopper dredges’ capabilities, dredging work they performed, and changes to their hopper dredge fleet since 2003. We did not directly compare work performed by industry hopper dredges with work performed by the Corps’ hopper dredges because, as we first reported in 2003, a direct and valid comparison of the Corps’ and industry’s costs to perform hopper dredge work is not possible due to various factors. We also reviewed the Corps’ 2012 study on the fiscal condition of its hopper dredges. In addition to reviewing the 2012 fiscal study, we obtained and analyzed additional data related to the financial condition of the Corps’ hopper dredges. We also obtained and reviewed the Corps’ 2012 and 2013 implementation memorandums related to both studies and discussed with Corps officials the actions the Corps has taken—and plans to take—related to the memorandums. We examined changes and potential challenges the Corps faces related to managing its hopper dredge fleet, including dredging accidents, repair delays, and potential funding changes. We discussed general Corps fleet management and composition options with industry officials and the other stakeholders we interviewed. We conducted this performance audit from January 2013 to April 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As of March 2014, 17 hopper dredges were operating in the United States, 13 of which were owned by industry (see table 4). In addition, 2 industry hopper dredges are expected to be added to the U.S. fleet by 2015. The following are GAO’s comments on the letter from the Dredging Contractors of America dated March 11, 2014. 1. We believe that various factors prohibit a direct and valid comparison of the Corps’ and industry’s costs to perform hopper dredge work including: (1) design features in the Corps’ vessels in support of national defense missions, which add weight to the vessels and make them less efficient than industry dredges; (2) limits to the number of days some of the Corps’ vessels may operate; and (3) differences between dredging projects—such as type of material dredged, type of work, corresponding risk level, and distance from the dredging operations to the placement site. In providing its estimates of cost savings for industry dredging, DCA provided no information indicating how or whether its third-party consultant took such factors into account. DCA also did not provide enough information on the consultant’s analysis for us to be able to determine how it reached its conclusions that industry dredges can work for less than Corps dredges. Based on our work, we continue to believe, as we state in our report, that since 2003, statutory restrictions on the use of Corps’ hopper dredges have resulted in additional costs to the Corps. 2. DCA referred to three appendixes in their written comments. These appendixes included Excel spreadsheets with various dredging data. We did not reprint these spreadsheets with DCA’s written comments. 3. 
It was not the purpose of our report to evaluate policy options for carrying out the Corps’ hopper dredge work, including those not presently authorized by law, such as vessel retirements or alternative ready reserve methods. The Corps’ authority to retire its hopper dredges or reduce their workload is limited by statute, and DCA did not indicate why it believes retirements would be consistent with existing law. According to statute, the Corps "may not further reduce the readiness status of any Federal hopper dredge below a ready reserve status except any vessel placed in such status for not less than 5 years that the Secretary determines has not been used sufficiently to justify retaining the vessel in such status." The Corps has made no such determination. In addition, the Corps may "not reduce the availability and utilization of Federal hopper dredge vessels stationed on the Pacific and Atlantic coasts below that which occurred in fiscal year 1996 to meet the navigation dredging needs of the ports on those coasts." In the Water Resources and Development Act of 2007, Congress directed the Corps to place the McFarland in ready reserve. But even assuming this provision implicitly repealed the prior statute as applied to the McFarland, the Water Resources and Development Act of 2007 provided that the McFarland must be maintained in a "ready reserve fully operational condition." Similarly, the law requires the Wheeler to be maintained in a "fully operational condition." Furthermore, the law assigns to the Corps the responsibility for carrying out hopper dredge work "in the manner most economical and advantageous to the United States." This language "evidences congressional intent to confer on the Army Corps wide discretion in matters relating to its dredging activities." 4. We used only the Dredging Information System data that we determined were sufficiently reliable for our purposes. 
Specifically, as noted in our report, we used data on the type and location of dredging work, the type of contract, and the number of industry bids and bid prices for sealed-bid solicitations. DCA stated that, with the introduction of Multiple Award Task Order Contracting, our analysis of the number of bidders and bid prices may be distorted. As noted in our report, however, we limited our analysis to awarded, sealed-bid solicitations for which the Corps had reliable data on the numbers of bids and bid prices, and we did not include the procurement method mentioned by DCA. Our analysis of the Dredging Information System data indicates that about 76 percent of hopper dredging contracts awarded by the Corps from fiscal year 2003 through fiscal year 2012 (and about 89 percent of hopper dredging contracts awarded in fiscal year 2012 alone) were awarded through the sealed-bid process. 5. In characterizing urgent and emergency work in our report, we relied on the definitions outlined in the Corps’ "raise the flag" procedure, which we believe was the appropriate way to define and report on how the Corps collects and tracks the urgent or emergency work its hopper dredges carry out. Corps data show that urgent and emergency work occurred from fiscal year 2003 through fiscal year 2012, as we state in our report. 6. We did not comment on the lack of evidence of increased competition based solely on the number of bidders and winning bid prices for Corps hopper dredging projects. Rather, we reached our conclusion—that it is unclear whether statutory restrictions have affected competition in the hopper dredging industry—after analyzing a number of factors, including the number of companies with hopper dredges, the number of bidders and winning bid prices for Corps projects, and other factors such as environmental restrictions, the Corps’ efforts to better coordinate dredging activities, demand for nonfederal hopper dredging work, and differences in hopper dredge capabilities. 
See also comment 4. 7. We did not make industry competitiveness inferences based on the Corps’ cost estimates alone; see comment 6. We agree that obtaining reliable and up-to-date data is important for developing sound cost estimates, and our report recommends that the Corps develop a written plan for conducting a study to obtain and periodically update data on hopper dredging costs for its cost estimates. 8. We included the industry hopper dredge Long Island as available hopper dredge capacity in 2003, based on information provided by the Corps and DCA. In official comments on our 2003 report on hopper dredging, DCA included the Long Island in its list of industry dredges to support the point that industry hopper dredging capacity had increased in the decade leading up to 2003. This dredge has since been removed from the U.S. market, and we therefore factored its removal into our calculation of the change in overall industry capacity since 2003. We included all hopper dredging projects in our analysis and did not limit our analysis to maintenance projects. In addition, we did not examine use, but rather industry capacity. 9. During interviews with the industry representatives who owned the dredges that were removed from the U.S. market, we were told that the dredges were moved overseas in part because of increasing demand for hopper dredges by foreign governments and that the dredges have performed work overseas, indicating overseas demand. We also recognize that a lack of work in the United States may have been a factor in the relocation of these dredges, and we have added text to our report to note this. 10. We used the Corps’ definition of its minimum hopper dredge fleet in determining the scope of our review. The law establishing the minimum fleet gave the Corps discretion to determine the fleet’s size and composition. 
In addition, the capacity of the four Corps hopper dredges ranges from about 1,050 cubic yards to about 8,300 cubic yards, which is similar to the capacity of private industry hopper dredges, which ranges from 1,300 cubic yards to 13,500 cubic yards. In contrast, the capacities of the Murden and the Currituck are 512 and 315 cubic yards, respectively, making them significantly smaller than the hopper dredges in the Corps’ and private industry’s fleets. Moreover, the Murden was commissioned into active duty in May 2013 and was, therefore, not part of the Corps’ fleet during the period of our review, fiscal year 2003 through fiscal year 2012. 11. The law makes no reference to "training days" and does not impose a specific cap on the number of days for which the Wheeler may operate. The Corps has, as a matter of practice, scheduled training work for the Wheeler in order to "periodically perform routine tests of the equipment of the vessel to ensure the vessel's ability to perform emergency work." 12. An examination of using industry dredges in a ready reserve mode was beyond the scope of this review. 13. In our report, we make frequent references to the fact that legislation placed the Wheeler and the McFarland in ready reserve, and we provide funding information for the Corps’ dredging program, including the specific funding to support the Wheeler and McFarland in their ready reserve status. We did not identify alternatives for how the Corps might reduce the costs to operate these vessels, but we did examine and discuss actions the Corps has taken or plans to take in managing its hopper fleet, which include, among other things, conducting a hopper dredge operating cost review and evaluating retirement or replacement options. 14. Hopper dredges recover their costs by actively dredging, and fewer days of work equate to higher rates when work is performed because there are fewer days over which to spread out costs. 
As noted in our report, daily rates for Corps hopper dredges have increased and may continue to increase due to several factors, such as increasing fuel costs and changes in Corps accounting methods, in addition to ready reserve restrictions on two of the dredges. We did not quantify the extent to which individual factors contributed to increases in daily rates; rather, we report that restrictions on the number of days ready reserve hopper dredges can work have contributed to increases in their daily rates. We agree that the annual costs and daily rates of the Essayons, which operates on the West Coast with no restrictions, have increased since it became unrestricted. However, we found that the increase in the Essayons’ daily rate from $95,000 in fiscal year 2008—the last year in which it was restricted—to $100,000 in fiscal year 2012 was substantially smaller than that of the Wheeler, whose daily rate increased from $95,000 to $140,000 over the same period. 15. We agree that one basic congressional tenet of the Water Resources Development Act of 1996 was to increase the use of private industry hopper dredges but, as we have noted, the law also directly restricts the Corps’ ability to reduce the use of or eliminate Corps dredges. See comment 3. We do not agree that collecting more solicitation information would result in enhanced opportunities for the Corps’ hopper dredges to be used more. Rather, we believe that in collecting this solicitation information, the Corps may be able to better plan for future hopper dredging work, whether done by industry dredges or Corps dredges. 16. Based on our review of Corps documentation related to the example cited, we found that industry was provided several opportunities to bid on the work. 
Specifically, after soliciting bids for the work and receiving only one bid, which was more than 25 percent above the government cost estimate, the Corps reviewed its cost estimate, found it to be reasonable, and began negotiations with the company that had submitted the bid. The parties were unable to agree on a price for the work, however, so the Corps then provided a second notification to industry, indicating that there was an urgent need for dredging. According to Corps documentation, no dredging company expressed both the availability and the capability to address the dredging need and, therefore, the Corps used one of its own dredges to complete the work. In addition to the individual listed above, Alyssa M. Hundrup, Assistant Director; Hiwotte Amare; John Delicath; Cindy Gilbert; Miles Ingram; Richard P. Johnson; Delwen Jones; Kirk D. Menard; Samuel Morris; Mehrzad Nadji; Dan Royer; and Tatiana T. Winger made key contributions to this report.
The Corps is responsible for dredging sediment from waterways to maintain shipping routes important for commerce. One dredge type, a hopper dredge, performs much of the dredging in ports and harbors, and the Corps uses its own fleet of hopper dredges and contracts with industry to carry out the work. In 2003, GAO examined the Corps' hopper dredging program and made recommendations to improve its management. GAO was asked to review changes to the program. This report examines (1) actions the Corps has taken to address GAO's 2003 recommendations for improving the information needed to manage its hopper dredging program and develop cost estimates for industry contracts; (2) effects since 2003, if any, of the statutory restrictions placed on the use of the Corps' hopper dredges; and (3) key challenges, if any, the Corps faces in managing its hopper dredge fleet. GAO reviewed laws, regulations, and policies governing the Corps' use of hopper dredges, and related Corps reports. GAO analyzed dredging contract and financial data for fiscal years 2003-2012, assessed the reliability of these data, and interviewed Corps and dredging stakeholders. The U.S. Army Corps of Engineers (Corps) has taken actions to address GAO's 2003 recommendations for improving information related to hopper dredging, but some data gaps remain. First, in response to GAO's recommendation to obtain and analyze data needed to determine the appropriate use of its hopper dredge fleet, the Corps established a tracking log to document urgent or emergency work its dredges carry out. The Corps also modified its dredging database to track solicitations for industry contracts that received no bids and bids exceeding the Corps' cost estimate by more than 25 percent, referred to as high bids. Corps district offices, however, do not consistently enter data on these solicitations, and Corps headquarters has not provided written direction to the district offices to ensure data are consistently entered. 
Tracking and analyzing no-bid and high-bid solicitation data could enable the Corps to identify and address gaps in industry's ability to fulfill certain dredging needs as the Corps plans its future hopper dredging work. Second, in response to GAO's recommendation, the Corps took action to assess the data and procedures it used for developing cost estimates when soliciting industry contracts. However, certain industry cost data the Corps relies on remain outdated. For example, some of the data it uses on hopper dredge equipment date back to the late 1980s. A senior Corps official stated that a study could be conducted to update the data, but the Corps has no plans to conduct such a study. Having a plan for obtaining updated data is important for developing sound cost estimates. Statutory restrictions on the use of the Corps' hopper dredges since 2003 have resulted in costs to the Corps, but the effect on competition in the hopper dredging industry is unclear. Restrictions limiting the number of days that Corps dredges can work have resulted in additional costs such as costs to maintain certain Corps dredges while they are idle; the Corps incurs many of the costs for owning and operating its hopper dredges regardless of how much they are used. The restrictions, however, help ensure the Corps has the ability to use these dredges to respond to urgent or emergency dredging needs when industry dredges are unavailable. It is not clear to what extent restrictions have affected competition in the dredging industry. The number of U.S. companies with hopper dredges has not changed, but the number and size of these dredges have decreased since 2003. In addition, GAO did not find evidence of increased competition based on the number of bidders and winning bid prices for Corps hopper dredging projects since 2003. 
Key challenges facing the Corps in managing its hopper dredge fleet are (1) ensuring the fiscal sustainability of its hopper dredges and (2) determining the fleet's appropriate future composition. In 2012, the Corps determined that because of increasing ownership and operating costs, among other things, its hopper dredges would become unaffordable unless actions were taken, including increasing the daily rates charged to projects using the Corps' dredges. Factors such as the aging of the Corps' fleet and the effect on industry of possible changes to the Corps' fleet make it difficult for the Corps to determine the best fleet composition. In studies it conducted in 2011 and 2012, the Corps identified actions that could help address these challenges, such as reviewing the operating costs of hopper dredges to evaluate the affordability of certain dredges. GAO recommends the Corps provide written direction to its district offices on consistently populating its database with no-bid and high-bid solicitations and develop a written plan for a study to obtain and periodically update certain hopper dredging cost data for its cost estimates. The Department of Defense concurred with the recommendations.
AOC and its major construction contractors have moved the CVC project forward since the Subcommittee’s June 14 hearing, although the majority of the selected milestones scheduled for completion by today’s hearing have not been completed on time. According to the construction management contractor, the base project’s construction was about 70 percent complete as of June 30, compared with about 65 percent as of May 31. The sequence 1 contractor, Centex Construction Company, which was responsible for the project’s excavation and structural work, has continued to address punch-list items, such as stopping water leaks. Although AOC had expected the sequence 1 contractor to complete the punch-list work and be off-site by June 30, some of this work remains to be done. The sequence 1 contractor has closed its on-site project office and plans to send workers back to the site to complete the remaining work. AOC has retained funds from the sequence 1 contractor that it believes will be sufficient to cover the cost of the remaining work. Furthermore, the sequence 2 contractor, which is responsible for the mechanical, electrical, plumbing, and finishing work, has continued to make progress in these areas, including erecting masonry block, placing concrete, and installing finish stone, drywall framing, plaster, and granite pavers. Many of the granite pavers that were installed on the plaza deck for the inauguration have to be replaced because of problems with quality or damage after installation. The sequence 2 contractor plans to replace these pavers when the plaza deck will no longer be needed for deliveries of construction materials. The sequence 2 contractor has also continued work on the utility tunnel, and in June, AOC executed a sequence 2 contract modification to construct the House connector tunnel. AOC expects this work to begin soon. 
As the Subcommittee requested, we worked with AOC to select sequence 2 milestones that the Subcommittee can use to help track the project’s progress from the Subcommittee’s May 17 hearing to July 31. We and AOC selected 22 milestones, of which 11 were scheduled for completion before June 14, 6 others before July 14, and 5 others before July 31. These milestones are shown in appendix 1 and include activities on the project’s critical path, as well as other activities that we and AOC believe are important for the project’s timely completion. As we reported during the Subcommittee’s June 14 hearing, AOC’s sequence 2 contractor completed 6 of the 11 selected activities scheduled for completion before that date—3 were completed on time and 3 were late. The remaining 5 activities had not been completed as of June 14. Of these 5, 4 have now been completed; as of July 12, 1 remained incomplete. In addition, as of July 12, the contractor was late in completing 1 of the 6 selected activities scheduled for completion between June 14 and July 14 and had not yet completed the remaining 5. AOC does not expect these delays to extend the project’s scheduled September 2006 completion date because it believes that the sequence 2 contractor can recover the lost time. A few months ago, AOC expected the utility tunnel to be operational in October 2005, but it extended that date to March 20, 2006, before the June hearing. The June schedule shows the tunnel being operational on March 7. The sequence 2 contractor has indicated that the impact of the October-to-March delay on CVC construction could be mitigated by using temporary dehumidification equipment, adding more workers to certain utility tunnel activities, or both. However, this mitigation approach would increase the government’s costs. 
We previously identified the utility tunnel as a project schedule and cost risk because of possible unforeseen conditions associated with underground work, and AOC and the sequence 2 contractor believe that such risk still exists with respect to the remaining tunnel work. Given this risk and the importance to the rest of the project of having the utility tunnel operational as soon as possible, AOC has asked the project team to explore options for accelerating the completion of the work necessary to begin the tunnel’s operations. We agree with AOC that delays in making this tunnel operational could have significant adverse effects on other project elements and that priority attention should be given to this area. Accelerating work may be cost-beneficial in this case. Since the June 14 hearing, the sequence 2 contractor has also encountered unforeseen conditions that, according to AOC’s construction management contractor, could delay the installation of stone on the Capitol’s East Front. Unless mitigated, this delay, in turn, could delay AOC’s estimated September 15, 2006, opening date. In fact, the June schedule shows a 24-day delay for this work, which is on the project’s critical path, and therefore pushes AOC’s scheduled date for opening CVC to the public to October 19, 2006. AOC and its construction management contractor are assessing the situation and expect to have more information on this problem within the next month. However, they believe that they will be able to recover the lost time by resequencing work, although they acknowledge that their mitigation approach would require sufficient stone to be available. The project has not been receiving stone in the quantities set forth in the delivery schedule—a risk that we previously identified—and AOC and its contractors have been taking action to address this problem, but have not yet resolved it. 
Mitigating this potential delay in East Front stone installation could increase the government’s costs if the mitigation involves, among other actions, expediting the installation to recover lost time. Our May 17 and June 14 statements contained several observations on AOC’s management of the project’s schedules, including our view that problems in this area contributed to slippage in the project’s scheduled completion date and additional project costs associated with delays. The statements also discussed recommendations we had already made to AOC to enhance its schedule management. AOC had agreed with these recommendations and had generally begun to implement them, but we believed that it still needed to give priority attention to them to keep the project on track and as close to budget as possible. An updated discussion follows of the issues that need AOC’s priority attention, along with current information on the status of AOC’s actions to address these issues. Having realistic time frames for completing work and obtaining fully acceptable schedules from contractors. Over the course of the project, AOC’s schedules have shown dates for completing tasks that project personnel themselves considered optimistic or unlikely to be met. In addition, the master project schedule (prepared by AOC’s construction management contractor) that AOC was using in May 2005 (the April schedule that AOC said it would use as a baseline for measuring progress on the project) did not tie all interrelated activities together and did not identify the resources to be applied for all the activities, as AOC’s contract requires. During the Subcommittee’s June 14 hearing, AOC said that it would reassess the time scheduled for tasks by today’s hearing. 
Since the Subcommittee’s June 14 hearing, AOC’s construction management and sequence 2 contractors reviewed the reasonableness of the time scheduled for 14 critical or near-critical activities and determined that, in general, the time shown in the May 2005 schedule reasonably reflected the time required to perform 11 of these activities. In addition, the sequence 2 contractor agreed to provide more detail about the 3 remaining activities so that the reasonableness of the time scheduled for them could be reviewed later. Although the contractors’ review did not involve a detailed, data-based analysis of the time scheduled for activities using such information as crew size and worker productivity, AOC’s construction management contractor said that it would do such analyses in the future, as appropriate. The construction management contractor said it has not yet done such an analysis for stonework because, to date, less stone has been delivered to the site than was expected and more stone workers have been available than could be used, given the shortage of stone. In AOC’s view, this stone shortage has begun to delay important activities, and as we previously indicated, AOC is working with its contractors to resolve the problem. According to AOC’s construction management contractor, both the project’s May and June 2005 master schedules (1) reflect significant improvement in the linkage of interrelated tasks, although the contractor recognizes that more work needs to be done in this area and (2) generally provide sufficient information to manage the project’s resources. However, the contractor also recognizes the need for the sequence 2 and other contractors to continue adding more detail to the activities scheduled for some project elements, such as the exhibit and expansion spaces, so that more of the interrelated activities will be linked in the schedule. 
The contractor also said that it will be continuously reassessing the extent to which construction contractors identify the resources they plan to apply to meet scheduled completion dates, as contractually required. Both adding detail to activities and identifying the resources to be applied are helpful in assessing the reasonableness of the time scheduled and in managing contractors’ performance. The sequence 2 contractor has provided a separate schedule showing its target dates for adding more detail to 30 project tasks. On July 8, AOC’s construction management contractor accepted the April project schedule, subject to several conditions. Because the May 2005 master schedule for the CVC project contains additional detail on activities and information on resources to be applied, we agree with AOC’s construction management contractor that this schedule represents an improvement over earlier schedules. However, we still have concerns about the extent to which the schedule links related activities, which the construction management contractor has agreed to address, and about whether AOC’s September 15, 2006, target date for opening the facility to the public is realistic. For the following reasons, we continue to believe that the project is more likely to be substantially completed in the December 2006 to March 2007 time frame than by September 2006: Because of unforeseen site conditions and other problems, AOC’s construction contractors have had difficulty meeting a number of milestones. The project still faces risks and uncertainties that could adversely affect its schedule. As we noted in our June 14 testimony, the number of critical and near-critical paths the construction management contractor has identified complicates schedule management and increases the risk of problems that could lead AOC to miss the scheduled completion date. Like the project’s May 2005 schedule, the June schedule shows seven paths that are critical or near critical. 
Among the critical paths are East Front stonework and some interior stonework, which slipped by 24 days and 3 days in June, respectively. In addition, some other interior stonework that is not generally on a critical path, such as the installation of wall stone in the Great Hall, has slipped by about 4 months since April because of stone shortages, according to AOC. Continued slippages in interior stonework could make it difficult for the sequence 2 contractor to meet the September 15, 2006, completion date. Although the CVC project team believes that it can recover this time, its ability to do so is not yet clear, given the stone supply problem facing the project. Furthermore, although work on the utility tunnel progressed during June, the tunnel work continues to face risks and uncertainties that could delay the project, and the May and June schedules show that the start and finish dates for a number of activities have continued to slip. Although it is possible for AOC to recover this time, continued slippage could push so many activities to later dates that the contractors may not be able to complete all the work in the remaining available time. In our opinion, AOC lacks reasonable assurance that its contractors have accurately estimated the time necessary to complete work for a number of activities in the schedule. Although the construction management contractor’s recent review of how much time is needed to complete schedule activities was helpful, we are still concerned about the reasonableness of the time allowed for a number of the activities. For example, one of the activities reviewed in June whose scheduled duration was found to be generally reasonable was final occupancy inspections. 
Although AOC’s Fire Marshal Division is to do critical work associated with this activity, the duration review that took place since the June 14 hearing occurred without any input from that division, which is to conduct fire safety and occupancy inspections for the project and approve its opening to the public. The Chief Fire Marshal told us that although coordination has improved between his office and the CVC project team, he has not always had an opportunity to review project documentation early in the process and has not yet received the project schedule. As a result, he was uncertain whether the schedule provided enough time for his office to do its work. For example, as of July 8, he had not yet received documentation for the fire protection systems, which his office needs to examine before it can observe tests of these systems, as the CVC team has already requested. The Fire Marshal Division will also be involved in fire alarm testing; the construction management contractor plans to assess the duration of this activity later, after more detail is added to the schedule. In addition, at the time the construction management contractor performed its duration reassessment of East Front stonework, the project was experiencing difficulty getting stone deliveries on time. It is unclear to us how the duration of the stonework could have been determined to be reasonable given this problem and the lack of a clear resolution at the time. The May 2005 schedule includes a number of base project activities that could be completed after September 15, 2006, even though their completion would seem to be important for CVC to be open to the public. Such activities include installing security systems, kitchen equipment, and theater seating. According to the schedule, the late finish dates for these activities are after September 15. The late finish date is the latest date that an activity can be completed without delaying the scheduled completion date for the entire project. 
According to the construction management contractor, a number of activities in the schedule that are important to CVC’s opening were not linked to the September 15 opening date in the schedule. The contractor agreed to address this issue. Last week, we began to update our risk assessment of the project’s schedule and plan to have this update completed in September. AOC has also engaged a consultant to perform a risk assessment of the project’s schedule and expects the assessment to be done by mid-September. We believe that better information on the likelihood of AOC’s meeting its September 15, 2006, opening date will be available after our update and AOC’s schedule risk assessment are done. Aggressively monitoring and managing contractors’ adherence to the schedule, including documenting and addressing the causes of delays, and reporting accurately to Congress on the status of the project’s schedule. We noted in our May 17 testimony that neither AOC nor its construction management contractor had previously (1) adhered to contract provisions calling for monthly progress review meetings and schedule updates and revisions, (2) systematically tracked and documented delays and their causes as they occurred or apportioned their time and costs to the appropriate parties on an ongoing basis, and (3) always accurately reported on the status of the project’s schedule. On June 7 and July 8, AOC, its construction management contractor, the sequence 2 contractor, and AOC’s schedule consultant conducted the first and second monthly reviews of the schedule’s status using a newly developed approach that we discussed during the Subcommittee’s June 14 hearing. Additionally, on June 28, we met with AOC and its construction management contractor to discuss how delays are to be analyzed and documented in conjunction with the new approach to schedule management. 
During that meeting, AOC’s construction management contractor agreed to have its field supervisors document delays and their causes on an ongoing basis and its project control engineer summarize this information for discussion at the monthly schedule reviews. After assessing the new approach and observing the first two review sessions, we believe that, if effectively implemented and sustained, this approach should generally resolve the schedule management concerns we previously raised, including how delays will regularly be handled and how better information on the status of the project will be provided to Congress. As we indicated on June 14, we are encouraged by the construction management contractor’s addition of a full-time project control engineer to the project and have seen noteworthy improvements in schedule management since his arrival. Nevertheless, we plan to closely monitor the implementation of this new approach, including the resources devoted to it, the handling of delays, and the accuracy of the information provided to Congress. Developing and implementing risk mitigation plans. While monitoring the CVC project, we have identified a number of risks and uncertainties that could have significant adverse effects on the project’s schedule and costs. Some of these risks, such as underground obstructions and unforeseen conditions, have already materialized and have had the anticipated adverse effects. We believe the project continues to face risks and uncertainties, such as unforeseen conditions associated with the project’s remaining tunnels, the East Front, and other work; scope gaps or other problems associated with the segmentation of the project between two major contractors; and shortages in the supply of stone and skilled stone workers. 
As discussed during the Subcommittee’s June 14 hearing, AOC has not yet implemented our recommendations that it develop risk mitigation plans for these types of risks and uncertainties, but it has agreed to do so by mid-September. On July 1, AOC added assistance in risk mitigation to the scope of its contract with its schedule consultant. Preparing a master schedule that integrates the major steps needed to complete CVC construction and the steps necessary to prepare for operations. A number of activities, such as obtaining operators’ input into the final layouts of retail and food service areas, hiring and training staff, procuring supplies and services, and developing policies and procedures, need to be planned and carried out on time for CVC to open to the public when construction is complete. Although AOC has started to plan and prepare for CVC operations, as we indicated in our May 17 and June 14 testimonies, it has not yet developed a schedule that integrates the construction activities with the activities that are necessary to prepare for operations. The Subcommittee requested such a schedule during its April 13, 2005, hearing on AOC’s fiscal year 2006 budget request. Because it lacked funds, AOC had not been able to extend the work of a contractor that had been helping it plan and prepare for operations. During the week of June 6, AOC received authority to spend the funds needed to re-engage this contractor, and on June 30, AOC awarded a contract for the continued planning and preparation for CVC operations. Now that AOC has re-engaged its operations planning contractor, we believe that close coordination between AOC staff working with this contractor and the CVC project’s construction team will be especially important for at least two reasons. 
First, the operations planning contractor’s scope of work includes both the design of certain space within the CVC project and the wayfinding signs that are to be used within the project, and the timing and content of this work needs to be coordinated with CVC construction work. Second, about $7.8 million is available for either CVC construction or operations, and it will be important for AOC to balance the need for both types of funding to ensure optimal use of the funds. Moreover, it is not clear to us who in AOC will be specifically responsible for integrating the construction and operations schedules and for overseeing the use of the funds that are available for either construction or operations. As we said during the Subcommittee’s May 17 and June 14 hearings, we estimate that the cost to complete the construction of the CVC project, including proposed revisions to its scope, will range from about $522 million without provision for risks and uncertainties to about $559 million with provision for risks and uncertainties. As of July 11, 2005, about $483.7 million had been provided for CVC construction. In its fiscal year 2006 budget request, AOC asked Congress for an additional $36.9 million for CVC construction. AOC believes this amount will be sufficient to complete construction and, if approved, will bring the total funding provided for the project’s construction to $520.6 million. Adding $1.7 million to this amount for additional work related to the air filtration system that we believe will likely be necessary brings the total funding needed to slightly more than the previously cited $522 million. AOC believes that it could obtain this $1.7 million, if needed, from the Department of Defense, which provided the other funding for the air filtration system. 
AOC’s $36.9 million budget request includes $4.2 million for potential additions to the project’s scope (e.g., congressional seals, an orientation film, and storage space for backpacks) that Congress will have to consider when deciding on AOC’s fiscal year 2006 CVC budget request. AOC has not asked Congress for an additional $37 million (the difference between $559 million and $522 million) that we believe will likely be needed to address the risks and uncertainties that continue to face the project. These include, but are not limited to, shortages in the supply of stone, unforeseen conditions, scope gaps, further delays, possible additional requirements or time needed because of life safety or security changes or commissioning, unknown operator requirements, and contractor coordination issues. These types of problems have been occurring, and as of June 30, 2005, AOC had received proposed sequence 2 change orders whose costs AOC now estimates exceed the funding available in fiscal year 2005 for sequence 2 changes by about $1.3 million. AOC’s estimate of these change order costs has grown by about $900,000 during the past 4 weeks. AOC plans to cover part of this potential shortfall by requesting approval from the House and Senate Committees on Appropriations to reprogram funds that AOC does not believe will be needed for other project elements. At this time, AOC does not believe that it will need additional funds in fiscal year 2005, assuming it receives reprogramming authority for sequence 2 changes, unless it reaches agreement with the sequence 2 contractor on the costs associated with 10 months’ worth of delays that have already occurred. If AOC needs funds for this purpose or for other reasons, it can request approval from the Appropriations Committees to use part of the $10.6 million that Congress approved for transfer to the CVC project from funds appropriated for Capitol Buildings operations and maintenance. 
For several reasons, we believe that AOC may need additional funds for CVC construction in the next several months. These reasons include the pace at which AOC is receiving change order proposals for sequence 2 work, the problems AOC has encountered and is likely to encounter in finishing the project, the uncertainties associated with how much AOC may have to pay for sequence 2 delays, and uncertainty as to when AOC will have fiscal year 2006 funds available to it. For example, AOC is likely to incur additional costs for dehumidification or for additional workers to mitigate the expected delay in the utility tunnel. AOC may also incur more costs than it expects for certain activities, such as those necessary to support security during the remainder of the project’s construction. AOC may be able to meet these needs as well as the other already identified needs by obtaining approval to use some of the previously discussed $10.6 million and by additional reprogramming of funds. However, these funds may not be sufficient to address the risks and uncertainties that may materialize from later this fiscal year through fiscal year 2007. Thus, while AOC may not need all of the $37 million we have suggested be allowed for risks and uncertainties, we believe that, to complete the construction of CVC’s currently approved scope, AOC is likely to need more funds in fiscal years 2006 and 2007 than it has already received and has requested. Although the exact amount and timing of AOC’s needs are not clear, we believe that between $5 million and $15 million of this $37 million may be required in fiscal year 2006. Effective implementation of our recommendations, including risk mitigation, could reduce AOC’s funding needs. Since the Subcommittee’s June 14 hearing, three issues related to the project’s costs have emerged that we believe should be brought to your attention. Discussion of these issues follows. 
First, coordination within the CVC project team and between the team and AOC’s Fire Marshal Division has been an issue, especially with respect to the project’s fire protection systems. Although the CVC project team established biweekly meetings with Fire Marshal Division staff in March 2005 to enhance coordination, gaps in coordination have, as discussed, already led to uncertainty about whether enough time has been scheduled for fire alarm testing and for building occupancy inspections. Such gaps have also increased the costs associated with the fire protection system. For example, AOC recently took contractual action costing over $90,000 to redesign the mechanical system for the Jefferson Building connection to the Library of Congress tunnel to meet the Fire Marshal Division’s fire safety requirements. According to the Chief Fire Marshal, he was not given the opportunity to participate in the planning process before the design of the Jefferson Building connection was substantially completed. In addition, several fire-safety-related contract modifications and proposed change orders for additional work now total over $3.5 million. With better coordination between the CVC project team and the Fire Marshal Division, the need for some of this work might have been avoided or identified sooner, and had this work been identified during the original competition, the price would have been subject to competitive pressures that might have resulted in lower costs. Because of the fire protection system’s increasing costs, disagreements within the CVC team and between the team and the Fire Marshal Division over fire safety requirements, problems in scheduling fire safety activities, and other related issues, we suggested that AOC take appropriate steps to address the coordination of fire protection activities related to the CVC project. AOC agreed and has taken action. 
For example, starting this week, AOC’s Fire Marshal Division agreed to have a staff member work at the CVC site 2 days a week, and AOC CVC staff recently agreed to provide the necessary documentation to the Fire Marshal Division before its inspections or observations were needed. Second, as we indicated earlier in our testimony, we are concerned about the integration of planning, scheduling, and budgeting for CVC construction and operations. While the CVC project team has been overseeing CVC construction, other AOC staff have been assisting the operations planning contractor in planning and budgeting for CVC operations. Close coordination between the two groups will be especially important in the next few months, when decisions will likely have to be made on how to use the $7.8 million remaining from the $10.6 million that Congress made available to the CVC project for either operations or construction. The Architect of the Capitol agreed to give this issue priority attention. Finally, we are concerned that AOC may incur additional costs for interim measures, such as temporary walls that it may have to construct to open CVC to the public in September 2006. Such interim measures may be needed to make the project safe for visitors if some other construction work has not been completed. For example, AOC may have to do additional work to ensure adequate fire protection for CVC, since the House and Senate expansion spaces are not scheduled to be done until March 2007. In addition, AOC may have to accelerate some work to have it completed by September 15, 2006. 
While it is not necessarily unusual to use a facility for its intended purpose before all construction work is complete, we believe that it will be important for Congress to know what additional costs AOC expects to incur to open CVC by September 15, 2006, so that Congress can weigh the costs and benefits of opening the facility then rather than at a later date, such as March 2007, when AOC plans to complete the House and Senate expansion spaces. To ensure that (1) Congress has sufficient information for deciding when to open CVC to the public and (2) planning and budgeting for CVC construction and operations are appropriately integrated, we recommend that the Architect of the Capitol take the following two actions:

- In consultation with other appropriate congressional organizations, provide Congress with an estimate of the additional costs that it expects will be incurred to open CVC to the public by September 15, 2006, rather than later, such as after the completion of the House and Senate expansion spaces.
- Promptly designate who is responsible for integrating planning and budgeting for CVC construction and operations and give this activity priority attention.

AOC agreed to take the actions we are recommending. According to AOC, information on the estimated costs of the additional work necessary to open CVC to the public in September 2006 may not be available until this fall. In addition, AOC said that the recent re-engagement of the contractor assisting AOC in planning for CVC operations and the hiring of an executive director for CVC, which AOC plans to do in the next few months, are critical steps for integrating CVC construction and operations. Mr. Chairman, this completes our prepared statement. We would be happy to answer any questions that you or other Subcommittee Members may have. For further information about this testimony, please contact Bernard Ungar at (202) 512-4232 or Terrell Dorn at (202) 512-6923.
Other key contributors to this testimony include Shirley Abel, Maria Edelstein, Elizabeth Eisenstadt, Brett Fallavollita, Jeanette Franzel, Jackie Hamilton, Bradley James, Scott Riback, and Kris Trueblood.

Scheduled for completion between 5/17/05 and 6/14/05:
- Wall Stone Area 3 Base Support
- Wall Stone Layout Area 4
- Saw Cut Road at 1st Street
- Wall Stone Area 4 Base Support
- Wall Stone Layout Area 5
- Masonry Wall Lower Level East
- Wall Stone Area 5 Base Support
- Wall Stone Layout Area 6
- Drill/Set Soldier Piles at 1st Street
- Wall Stone Area 6 Base Support

Scheduled for completion between 6/15/05 and 7/31/05:
- Wall Stone Layout Area 8
- Wall Stone Layout Area 9
- Wall Stone Area 9 Base Support
- Wall Stone Installation Area 2
- Wall Stone Installation Area 3
- Wall Stone Installation Area 4
- Wall Stone Area 9 Base
- Concrete Working Slab 1st Street
- Waterproof Working Slab Station 0-1 Utility Tunnel (7/29/05)

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the Architect of the Capitol's (AOC) progress in achieving selected project milestones and in managing the Capitol Visitor Center (CVC) project's schedule since Congress's June 14 hearing on the project. We will also discuss the project's costs and funding, including the potential cost impact of schedule-related issues. Our observations today are based on our review of schedules and financial reports for the CVC project and related records maintained by AOC and its construction management contractor, Gilbane Building Company; our observations on the progress of work at the CVC construction site; and our discussions with AOC's Chief Fire Marshal and CVC project staff, including AOC, its major CVC contractors, and representatives of an AOC schedule consultant, McDonough Bolyard Peck (MBP). We did not perform an audit; rather, we performed our work to assist Congress in conducting its oversight activities. AOC and its major construction contractors have made progress on the project since Congress's June 14 hearing, but work on some of the selected milestones scheduled for completion by today's hearing is incomplete; some work has been postponed; and some new issues have arisen that could affect the project's progress. Largely because of past problems, remaining risks and uncertainties, and the number of activities that are not being completed on time, we continue to believe that the project is more likely to be completed in the December 2006 to March 2007 time frame than in September 2006. AOC and its construction management contractor have continued their efforts to respond to two recommendations we made to improve the project's management--having a realistic, acceptable schedule and aggressively monitoring and managing adherence to that schedule. 
However, we still have some concerns about the amount of time scheduled for some activities, the extent to which resources can be applied to meet dates in the schedule, the linkage of related activities in the schedule, and the integration of planning for completing construction and starting operations. Since Congress's last CVC hearing, AOC has engaged contractors to help it respond to two other recommendations we made--developing risk mitigation plans and preparing a master schedule that integrates the major steps needed to complete construction with the steps needed to prepare for operations. AOC has also been taking a number of actions to improve coordination between the CVC project team and AOC's Fire Marshal Division. Insufficient coordination in this area has already affected the project's schedule and cost, and could do so again if further improvements are not made. We continue to believe that the project's estimated cost at completion will be between $522 million and $559 million, and that, as we have previously indicated, AOC will likely need as much as $37 million more than it has requested to cover risks and uncertainties to complete the project. At this time, we believe that roughly $5 million to $15 million of this $37 million is likely to be needed in fiscal year 2006, and the remainder in fiscal year 2007. In the next 2 to 3 months, AOC plans to update its estimate of the project's remaining costs. We will review this estimate and provide Congress with our estimate together with information on when any additional funding is likely to be needed. During the next several months, AOC is likely to face competing demands for funds that can be used for either CVC construction or operations, and it will be important for AOC to ensure that the available funds are optimally used. 
Finally, we are concerned that AOC may incur costs to open the facility to the public in September 2006 that it would not incur if it postponed the opening until the remaining construction work is largely or fully complete--that is, until March 2007, according to AOC's estimates.
National default and foreclosure rates rose sharply from 2005 through 2009 to the highest level in 29 years (fig. 1). Default rates climbed from 1.09 percent to 5.09 percent, and foreclosure start rates—representing the percentage of loans that entered the foreclosure process each quarter— grew almost threefold, from 0.42 percent to 1.2 percent. Put another way, over half a million mortgages entered the foreclosure process in the fourth quarter of 2009, compared with about 174,000 in the fourth quarter of 2005. Finally, foreclosure inventory rates rose over 350 percent over the 4-year period, increasing from 0.99 percent to 4.58 percent, with most of that growth occurring after the second quarter of 2007. As a result, over 2 million loans were in the foreclosure inventory as of the end of 2009. Foreclosure starts declined in the last quarter of 2009, but the number of defaults continued to climb. Foreclosure is a legal process that a mortgage lender initiates against a homeowner who has missed a certain number of payments. The foreclosure process has several possible outcomes but generally means that the homeowner loses the property, typically because it is sold to repay the outstanding debt or repossessed by the lender. The foreclosure process is usually governed by state law and varies widely by state. Foreclosure processes generally fall into one of two categories—judicial foreclosures, which proceed through courts, and nonjudicial foreclosures, which do not involve court proceedings. The legal fees, foregone interest, property taxes, repayment of former homeowners’ delinquent obligations, and selling expenses can make foreclosure extremely costly to lenders. Options to avoid foreclosure include forbearance plans, short sales, deeds in lieu of foreclosure, and loan modifications. With forbearance plans and loan modifications, the borrower retains ownership of the property. With short sales and deeds in lieu of foreclosure, the borrower does not. 
In March 2009, Treasury issued the first HAMP guidelines for modifying first-lien mortgages in an effort to help homeowners avoid foreclosure. The goal of the first-lien mortgage modification program is to reduce the monthly payments of struggling homeowners to more affordable levels—specifically, 31 percent of household income. According to Treasury, HAMP was intended to offer reduced monthly payments to up to 3 to 4 million homeowners. Under the first-lien modification program, Treasury shares the cost of reducing the borrower's monthly mortgage payments with mortgage holders/investors and provides various financial incentives to servicers, borrowers, and mortgage holders/investors for loans modified under the program for 5 years. To be eligible for a first-lien loan modification:

- the property must be owner occupied and the borrower's primary residence;
- the property must be a single-family property (1 to 4 units) with a maximum unpaid principal balance on the unmodified first-lien mortgage that is equal to or less than $729,750 for a 1-unit property;
- the loan must have been originated on or before January 1, 2009; and
- the monthly first-lien mortgage payment must be more than 31 percent of the homeowner's gross monthly income.

Borrowers have until December 31, 2012, to be accepted into the first-lien modification program. HAMP also includes other subprograms that, for example, offer incentives to modify or pay off second-lien loans of borrowers whose first mortgages were modified under HAMP and to pursue foreclosure alternatives when a HAMP modification cannot be offered.

The HAMP first-lien modification program has four main features:

1. Cost sharing – Mortgage holders/investors will be required to take the first loss in reducing the borrower's monthly payments to no more than 38 percent of the borrower's income.
Treasury will then use TARP funds to match further reductions on a dollar-for-dollar basis, down to the target of 31 percent of the borrower's gross monthly income. The modified monthly payment is fixed for 5 years or until the loan is paid off, whichever is earlier, as long as the borrower remains in good standing with the program. After 5 years, the payment may increase by 1 percent a year to a cap of the Freddie Mac rate for 30-year fixed rate loans as of the date that the modification agreement is prepared.

2. Standardized net present value (NPV) test – The NPV test compares expected cash flows from a modified loan to the same loan with no modification. If the expected cash flow with a modification is greater than the expected cash flow without a modification, the loan servicer is required to modify the loan. According to Treasury, the NPV test increases mortgage holder/investor confidence and helps ensure that borrowers are treated consistently under the program by providing a transparent and externally derived objective standard for all loan servicers to follow.

3. Standardized waterfall – Servicers must follow a sequential modification process to reduce payments to 31 percent of gross monthly income. Servicers must first capitalize accrued interest and expenses paid to third parties. Next, interest rates must be reduced to the higher of 2 percent or a level that achieves the 31 percent debt-to-income target. If the debt-to-income ratio is still over 31 percent, servicers must then extend the amortization period of the loan up to 40 years. Finally, if the debt-to-income ratio is still over 31 percent, the servicer must forbear—defer—principal until the payment is reduced to the 31 percent target. Servicers may also forgive mortgage principal at any step of the process to achieve the target monthly payment ratio of 31 percent.

4.
Incentive payment structure – Treasury will use HAMP funds to provide both one-time and ongoing ("pay-for-success") incentives to loan servicers, mortgage holders/investors, and borrowers to increase the likelihood that the program will produce successful modifications over the long term and help cover the servicers' and investors' costs of modifying a loan. Prior to HAMP, many servicers offered their own loan modification programs, but the vast majority of these loan modifications increased or did not change the borrower's monthly mortgage payment. Rather, the focus of these programs was on bringing delinquent loans current by adding past due interest, advances for taxes or insurance, and other fees to the loan balance. Some of these loan modifications changed the interest rate or remaining term of the loan but typically focused on reducing payments to 38 rather than 31 percent of the borrower's gross monthly income. For example, FDIC's IndyMac Federal Bank loan modification program, on which HAMP is partially based, initially reduced payments to 38 percent of the borrower's gross monthly income before subsequently revising the payment target to 31 percent. Many servicers continue to offer non-HAMP loan modifications for borrowers who do not qualify for HAMP. Appendix I provides examples of non-HAMP loan modification programs and an overview of other federal foreclosure prevention programs. Treasury first announced HAMP in February 2009 and issued the first implementation guidelines in March 2009. Since then, Treasury has issued 11 supplemental directives for the HAMP program, 8 of them for the first-lien modification program (fig. 2).
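The standardized waterfall described above is, in effect, a sequential algorithm: reduce the rate to a 2 percent floor, then extend the term to 40 years, then forbear principal, stopping as soon as the payment reaches the 31 percent target. The sketch below illustrates that ordering with the standard amortization formula; the loan figures, the 0.125-point rate step, and the $1,000 forbearance step are illustrative assumptions, and escrow items such as taxes and insurance, as well as capitalization of arrears, are omitted.

```python
# Sketch of the HAMP standardized waterfall: reduce the interest rate (floor
# 2%), then extend the term (cap 40 years), then forbear principal, stopping
# as soon as the monthly payment reaches 31% of gross monthly income.
# Simplified: escrow and capitalization of arrears are omitted; the
# 0.125-point rate step, $1,000 forbearance step, and loan figures are
# illustrative assumptions.

def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def hamp_waterfall(principal, annual_rate, months, gross_monthly_income):
    target = 0.31 * gross_monthly_income
    # Step 1: reduce the rate in 0.125-point steps, but not below 2 percent.
    while annual_rate > 0.02 and monthly_payment(principal, annual_rate, months) > target:
        annual_rate = max(0.02, annual_rate - 0.00125)
    # Step 2: extend amortization in 1-year steps, up to 40 years (480 months).
    while months < 480 and monthly_payment(principal, annual_rate, months) > target:
        months += 12
    # Step 3: forbear (defer, interest-free) principal until the target is met.
    forborne = 0.0
    while monthly_payment(principal - forborne, annual_rate, months) > target:
        forborne += 1000
    return annual_rate, months, forborne, monthly_payment(principal - forborne, annual_rate, months)

rate, months, forborne, payment = hamp_waterfall(
    principal=222_000, annual_rate=0.075, months=360, gross_monthly_income=4_000)
print(f"rate={rate:.3%}, term={months} months, forborne=${forborne:,.0f}, payment=${payment:,.2f}")
```

In this hypothetical case the rate reduction alone reaches the 31 percent target, so the later steps never fire; with a larger loan relative to income, the term extension and forbearance steps would engage in turn.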
The early supplemental directives tended to focus on basic implementation issues, but the later directives resulted in significant changes to the program—for example, requiring servicers to send written denial notices to borrowers, streamlining the process used by servicers for evaluating borrowers, and requiring that servicers verify borrowers' income before initiating trial modifications. As of March 9, 2010, 113 servicers had signed HAMP Servicer Participation Agreements to modify loans not owned or guaranteed by the government-sponsored enterprises (GSE) Fannie Mae and Freddie Mac. Roughly $36.9 billion in TARP funds have been allocated to these servicers for modification of non-GSE loans. These servicers include national financial institutions such as Bank of America, Wells Fargo, and JP Morgan Chase and national servicing organizations such as GMAC Mortgage and Ocwen. Fannie Mae and Freddie Mac required all servicers of loans that they owned or guaranteed to participate in the GSE HAMP program. Treasury reported that through February 2010 servicers had offered nearly 1.4 million HAMP trial modifications to borrowers of GSE and non-GSE loans, and roughly 1.1 million of these had begun HAMP trial modifications. Of the trial modifications begun, about 0.8 million were in active trial modifications, fewer than 0.2 million were in active permanent modifications, and the remainder had been canceled. As shown in figure 3, the number of trial modifications started generally increased until October 2009 but then decreased. In part, the decrease in new trial modifications may be the result of a shift in focus on the part of Treasury and the servicers from starting new modifications to making existing trial modifications permanent. In July 2009, Treasury announced a goal of 500,000 trial modifications started by November 1, 2009. In November, however, Treasury announced a campaign to increase the number of conversions to permanent modifications.
Although the first trial modifications started nearly a year ago, servicers are completing permanent modifications at a rate slower than Treasury expected, with 32 percent of loans that have been in trial for 3 months or more approved for conversion. Servicers we spoke with cited several challenges in making trial modifications permanent, including obtaining all the required documentation and borrowers who missed trial period payments. To date, Treasury has reported limited information on the number of borrowers who have been denied trial modifications under HAMP. The 10 HAMP servicers that we spoke with reported a wide range of denial rates. The reasons for denying trial modifications varied by servicer—for example, one servicer reported high proportions of investors prohibiting HAMP modifications and another servicer reported insufficient or excessive borrower income as the most common reasons for denial. Additionally, Treasury has provided limited data on the performance of HAMP modifications, both trial and permanent. According to program administrators, servicers are not required to report trial period payments on a monthly basis, and these payments may not be reported until the trial modification becomes official. Thus, it is difficult to determine the number of borrowers in trial modifications who may be delinquent in their trial payments. Limited information is available on the performance of permanent modifications because few trials have become permanent. According to Treasury, through the end of February 2010, 1,473 of the 170,207 permanent modifications made had defaulted, and 26 had paid off their loans. HAMP payments are contingent upon trial modifications becoming permanent, and given the small number of permanent modifications to date, Treasury has made relatively few incentive payments to investors, servicers, and borrowers. 
According to Treasury, through the end of February 2010, a total of $58 million had been disbursed to servicers and investors. Roughly 78 percent of these payments went to servicers and 22 percent to investors. As of March 1, 2010, no incentive payments had been made on borrowers’ behalf because no borrowers had reached the first anniversary of their trial modification, as the program requires before making the incentive payment. Overall, non-GSE borrowers participating in HAMP had the interest rates on their loans reduced by approximately 5.5 percentage points (from 7.5 percent to 2.0 percent on average), and nearly half of these borrowers had their loan terms extended to 40 years (an increase of 13 years beyond the original remaining term of the loan). To show the payments that Treasury might make for a typical modification, we developed an example of first-lien cost-sharing and incentive payments based on median loan and borrower characteristics of non-GSE borrowers entering trial modifications through February 17, 2010. For a borrower with a loan of about $222,000 who is paying 44 percent of his gross monthly income toward monthly housing payments, a HAMP modification would reduce the monthly housing payment by $520, from $1,760 to $1,240. Excluding the Home Price Decline Protection (HPDP) incentive, over 5 years Treasury would pay an investor $9,900 for the difference in mortgage payments and other incentives. A servicer would receive $4,500, and a borrower $5,000. In total, the borrower would receive $36,200 in the form of reduced payments and incentives. Appendix II elaborates on this example. In our July 2009 report on HAMP, we noted that Treasury’s projection that 3 to 4 million borrowers could be offered loan modifications was based on several uncertain assumptions and might be overly optimistic. 
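The arithmetic of the median example above can be sketched in Python. This is a hedged illustration only: the function and variable names are ours, and it covers only the payment-targeting step (HAMP lowers the first-lien housing payment to 31 percent of gross monthly income), not Treasury's full net present value evaluation.

```python
# Illustrative sketch of the first-lien payment reduction in the median
# example above; not Treasury's actual model or calculation method.

TARGET_FRONT_END_DTI = 0.31  # HAMP target: payment = 31% of gross monthly income

def hamp_modified_payment(gross_monthly_income):
    """Monthly housing payment after a HAMP first-lien modification."""
    return gross_monthly_income * TARGET_FRONT_END_DTI

# Median example from the text: a $1,760 payment at a 44 percent front-end
# debt-to-income ratio implies gross income of about $4,000 per month.
current_payment = 1760.0
gross_income = current_payment / 0.44
new_payment = hamp_modified_payment(gross_income)   # about $1,240
monthly_reduction = current_payment - new_payment   # about $520

# Over the 5-year incentive period, reduced payments plus the $5,000 in
# borrower incentives come to roughly the $36,200 cited in the text.
borrower_benefit = monthly_reduction * 60 + 5000
```

The back-of-the-envelope figures reproduce the text's example: a $520 monthly reduction and a total borrower benefit of about $36,200 over 5 years.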
Specifically, we reported that some of the key assumptions and calculations regarding the number of borrowers whose loans would be successfully modified under HAMP using TARP funds were necessarily based on limited analyses and data. According to Treasury, projections for the number of non-GSE borrowers who will participate in HAMP are updated quarterly through the revised allocation of TARP funds for HAMP servicers. Nonetheless, according to Treasury’s Web site, Treasury continues to expect that HAMP will offer reduced monthly payments to up to 3 to 4 million borrowers. We also reported that while HAMP is the cornerstone effort under TARP to meet the act’s goals of preserving homeownership and protecting home values, a number of HAMP programs remained largely undefined. Since that time, additional details of the HPDP incentives, second-lien modification program, and foreclosure alternatives program have been announced, but the number of homeowners who can be helped under these programs remains unclear. In July, we noted that Treasury had not estimated the number of additional modifications that would be made as a result of HPDP incentive payments, even though the potential exists for the incentive payments to use up to $10 billion in TARP funds. To date, Treasury has not prepared any such estimate. In addition, while Treasury has attempted to improve the targeting of these incentive payments by incorporating the size of the unpaid principal balance and the loan-to-value ratio in the payment calculations, HPDP incentives continue to be available for loans that would have passed the NPV test without them. Similarly, although the second-lien and foreclosure alternatives programs were included in the March 2009 program guidelines, no funds have yet been disbursed under either of these programs. 
According to Treasury, as of March 1—over a year after the first announcement of HAMP—details of the second-lien program had not yet been finalized, and only two servicers had signed an agreement to participate in the program. Finally, we reported in July that Treasury had not finalized a comprehensive system of internal control for HAMP. We noted that important parts of a comprehensive system of internal control include, among other things, implementing a system for determining compliance, having sufficient numbers of staff with the right skills, and establishing and reviewing performance measures and indicators. According to Treasury, it was working with its financial agents to implement such a system, and we continue to assess Treasury’s efforts in this area. While the Chief of the Homeownership Preservation Office (HPO)—the office within Treasury that is responsible for administering HAMP—consulted with staff and reduced staffing levels from 36 to 29 full-time positions, Treasury has not yet formally assessed whether HPO has staff with the skills needed to govern the program effectively. In addition, Treasury has not yet finalized remedies, or penalties, for servicers who are not in compliance with HAMP guidelines. According to Treasury, these remedies will be completed in April 2010, and a HAMP compliance committee has been established to review issues related to servicers’ compliance with program guidelines and to enforce appropriate remedies. Furthermore, while Treasury has put in place some performance metrics for HAMP, it has not developed benchmarks, or goals, to measure these metrics against, limiting its ability to determine the success of the program. We continue to assess Treasury’s efforts to establish a comprehensive system of internal control as part of our ongoing oversight of the implementation of TARP and our annual audit of TARP’s financial statements. 
Appendix III provides more detail on the recommendations we made in July and Treasury’s responses to them. The servicers we interviewed told us that a major challenge they faced in implementing the HAMP first-lien modification program was the number of changes to the program. Each major program change often required servicers to adjust their business practices, update their systems, and retrain their servicing staff. An example of a significant program change that servicers brought to our attention was Treasury’s recent requirement that borrowers fully document their income before they can be evaluated for a trial modification. According to servicers we contacted, Treasury told servicers in July 2009 that it was a “best practice” to use stated income information to evaluate borrowers for trial modifications in order to offer modifications more quickly. As a result, some servicers that had been requiring fully documented income before offering a trial modification switched to using stated income, a change that involved altering business processes, including updating company policies and retraining employees. However, as Treasury became concerned about the number of trial modifications that were not converting to permanent modifications due to difficulty obtaining income documentation from borrowers after the trial period began, Treasury subsequently reversed the policy. In January 2010, Treasury announced that effective June 1, 2010, servicers would be required to evaluate borrowers for trial modifications based on fully documented income. Servicers that switched to or had been using stated income will again have to alter their processes and policies to meet the new standards. Servicers also told us that the instability of Treasury’s NPV model presented another implementation challenge. Although the NPV test is a key element in evaluating borrowers for HAMP, servicers told us that they experienced problems accessing and using the NPV model on Treasury’s Web portal. 
According to Treasury, servicers were allowed to use their own NPV models until September 1, but some servicers told us that the lack of a Treasury model made it difficult for them to begin offering trial modifications. One servicer told us that in the first few months of the program, it was otherwise ready to start making trial modifications but it was unable to effectively use Treasury’s Web-based NPV model. As a result, it had to keep borrower applications on hold for several months. In addition, although one of HAMP’s goals is to create clear, consistent, and uniform guidance for loan modifications across the industry, we found inconsistencies and wide variations among the HAMP servicers that we contacted with respect to communication with borrowers about HAMP, the criteria used to evaluate borrowers for imminent default, and the tracking of HAMP complaints. Communications with borrowers – Although Treasury guidelines state that servicers must provide borrowers with information designed to help them understand the modification process and must respond to HAMP inquiries in a timely and appropriate manner, the HAMP servicers we contacted differed widely in the timeliness and content of their initial communications with borrowers about HAMP. For example, while some servicers contacted borrowers about HAMP as soon as payment was 30 days delinquent, other servicers did not inform borrowers about HAMP until payments were at least 60 days delinquent. Treasury has not developed standards to evaluate servicers’ performance in communicating with borrowers or penalties for servicers that do not meet Treasury’s requirements. We reviewed the Web sites of the 20 HAMP servicers with the largest program allocations and found that 3 did not provide any information about HAMP and that 3 others had posted inaccurate information about the program. 
The inaccuracies included statements implying that the program had not yet started and that only loans owned by Fannie Mae or Freddie Mac were eligible for HAMP. After we notified Treasury of these issues, two of the servicers updated their Web sites to include accurate program information. However, one Web site continued to contain inaccurate information, and three continued to have minimal information about the program, but, according to Treasury, the level of information cannot be mandated. Criteria for imminent default – According to HAMP guidelines, borrowers in danger of imminently defaulting on their mortgages may be eligible for HAMP modifications. Although Treasury’s goal is to create uniform guidance for loan modifications across the industry, Treasury has not provided specific guidance on how to evaluate non-GSE borrowers for imminent default, leading to inconsistent practices among servicers. Among the 10 servicers we contacted, there were 7 different sets of criteria for determining imminent default. While some servicers do not impose any requirements beyond the basic HAMP eligibility criteria, others do. For example, four servicers aligned their imminent default criteria for their non-GSE portfolios with the imminent default criteria that the GSEs required for their loans prior to March 1, 2010. These criteria required borrowers to have cash reserves equal to less than 3 months’ worth of monthly housing payments and a ratio of disposable net income to monthly housing payments (debt coverage ratio) of less than 1.20. One servicer had begun using the new GSE criteria, which impose a maximum cash reserves limit of $25,000 and have no debt coverage ratio requirement, for its non-GSE loans. 
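As a purely illustrative sketch, the pre-March 2010 GSE-style screen just described (cash reserves below 3 months of housing payments and a debt coverage ratio below 1.20) might be expressed as follows. The function name and example figures are ours, not Treasury or GSE guidance, and actual practice varied: the 10 servicers we contacted used 7 different sets of criteria.

```python
# Hypothetical imminent-default screen modeled on the pre-March 2010
# GSE-style criteria described above. Illustrative only; real servicers
# applied differing, non-standardized rule sets.

def gse_style_imminent_default(cash_reserves, monthly_housing_payment,
                               disposable_net_income):
    """Return True if a borrower passes this particular screen."""
    low_reserves = cash_reserves < 3 * monthly_housing_payment
    debt_coverage_ratio = disposable_net_income / monthly_housing_payment
    return low_reserves and debt_coverage_ratio < 1.20

# Example: $3,000 in reserves against a $1,500 monthly housing payment
# (below the 3-month threshold) and $1,600 in disposable net income
# (a debt coverage ratio of about 1.07) would pass this screen.
qualifies = gse_style_imminent_default(3000, 1500, 1600)
```

Because servicers applied different thresholds, the same borrower could pass one servicer's screen and fail another's, which is the inconsistency described in the text.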
In addition, four servicers implemented additional criteria for imminent default, including a sliding scale for the borrower’s front-end debt-to-income ratio (e.g., borrowers in the highest income category had to have a front-end debt-to-income ratio of at least 40 percent); an increase in expenses or decrease in income that is more than a certain percentage of income; a ratio of the remaining loan balance to the current house value that is above a certain percentage; and a “hardship” situation lasting more than 12 months. As a result of the differences in criteria used to assess imminent default, borrowers with the same financial situation and loan terms could be approved for a HAMP loan modification by one servicer and denied by another. Tracking of HAMP complaints – While Treasury has directed HAMP servicers to have procedures and systems in place to respond to HAMP inquiries and complaints and to ensure fair and timely resolutions, some servicers are not systematically tracking HAMP complaints or their resolutions. For example, according to Treasury, a compliance review conducted by Freddie Mac in fall 2009 cited a servicer for not tracking, monitoring, or reporting HAMP-specific complaints. In the absence of an effective tracking system, the compliance agent could not determine whether the complaints had been resolved. Similarly, several of the servicers we interviewed indicated that they tracked resolutions only to certain types of complaints. For example, several servicers told us that they tracked only written HAMP complaints and that they handled these written complaints differently depending on the addressee. In one case, letters that were addressed to the president of the company were directed to an “escalation team” that tracked the resolution of the complaint and required weekly updates to the borrower until the complaint was resolved. 
In comparison, complaint letters that were not addressed to a company executive were routed through a business unit without specific response time requirements. We have shared our preliminary observations about inconsistencies in servicers’ implementation of HAMP with Treasury so that these inconsistencies can be addressed in a timely manner. As we continue our work evaluating servicers’ implementation of the program, we plan to develop specific recommendations for Treasury as they are needed and appropriate to ensure that HAMP borrowers are treated consistently. While HAMP has offered some relief to over a million borrowers struggling to make their mortgage payments, the program may face several additional challenges going forward. These include problems converting trial to permanent modifications, the growing issue of negative equity, redefaults among borrowers with modifications, and program stability and management. Conversions – Treasury has taken some steps to address the challenge of converting trial modifications to permanent modifications, but conversions may continue to be an issue. During December 2009 and January 2010, Treasury held a HAMP Conversion Campaign to help borrowers who were in HAMP trial modifications convert to permanent modifications. This effort included a temporary review period lasting through January 31, during which servicers could not cancel trial modifications for any reason other than failure to meet HAMP property requirements, and a requirement that the eight largest servicers submit conversion action plans. Since the announcement of the Conversion Campaign, the number of new conversions each month has increased from roughly 26,000 in November to roughly 35,000 in December and nearly 50,000 in January. However, as noted above, relatively few trial modifications have been made permanent. 
Negative Equity – As we reported in July 2009, HAMP may not address the growing number of foreclosures among borrowers with negative equity in their homes (so-called “underwater” borrowers). While HAMP’s overriding policy objective is to make mortgages more affordable for struggling homeowners, factors other than affordability may influence a borrower’s decision to default, including the degree to which the borrower is underwater. As we reported in July, many states with high foreclosure rates also have high proportions of mortgages with negative equity. To help address this issue, in February 2010 Treasury announced the Housing Finance Agency Innovation Fund for the Hardest-Hit Housing Markets program, which will allocate $1.5 billion in HAMP funds to five states that have suffered an average home price drop of at least 20 percent from the state’s price peak, based on a seasonally adjusted home price index. However, the details of this program and the extent to which it will be able to address defaults and foreclosures among this group of borrowers remain to be seen. Redefaults – Some borrowers who receive a permanent HAMP modification are likely to redefault on their modified mortgages. Because few permanent modifications have been made to date, the redefault rate for HAMP remains to be seen, but HAMP alone may not address the needs of all borrowers. In particular, while HAMP lowers borrowers’ monthly first-lien payments to 31 percent of their gross monthly income, some borrowers may have high amounts of other debt, such as monthly payments on second mortgages or cars. These borrowers may have difficulty making even modified payments. In our July report, we noted that while Treasury requires borrowers with high levels of total debt to agree to obtain counseling, Treasury was not tracking whether borrowers obtain this counseling. 
We therefore recommended that Treasury consider methods of monitoring whether or not borrowers were obtaining the required counseling. Treasury officials told us that they considered methods of monitoring compliance but concluded that the processes would be too burdensome. As a result, it remains difficult to determine whether this program feature is likely to meet its purpose of reducing redefaults among high debt-burdened borrowers. We continue to believe that Treasury should seek cost-efficient methods to assess the extent to which the counseling requirement is reducing redefaults. Furthermore, the second-lien program, which could help reduce borrowers’ total debt, has yet to be fully specified and, to date, only two servicers have signed up for this program. Program Stability and Management – HAMP continues to undergo significant program changes, including the recently announced shift to upfront income verification and the implementation of the second-lien modification program, the foreclosure alternatives program, and the Hardest-Hit Housing Markets program. Treasury will be challenged to successfully implement these programs while also continuing to put in place the controls and resources needed to continue the first-lien modification program. Given the magnitude of the investment of public funds in HAMP and the fact that the program represents direct outlays of taxpayer dollars rather than investments that may yield a return (as in other TARP programs), it is imperative that Treasury continue to improve HAMP’s transparency and accountability. As we have noted, HAMP is Treasury’s cornerstone effort under TARP to meet the act’s purposes of preserving homeownership and protecting home values. As the number of delinquent loans and foreclosures continues to climb and home values continue to fall in many areas of the country, Treasury will need to ensure that borrowers receive consistent access to and treatment from servicers. 
Treasury also needs to make sure that it has the information, controls, and resources to successfully implement a still-developing program. We will continue to evaluate the implementation of HAMP as part of our ongoing oversight of the activities and performance of TARP. Mr. Chairman and Members of the Committee, I appreciate this opportunity to discuss this critically important program and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov, Thomas J. McCool at (202) 512-2642 or mccoolt@gao.gov, or Mathew J. Scirè at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made major contributions to this statement are listed in appendix IV.

Home Affordable Refinance Program (HARP) – Borrowers with loans owned or guaranteed by Fannie Mae or Freddie Mac can refinance into a fixed-rate loan at the current market rate. Eligible borrowers are current on their loans, are owner occupants of a one- to four-unit property, and have a loan-to-value ratio (LTV) of less than 125 percent. Between February 2009 and February 2010, over 190,000 borrowers were refinanced through HARP.

Hope for Homeowners – Borrowers can refinance into an affordable loan insured by FHA. Eligible borrowers are those who, among other factors, have a monthly mortgage debt-to-income ratio above 31 percent. Servicers are provided incentive payments; lenders are required to write down the existing mortgage amount depending on the borrower’s monthly mortgage debt-to-income ratio and total household debt. Borrowers must agree to share the equity created at the beginning of their new Hope for Homeowners mortgage. Between October 2008 and January 2010, 96 loans were refinanced under Hope for Homeowners.

Home Affordable Modification Program (HAMP) – Eligible borrowers can get monthly mortgage payments reduced to 31 percent of gross monthly income. 
Proprietary servicer programs – Programs vary, but include modification programs aimed at reducing monthly payments. For example, one bank has a program to modify pay option adjustable rate mortgages. Another bank modifies loans to decrease monthly payments to between 31 and 40 percent of the borrower’s monthly gross income.

The appendix II table details the example modification’s terms (loan term in months, fixed interest rate, and a house value of $246,667 assumed to decline 20 percent) and the associated payments, including investor payments for the monthly payment reduction, servicer Pay for Success incentives, borrower Pay for Performance Success incentives of $1,000 per year for 5 years, and the Home Price Decline Protection (HPDP) incentive. Investors are eligible to receive HPDP incentive payments depending on where the property is located. For this example, if the trial modification were started in September 2009, investors are eligible for HPDP incentives that range from $0 to $16,200. If the trial started during October, November, and December 2009, the amounts could range from $0 to $10,800. If the trial started during the first 3 months of 2010, the incentive payment could be as much as $5,880.

According to Treasury, it considered options for monitoring what proportion of borrowers is obtaining counseling, but determined that it would be too burdensome to implement. Treasury does not plan to assess the effectiveness of counseling in limiting redefaults because it believes that the benefits of counseling on the performance of loan modifications are well documented and an assessment of the benefits to HAMP borrowers is not needed.

Recommendation – Reevaluate the basis and design of the HPDP program to ensure that HAMP funds are being used efficiently to maximize the number of borrowers who are helped under HAMP and to maximize the overall benefits of utilizing taxpayer dollars. Treasury’s response – On July 31, 2009, Treasury announced detailed guidance on HPDP that included changes to the program’s design that, according to Treasury, improve the targeting of incentive payments to mortgages that are at greater risk because of home price declines. 
Treasury does not plan to limit HPDP incentives to modifications that would otherwise not be made without the incentives, due to concerns about potential manipulation of inputs by servicers to maximize incentive payments and the additional burden of re-running the NPV test for many loans.

Recommendation – Institute a system to routinely review and update key assumptions and projections about the housing market and the behavior of mortgage-holders, borrowers, and servicers that underlie Treasury’s projection of the number of borrowers whose loans are likely to be modified under HAMP, and revise the projection as necessary in order to assess the program’s effectiveness and structure. Treasury’s response – According to Treasury, on a quarterly basis it is updating its projections of the number of non-GSE first-lien modifications expected when it revises the amount of TARP funds allocated to each servicer under HAMP. Treasury is gathering data on servicer performance in HAMP and housing market conditions in order to improve and build upon the assumptions underlying its projections about mortgage market behavior.

Recommendation – Place a high priority on fully staffing vacant positions in the Homeownership Preservation Office (HPO)—including filling the position of Chief Homeownership Preservation Officer with a permanent placement—and evaluate HPO’s staffing levels and competencies to determine whether they are sufficient and appropriate to effectively fulfill its HAMP governance responsibilities. Treasury’s response – A permanent Chief Homeownership Preservation Officer was hired on November 9, 2009. According to Treasury, staffing levels for HPO have been revised from 36 full-time equivalent positions to 29, and, as of March 2010, HPO had filled 27 of the 29 full-time positions. 
Recommendation – Expeditiously finalize a comprehensive system of internal control over HAMP, including policies, procedures, and guidance for program activities, to ensure that the interests of both the government and taxpayers are protected and that the program objectives and requirements are being met once loan modifications and incentive payments begin. Treasury’s response – According to Treasury, it will work with Fannie Mae and Freddie Mac to build and refine the internal controls within these financial agents’ operations as new program components are implemented. Treasury expects to finalize a list of remedies for servicers not in compliance with HAMP guidelines by April 2010.

Recommendation – Expeditiously develop a means of systematically assessing servicers’ capacity to meet program requirements during program admission so that Treasury can understand and address any risks associated with individual servicers’ abilities to fulfill program requirements, including those related to data reporting and collection. Treasury’s response – According to Treasury, a servicer self-evaluation form, which provides information on the servicer’s capacity to implement HAMP, has been implemented, beginning with servicers who started signing Servicer Participation Agreements in December 2009.

In addition to the contacts named above, Lynda Downing, Harry Medina, and John Karikari (Lead Assistant Directors); and Tania Calhoun, Emily Chalmers, Heather Latta, Rachel DeMarcus, Karine McClosky, Marc Molino, Mary Osorno, Winnie Tsen, and Jim Vitarello made important contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Mortgage loan defaults and foreclosures are key factors behind the current economic downturn. In response, Congress passed and the President signed the Emergency Economic Stabilization Act of 2008, which authorized the Department of the Treasury to establish the Troubled Asset Relief Program (TARP). Under TARP, Treasury created the Home Affordable Modification Program (HAMP) as its cornerstone effort to meet the act's goal of protecting home values and preserving homeownership. This statement focuses on (1) HAMP's program activities to date, (2) the status of GAO's July 2009 recommendations to strengthen HAMP's transparency and accountability, (3) preliminary findings from GAO's current work evaluating servicers' implementation of HAMP, and (4) additional challenges HAMP faces going forward. GAO obtained information from 10 HAMP servicers of various sizes that accounted for 71 percent of the TARP funds allocated to participating servicers. GAO reviewed their policies and procedures, interviewed management and quality assurance staff, and observed a sample of phone calls between borrowers and servicers. GAO is also reviewing samples of loan files for borrowers offered and denied HAMP trial modifications. Finally, GAO spoke with officials at Treasury and its financial agents--Fannie Mae and Freddie Mac--and is analyzing program information and data from these sources. When Treasury announced the program in March 2009, it estimated that HAMP could help 3 to 4 million borrowers. Through February 2010, including both the portion funded by TARP and the portion funded by Fannie Mae and Freddie Mac: (1) about 1.1 million borrowers had begun trial modifications, (2) about 800,000 of these were in active trial modifications, and (3) fewer than 200,000 permanent modifications had been made. 
As of early March 2010, the TARP-funded portion of the program had 113 participating servicers, and about $36.9 billion of the $50 billion in TARP funds for HAMP had been allocated to these servicers. A typical TARP-funded modification could result in a monthly mortgage payment reduction of about $520. Treasury has taken some steps, but has not fully addressed concerns that GAO raised in its July 2009 report on HAMP's transparency and accountability. For example, Treasury has yet to finalize some key components of its internal controls over the first-lien program, including establishing metrics and benchmarks for servicers' performance. In addition, Treasury has not finalized remedial actions, or penalties, for servicers not in compliance with HAMP guidelines. According to Treasury, these remedies will be completed in April 2010. Lastly, GAO reported that Treasury's projection that 3 to 4 million borrowers could be helped by HAMP was based on several uncertain assumptions and might be overly optimistic, and GAO recommended that Treasury update this estimate, but the Department has not yet done so. Preliminary results of GAO's ongoing work show inconsistencies in some aspects of program implementation. Although one of HAMP's goals was to ensure that mortgage modifications were standardized, Treasury has not issued specific guidelines for all program areas, allowing inconsistencies in how servicers treat borrowers. For example, the 10 servicers GAO contacted had 7 different sets of criteria for determining whether borrowers who were not yet 60 days delinquent qualified for HAMP. Also, some servicers were not systematically tracking all HAMP complaints and, in some cases, tracked only resolutions to certain types of complaints, such as written complaints addressed to the company president. 
GAO also found that servicers faced challenges implementing HAMP because of the number of changes to the program, some of which have required servicers to readjust their business practices, update their systems, and retrain staff. HAMP is likely to face additional challenges going forward, including successfully converting trial modifications, addressing the needs of borrowers who have substantial negative equity, limiting redefaults for those who receive modifications, and achieving program stability. While GAO's study is not yet completed, GAO shared preliminary findings with Treasury to allow it to address these issues in a timely manner.
The radio frequency spectrum is the resource that makes possible wireless communication and supports a vast array of commercial and government services. Federal, state, and local agencies use spectrum to fulfill a variety of government missions, such as national defense, air traffic control, weather forecasting, and public safety. DOD uses spectrum to transmit and receive critical voice and data communications involving military tactical radio, air combat training, precision-guided munitions, unmanned aerial systems, and aeronautical telemetry and satellite control, among others. The military employs these systems for training, testing, and combat operations throughout the world. Commercial entities use spectrum to provide a variety of wireless services, including mobile voice and data, paging, broadcast television and radio, and satellite services. In the United States, responsibility for spectrum management is divided between two agencies: FCC and NTIA. FCC manages spectrum for nonfederal users, including commercial, private, and state and local government users, under the Communications Act. NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues as governed by the National Telecommunications and Information Administration Organization Act. FCC and NTIA manage the spectrum through a system of frequency allocation and assignment. Allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users. (Fig. 1 illustrates examples of allocated spectrum uses, including DOD systems using the 1755-1850 MHz band.) In addition, spectrum managers specify service rules, which include the technical and operating characteristics of equipment. 
Assignment, which occurs after spectrum has been allocated for particular types of services or classes of users, involves providing users, such as commercial entities or government agencies, with a license or authorization to use a specific portion of spectrum. FCC assigns licenses within frequency bands to commercial enterprises, state and local governments, and other entities. Since 1994, FCC has used competitive bidding, or auctions, to assign certain licenses to commercial entities for their use of spectrum. Auctions are a market-based mechanism in which FCC assigns a license to the entity that submits the highest bids for specific bands of spectrum. NTIA authorizes spectrum use through frequency assignments to federal agencies. More than 60 federal agencies and departments combined have over 240,000 frequency assignments across all spectrum bands, although 9 departments, including DOD, hold 94 percent of all frequency assignments for federal use. Federal communications systems, which often serve specialized missions such as national defense and law enforcement purposes, may not be compatible with commercial technology, and therefore agencies have to work with vendors to develop equipment that meets mission needs and operational requirements. In 2004, the Commercial Spectrum Enhancement Act (CSEA) established a Spectrum Relocation Fund, funded from auction proceeds, to cover the costs incurred by federal entities that relocate to new frequency assignments or transition to alternative technologies. OMB administers the Spectrum Relocation Fund in consultation with NTIA. CSEA streamlined the process by which federal agencies are reimbursed for relocation costs and requires FCC to notify NTIA at least 18 months in advance of beginning an auction of new licenses of spectrum identified for reallocation from federal to nonfederal use. (Pub. L. No. 103-66, § 6001, 107 Stat. 312 (1993) (OBRA-93), amended by Pub. L. No. 105-33, § 3002, 111 Stat. 251 (1997) (BBA-97), codified as amended at 47 U.S.C. § 923.)
It also requires NTIA to provide estimated cost and transition timing data to FCC, Congress, and GAO at least 6 months prior to the auction, and requires that auctions recover at least 110 percent of these estimated costs. CSEA was amended by the Middle Class Tax Relief and Job Creation Act of 2012, further easing relocation by (1) allowing agencies to use some of the funding for advance planning and system upgrades, (2) extending the reimbursement scheme to sharing as well as relocation expenses, and (3) requiring agencies to submit transition plans for relocation (or sharing) for interagency management review of the costs and timelines associated with the relocation. The auction of spectrum licenses in the 1710-1755 MHz band was the first with relocation costs to take place under CSEA. CSEA designated 1710-1755 MHz as “eligible frequencies” for which federal relocation costs could be paid from the Spectrum Relocation Fund, which is funded by the proceeds from the auction of the band. Twelve federal agencies previously operated communications systems in this band, including DOD. NTIA and FCC jointly reallocated the 1710-1755 MHz band for nonfederal use, and FCC designated the spectrum for Advanced Wireless Services (AWS). In September 2006, FCC concluded the AWS-1 auction of licenses in the 1710-1755 MHz band. In accordance with CSEA, a portion of the auction proceeds associated with the 1710-1755 MHz band is currently being used to pay spectrum relocation expenses. In addition to the 1710-1755 MHz band, the wireless industry has expressed interest in the 1755-1850 MHz band, largely because the band offers excellent radio wave propagation, enabling mobile communication links. The federal government has studied the feasibility of relocating federal agencies from the 1755-1850 MHz band on several occasions. For example, in March 2001, NTIA issued a report examining the potential to accommodate mobile wireless services in the broader 1710-1850 MHz band.
The report was largely based on input from other federal agencies, including a DOD study. NTIA found that unrestricted sharing of the 1755-1850 MHz band was not feasible and that considerable coordination between industry and DOD would be required before any wireless systems could operate alongside federal systems in the band. In August 2001, we also found that more analysis was needed to support spectrum use decisions in the 1755-1850 MHz band, largely because major considerations either were not addressed or were not adequately addressed in DOD’s study. These considerations included complete technical and operational analyses of anticipated spectrum interference; cost estimates supporting DOD reimbursement claims; spectrum requirements supporting future military operations; programmatic, budgeting, and schedule decisions needed to guide analyses of alternatives; and potential effects of U.S. reallocation decisions upon international agreements and operations. In the end, a decision was made to reallocate only the 1710-1755 MHz band to minimize the impact on federal capabilities. Activity surrounding the rest of the band (i.e., the 1755-1850 MHz band) did not resurface until October 2010, when NTIA’s Fast Track study identified the band for possible reallocation. In June 2010, the administration issued a presidential memorandum titled “Unleashing the Wireless Broadband Revolution,” directing NTIA to collaborate with FCC to make a total of 500 MHz of federal and nonfederal spectrum available for wireless broadband within 10 years. Responding to the President’s initiative, in October 2010, NTIA published a plan and timetable to make available 500 MHz of spectrum for wireless broadband. This plan and timetable specified that candidate bands would be prioritized for detailed evaluation to determine the feasibility of vacating the bands to accommodate wireless services.
In January 2011, NTIA selected the 1755-1850 MHz band as the priority band for detailed evaluation for relocation. DOD and other affected agencies provided NTIA their input on the spectrum feasibility study for the 1755-1850 MHz band, and NTIA subsequently issued its assessment of the viability for accommodating commercial wireless broadband in the band in March 2012. Most recently, the President’s Council of Advisors on Science and Technology published a report in July 2012 recommending specific steps to ensure the successful implementation of the President’s 2010 memorandum. The report found, for example, that clearing and vacating federal users from certain bands was not a sustainable basis for spectrum policy largely because of the high cost to relocate federal agencies and disruption to federal missions. The report recommended new policies to promote the sharing of federal spectrum. The sharing approach has been questioned by CTIA–The Wireless Association and its members, which argue that cleared spectrum and an exclusive-use approach to spectrum management have enabled the U.S. wireless industry to invest hundreds of billions of dollars to deploy mobile broadband networks resulting in economic benefits for consumers and businesses.

Actual costs to relocate communications systems for 12 federal agencies from the 1710-1755 MHz band have exceeded original estimates by about $474 million, or 47 percent, as of March 2013. Table 1 compares estimated relocation costs with the actual costs based on funds transferred to federal agencies in support of the 1710-1755 MHz band relocation effort. OMB and NTIA officials expect the final relocation cost to be about $1.5 billion compared with the original estimate of about $1 billion. In addition, NTIA expects agencies to complete the relocation effort between 2013 and 2017. The original transfers from the Spectrum Relocation Fund to agency accounts were made in March 2007.
Subsequently, some agencies requested additional monies from the Spectrum Relocation Fund to cover relocation expenses. Agencies requesting the largest amounts of subsequent transfers include the Department of Justice ($294 million), the Department of Homeland Security ($192 million), and the Department of Energy ($35 million). Total actual costs for the 1710-1755 MHz transition exceeded estimated costs, as reported to Congress in 2007, for many reasons, including:

Unforeseen challenges: Agencies encountered various unforeseen challenges when relocating systems out of the 1710-1755 MHz band. For example, according to NTIA officials, one agency needed to upgrade its radio towers to comply with new standards adopted after the towers were built. The agency requested additional monies from the Spectrum Relocation Fund to cover the cost of upgrading its towers, which had not been part of the agency’s original relocation estimate.

Unique issues posed by specific equipment location: According to NTIA, some federal government communications systems are located in remote areas. One agency requested additional monies from the Spectrum Relocation Fund to use a helicopter to replace a fixed microwave system located on a mountain-top, which exceeded its original cost estimate.

Administrative issues associated with transition time frame: NTIA officials told us that some agencies experienced higher than expected labor costs during the transition period, partly to accommodate auction winners’ requests to vacate the spectrum as quickly as possible.

Costs associated with achieving comparable capability: Some communications systems are unique to federal agencies, making them difficult to upgrade or relocate. In some instances, agencies were using analog radio systems throughout the 1710-1755 MHz band and the digital technology needed to achieve comparable capability was not available prior to vacating the band.
When the technology did become available, some agencies found they needed additional funds to procure it, according to OMB officials. For example, we previously reported that the Department of Justice requested funds exceeding its estimate to develop new technology that would operate using the new spectrum and match its current capabilities.

Some agencies might not have followed guidance: Some agencies may not have properly followed OMB and NTIA guidance in preparing their original cost estimates. For instance, Immigration and Customs Enforcement (ICE) did not detail its estimated costs by equipment, location, systems, or frequency as suggested by NTIA’s guidance. Instead, the agency provided a lump sum estimate for its spectrum relocation costs. We previously reported that ICE officials did not identify a significant number of relocation expenses in the agency’s original transfer request, including costs associated with additional equipment, offices, and systems, among other items. Moreover, according to OMB staff, the agency’s initial estimate was based on an inadequate inventory of deployed systems.

To date, the Department of the Navy has initiated the process to return about $65 million to the Spectrum Relocation Fund, as its relocation costs may end up being less than expected. The Department of the Navy is still in the process of finalizing relocation of its systems, and the exact amount of any money that may be returned will not be known until the relocation is complete.

DOD’s original estimate of $38 million to $138 million assumed that DOD would re-tune fixed microwave systems from the 1710-1755 MHz band into the adjacent 1755-1850 MHz band, and it assumed exclusion zones— geographic areas where commercial licensees could not operate—around 16 DOD sites to prevent interference from commercial users. DOD also estimated a cost of an additional $100 million if precision guided munitions operations needed to be relocated from the 1755-1850 MHz band.
Subsequently, in 2001, NTIA reported additional cost estimates reflecting several other options under consideration. One option, which was not evaluated by DOD, included a preliminary cost figure of $1.6 billion. This estimate was based on eliminating some of the 16 exclusion zones around DOD sites and, therefore, relocating additional systems that were not included in the original estimate of $38-138 million, according to NTIA. In December 2006, NTIA reported that DOD’s estimate to relocate systems would be about $355.4 million. This estimate reflected a new set of assumptions, such as maintaining exclusion zones at 2 of the 16 DOD sites and relocating fixed microwave systems to the 1755-1850 MHz portion of the band or to other federal bands. Both NTIA and OMB are taking steps to ensure that agencies improve their cost estimates for a future relocation from the 1755-1850 MHz band. For example, according to NTIA and OMB officials, the agencies prepared a cost estimation template and guidelines for reimbursable costs as part of the process to estimate relocation costs for the 1755-1850 MHz band. The Middle Class Tax Relief and Job Creation Act of 2012 expanded the types of costs for which federal agencies can receive payments from the Spectrum Relocation Fund. The act permits agencies to receive funds for costs associated with planning for FCC auctions and studies or analyses conducted in connection with relocation or sharing of spectrum, including coordination with auction winners. In November 2012, OMB issued guidance to federal agencies to clarify allowable pre-auction costs and other requirements for receiving payments from the Spectrum Relocation Fund. OMB staff stated that they are optimistic that by providing pre-auction planning funds to agencies, future cost estimates will improve.
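The headline figures for the 1710-1755 MHz relocation reported here tie together with simple arithmetic. The sketch below is a back-of-the-envelope check using the report's rounded amounts (about $1 billion originally estimated, about $1.5 billion expected final cost, and almost $6.9 billion in gross AWS-1 bids), not an official accounting:

```python
# Rounded figures from this report, in billions of dollars.
estimated_cost = 1.0   # original 2007 relocation estimate, about $1 billion
final_cost = 1.5       # expected final relocation cost, about $1.5 billion
gross_bids = 6.9       # AWS-1 gross winning bids, almost $6.9 billion

# Cost overrun: with these rounded inputs, 50 percent; the report's precise
# figure is $474 million, or 47 percent.
overrun_pct = (final_cost - estimated_cost) / estimated_cost * 100

# CSEA requires auction proceeds to recover at least 110 percent of the
# estimated relocation costs.
csea_floor = 1.10 * estimated_cost

# Net revenue to the U.S. Treasury: gross bids minus relocation costs,
# about $5.4 billion.
net_revenue = gross_bids - final_cost
```

Note that the CSEA floor is computed against the estimated costs reported before the auction, which is why an overrun like this one erodes net Treasury revenue after the fact without violating the statutory recovery requirement.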
The Advanced Wireless Services auction of the 1710-1755 MHz band raised almost $6.9 billion in gross winning bids from the sale of licenses to use these frequencies. Comparing that revenue with actual relocation costs suggests that the auction of the 1710-1755 MHz band raised $5.4 billion for the U.S. Treasury. This number reflects the difference between the $6.9 billion auction revenue and the approximately $1.5 billion estimated final federal relocation cost. As mentioned above, NTIA reports that it expects agencies to complete the relocation effort between 2013 and 2017; therefore the final net revenue amount may change. For example, some agencies have returned or plan to return excess relocation funds to the Spectrum Relocation Fund.

To prepare the preliminary cost estimate portion of its study to determine the feasibility of relocating DOD’s 11 major radio systems from the 1755-1850 MHz band, DOD officials said the agency implemented the following methodology: DOD’s Cost Assessment and Program Evaluation (CAPE) group led the effort and provided guidance to management at the respective military services regarding the data needed to support each system’s relocation cost estimate and how they should be gathered to maintain consistency across the services. The guidance used by CAPE was based on guidance and assumptions provided by NTIA. Certified cost estimators at each of the services’ Cost Centers worked closely with the various program offices to collect the necessary technical and cost data. The cost estimators compiled and reviewed the program data, identified the appropriate program content affected by each system’s relocation, developed cost estimates under the given constraints and assumptions, and internally reviewed the estimates consistent with their standard practices before providing them to CAPE to include in the overall estimate.
CAPE staff reviewed the services’ estimates to ensure they adhered to the provided guidelines for accuracy and consistency, and obtained DOD management approval on its practices and findings. According to DOD officials, CAPE based this methodology on the cost estimation best practices it customarily employs, revising those practices to suit the study requirements as outlined by NTIA. We reviewed DOD’s preliminary cost estimation methodology and evaluated it against GAO’s Cost Estimating and Assessment Guide (Cost Guide), which also identifies cost estimating best practices, including those used throughout the federal government and industry. The best practices identified in the Cost Guide help ensure that cost estimates are comprehensive, well-documented, accurate, and credible. These characteristics of cost estimates help minimize the risk of cost overruns, missed deadlines, and unmet performance targets:

A comprehensive cost estimate ensures that costs are neither omitted nor double counted.

A well-documented estimate thoroughly documents its source data and their significance, clearly detailed calculations and results, and the reasons for choosing a particular method or reference.

An accurate cost estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs.

A credible estimate discusses any limitations of the analysis from uncertainty or biases surrounding data or assumptions.

When applying GAO’s identified best practices to DOD’s methodology, we took into account that DOD officials developed the preliminary cost estimate for relocation as a less rigorous, “rough order of magnitude” cost estimate, not a budget-quality cost estimate. The nature of a rough-order-of-magnitude estimate means that it is not as robust as a detailed, budget-quality life-cycle estimate and its results should not be considered or used with the same level of confidence.
Because of this, we performed a high-level analysis of DOD’s preliminary cost estimate and methodology, and did not review all supporting data and analysis. When we reviewed DOD’s preliminary cost estimation methodology and evaluated it against the Cost Guide’s best practices, we found that DOD’s methodology substantially met the comprehensive and well-documented characteristics of reliable cost estimates, and partially met the accurate and credible characteristics, as shown in table 2. Overall, we found that DOD’s cost estimate was consistent with the purpose of the feasibility study, which was to inform the decision making process to reallocate 500 MHz of spectrum for commercial wireless broadband use. Additionally, we found that DOD’s preliminary cost-estimation methodology substantially met both the comprehensive and well-documented characteristics. As noted in the table above, we observed that DOD’s estimate included complete information about systems’ life cycles and was generally well-documented. However, these characteristics were not fully met because we found that information on the tasks required to relocate some systems was incomplete, and that documentation for some programs was not sufficient to support a rough-order-of-magnitude estimate. We also determined that DOD’s preliminary cost-estimation methodology partially met the accurate and credible characteristics. We found that DOD properly applied appropriate inflation rates and made no apparent calculation errors, and that the estimated costs agree with DOD’s prior relocation cost estimate for this band conducted in 2001. However, DOD did not fully or substantially meet the accurate and credible characteristics because it was not clear if the estimate considered the most likely costs and because some sensitivity analyses and risk assessments were only completed at the program level for some programs, and not at all at the summary level.
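The four characteristics and GAO's 5-point scale lend themselves to a compact summary. The ratings below restate the assessment reported in the text; the "weakest characteristic" rollup is a hypothetical convenience for illustration, not GAO methodology:

```python
# GAO's 5-point assessment scale, ordered from weakest to strongest.
SCALE = ["not met", "minimally met", "partially met", "substantially met", "met"]

# Ratings of DOD's preliminary cost-estimation methodology, as reported here.
ratings = {
    "comprehensive": "substantially met",
    "well-documented": "substantially met",
    "accurate": "partially met",
    "credible": "partially met",
}

def weakest(ratings):
    """Hypothetical rollup: treat the estimate as only as strong as its
    weakest characteristic (a convenience rule, not GAO's method)."""
    return min(ratings.values(), key=SCALE.index)

lowest_rating = weakest(ratings)
```

Ordering the scale explicitly makes comparisons between ratings unambiguous, which is useful when many programs are scored against the same characteristics.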
Even though DOD’s preliminary cost estimate substantially met some of our best practices, as the assumptions supporting the estimate change over time, costs may also change. According to DOD officials, any change to key assumptions about the bands to which systems would move and the relocation start date could substantially change relocation costs. Because decisions about the spectrum bands to which the various systems would be reassigned and the time frame for relocation have not been made yet, DOD based its current estimate on the most likely assumptions, provided by NTIA, some of which have already been proven inaccurate or are still undetermined. For example:

Relocation bands: Decisions about which comparable or alternate spectrum bands federal agencies, including DOD, should relocate to are still unresolved. According to DOD officials, equipment relocation costs vary significantly depending on the relocation band’s proximity to the current band. Moving to bands further away than the assumed relocation bands could increase costs relative to moving to closer bands with similar technical characteristics. In addition, congestion, in both the 1755-1850 MHz band and some of the potential alternate spectrum bands to which federal systems might be moved, complicates relocation planning. According to DOD officials, many of the federal radio systems relocated from the 1710-1755 MHz band were simply re-tuned or compressed into the 1755-1850 MHz band, adding to the complexity of systems and equipment requiring relocation from this band since 2001. Also, DOD officials said that some of the potential spectrum bands to which DOD’s systems could be relocated are themselves either already congested or the systems are incompatible unless other actions are also taken. For example, cost estimates for several of DOD’s systems assumed that these systems would be relocated into the 2025-2110 MHz band, and operate within this band on a primary basis.
However, this band is currently allocated to commercial electronic news gathering systems and other commercial and federal systems, and while the band is not currently congested, it does not support compatible coexistence between DOD systems and commercial electronic news gathering systems. To accommodate military systems within this band, FCC would need to withdraw this spectrum from commercial use to allow NTIA to provide DOD primary status within this band, or FCC would have to otherwise ensure that commercial systems operate on a non-interference basis with military systems. FCC has not initiated a rulemaking procedure to begin such processes.

Relocation start date: DOD’s cost estimate assumed relocation would begin in fiscal year 2013, but no auction has been approved, so relocation efforts have not begun. According to DOD officials, a change in the start date creates uncertainty in the cost estimate because new equipment and systems continue to be deployed in and designed for this band, and older systems are retired. This changes the overall profile of systems in the band, a change that can alter the costs of relocation. For example, a major driver of the cost increase between DOD’s 2001 and 2011 relocation estimates for the 1755-1850 MHz band was the large increase in the use of the band, including unmanned aerial systems. DOD deployed these systems very little in 2001, but their numbers had increased substantially by 2011. Conversely, equipment near the end of its life cycle when DOD’s 2011 relocation cost estimate was completed may be retired or replaced outside of relocation efforts, which could decrease relocation costs.

Inflation: DOD appropriately used 2012 inflation figures in its estimate, assuming that relocation would begin in fiscal year 2013. As more time elapses before the auction occurs, the effect of inflation will increase the relocation costs each year.
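The inflation effect compounds with each year the relocation start slips. A minimal sketch of the standard compounding calculation follows; both the 2 percent annual rate and the 100-unit base cost are assumed placeholders, not DOD's actual figures:

```python
def inflate(base_cost, annual_rate, years_delayed):
    """Project a cost estimate forward by compounding an annual inflation
    rate over the delay; the result is in the same units as base_cost."""
    return base_cost * (1 + annual_rate) ** years_delayed

# Placeholder inputs for illustration only. A three-year slip at an assumed
# 2 percent annual rate raises a 100-unit estimate by about 6 percent.
delayed_cost = inflate(100.0, 0.02, 3)
```

Because the growth is geometric rather than linear, each additional year of delay adds slightly more cost than the year before it.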
According to DOD, the preliminary cost estimate is not as robust as a detailed, budget-quality lifecycle estimate. A budget-quality estimate is based on more fully formed assumptions for specific programs. DOD officials said that for a spectrum relocation effort, a detailed, budget-quality cost estimate would normally be done during the transition-planning phase once a spectrum auction has been approved and would be based on the requirements for the specific auction and relocation decisions. No official government revenue forecast has been prepared for a potential auction of 1755-1850 MHz band licenses, but some estimates might be prepared once there is a greater likelihood of an auction. Officials we spoke with at CBO, FCC, NTIA, and OMB confirmed that none of these agencies has produced a revenue forecast thus far. Officials at these agencies knowledgeable about estimating spectrum-license auction revenue said that because the value of licensed spectrum varies greatly over time and the information on factors that might influence the spectrum auction revenues is not yet available, it is too early to produce meaningful forecasts for a potential auction of the 1755-1850 MHz band. Moreover, CBO only provides written estimates of potential receipts when a congressional committee reports legislation invoking FCC auctions. OMB would estimate receipts and relocation costs as part of the President’s Budget; OMB analysts would use relocation cost information from NTIA to complete OMB’s estimate of receipts. The potential for large differences between CBO and OMB forecasts exists as well. For example, in the past, CBO and OMB have produced very different estimates of potential FCC auction receipts at approximately the same time with access to the same data, underscoring how differing assumptions can lead to different results.
Although no official government revenue forecast exists, an economist with the Brattle Group, an economic consulting firm, published a revenue forecast in 2011 for a potential auction of the 1755-1850 MHz band that forecasted revenues of $19.4 billion for the band. We did not evaluate the accuracy of this revenue estimate. Like all forecasts, the Brattle Group study was based on certain assumptions. For example, it assumed that the band would generally be cleared of federal users. It also assumed the AWS-1 average nationwide price of $1.03 per “MHz-pop” as a baseline price for spectrum allocated to wireless broadband services. In addition, the study adjusts the price of spectrum based on the following considerations:

Increase in the quantity of spectrum, using elasticity of demand: As the supply of spectrum for commercial wireless broadband services increases, the price and value of spectrum is expected to fall. The elasticity of demand is used to make adjustments for the increased supply of spectrum. According to the study, wireless broadband spectrum is generally thought to have a price elasticity of around -1.2, which implies that a 1 percent increase in the base supply of spectrum should result in a 1.2 percent decrease in its price.

Differences in capacity and quality of spectrum, using value weights: A greater value weight is given to paired spectrum because traditional, two-way communications, such as mobile phone services, are typically provided over paired bands of spectrum. Similarly, a greater value weight is given to bands of spectrum with no restrictions on use, or encumbrances. Fewer restrictions would increase the capacity or the types of services for a given spectrum band. The study also assumes that the 1755-1780 MHz portion of the band is paired with the 2155-2180 MHz band, which various industry stakeholders currently support.
For spectrum services that require two-way communications, pairing bands allows them to be used more efficiently by diminishing interference from incompatible adjacent operations. In addition, the study assumed the 95 MHz of spectrum between 1755 and 1850 MHz would be auctioned as part of a total of 470 MHz of spectrum included in six auctions sequenced 18 months apart and spread over 9 years with total net receipts of $64 billion. Thus, the forecast also took into account when the spectrum would be reallocated for commercial services.

Like all goods, the price of licensed spectrum, and ultimately the auction revenue, is determined by supply and demand. This fundamental economic concept helps to explain how the price of licensed spectrum could change depending on how much spectrum is available now and in the future, and how much licensed spectrum is demanded by the wireless industry for broadband applications. Government agencies can influence the supply of spectrum available for licensing and the characteristics of those licenses, whereas expectations about profitability determine demand for spectrum in the marketplace.

Supply. FCC and NTIA, with direction from Congress and the President, jointly influence the amount of spectrum allocated for federal and nonfederal users, including the amount to be shared by federal and nonfederal users. In 2010, the President directed NTIA to work with FCC to make 500 MHz of spectrum available for use by commercial broadband services within 10 years. This represents a significant increase in the supply of spectrum available for licensing in the marketplace. As with all economic goods, with all other things being equal, the price and value of spectrum licenses are expected to fall as additional supply is introduced. However, at this time, the answers to key questions about the reallocation of the 1755-1850 MHz band are unknown.
Expectations about exactly how much spectrum is available for licensing now and how much will be available in the future would influence how much wireless companies would be willing to pay for spectrum licensed today.

Demand. The expected, potential profitability of a spectrum license influences the level of demand for it. As with all assets, companies base their capital investment decisions on the expected net return, or profit, over time of their use. The same holds true for spectrum. Currently, the demand for licensed spectrum is increasing, and a primary driver of this increased demand is the significant growth in the use of commercial wireless broadband services, including third and fourth generation technologies that are increasingly used for smart phones and tablet computers. Below are some of the factors that would influence the demand for licensed spectrum:

Clearing versus Sharing: Spectrum is more valuable, and companies will pay more to license it, if it is entirely cleared of incumbent federal users, giving them sole use of licensed spectrum; spectrum licenses are less valuable if access must be shared. Sharing could potentially have a big impact on the price of spectrum licenses, especially if a sharing agreement does not guarantee service when the licensee would need it most. For example, knowing in advance that service would be unavailable once a month at 3 a.m. may not significantly influence price, but if the times when the service will be unavailable are unknown, the effect on price could be significant. In 2012, the President’s Council of Advisors on Science and Technology advocated that sharing between federal and commercial users become the new norm for spectrum management, especially given the high cost and lengthy time it takes to relocate federal users and the disruptions to agencies’ missions.

Certainty and Timing: Another factor that affects the value of licensed spectrum is the certainty about when it becomes available.
Seven years after the auction of the 1710-1755 MHz band, federal agencies are still relocating systems. According to an economist with whom we met, one lesson from the 1710-1755 MHz relocation effort is that uncertainty about the time frame for availability reduces the value of the spectrum. Any increase in the probability that the spectrum would not be cleared on time would have a negative impact on the price companies are willing to pay to use it. As such, the estimated 10-year time frame to clear federal users from the entire 1755-1850 MHz band, and potential uncertainty around that time frame, could negatively influence demand for the spectrum. The 2012 amendments to the CSEA include changes designed to reduce this uncertainty by requiring federal agencies that will be relocating (or sharing spectrum) to submit transition plans with timelines for specific geographic locations, with interagency review of those plans aimed at ensuring timely relocation (or sharing) arrangements.

Available Wireless Services: Innovation in the wireless broadband market is expected to continue to drive demand for wireless services. For example, demand continues to increase for smart phones and tablets as new services are introduced in the marketplace. These devices can connect to the Internet through regular cellular service using commercial spectrum, or they can use publicly available (unlicensed) spectrum via Wi-Fi networks to access the Internet. The value of the spectrum, therefore, is determined by continued strong development of and demand for wireless services and devices, and the profits that can be realized from them.

We provided a draft of this report to the Department of Commerce (Commerce), DOD, FCC, and OMB for review and comment. FCC agreed with the report’s findings, and Commerce, DOD, and FCC provided technical comments that we incorporated as appropriate. FCC’s written comments appear in appendix II. OMB did not provide comments.
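The Brattle Group pricing logic discussed earlier, a per-MHz-pop baseline scaled down for added supply using an elasticity of demand, can be sketched as follows. Only the $1.03 baseline and the -1.2 elasticity come from the study as summarized in this report; the 10 percent supply increase and the 310 million population are assumed inputs chosen purely for illustration, so the resulting dollar figure is not the study's $19.4 billion forecast:

```python
def adjusted_price(base_price, elasticity, pct_supply_increase):
    """Adjust a per-MHz-pop price for new supply: with elasticity -1.2,
    each 1 percent increase in supply lowers price about 1.2 percent."""
    return base_price * (1 + elasticity * pct_supply_increase / 100)

def band_value(price_per_mhz_pop, bandwidth_mhz, population):
    """Rough license value for a band covering `population` people."""
    return price_per_mhz_pop * bandwidth_mhz * population

BASE_PRICE = 1.03   # AWS-1 average $/MHz-pop, per the study
ELASTICITY = -1.2   # assumed price elasticity for broadband spectrum

# Assumed inputs for illustration: a 10 percent supply increase and a
# nationwide population of roughly 310 million.
price = adjusted_price(BASE_PRICE, ELASTICITY, 10)   # about $0.906 per MHz-pop
value = band_value(price, 95, 310e6)                 # rough value of 95 MHz, dollars
```

This linear elasticity adjustment is only a local approximation; a real forecast would also apply the study's value weights for pairing and encumbrances and discount for the staggered auction timing described above.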
We are sending copies of this report to the Secretary of Commerce, the Secretary of Defense, the Chairman of the Federal Communications Commission, the Director of the Office of Management and Budget, and the appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

The objectives of this report were to examine (1) the differences, if any, between estimated and actual federal relocation costs and auction revenues from the 1710-1755 MHz band; (2) the extent to which the Department of Defense (DOD) followed best practices to prepare its preliminary cost estimate for vacating the 1755-1850 MHz band, and any limitations of its analysis; and (3) what government or industry revenue forecasts for the 1755-1850 MHz band auction exist, if any, and what factors, if any, could influence actual auction revenue. To examine the differences, if any, between estimated and actual federal relocation costs and auction revenues from the 1710-1755 MHz band, we reviewed spectrum auction data published by the Federal Communications Commission (FCC) and federal relocation cost data from the National Telecommunications and Information Administration’s (NTIA) annual 1710-1755 MHz band relocation progress reports, published yearly since 2008. We narrowed our review of past spectrum auctions to the 1710-1755 MHz relocation after reviewing FCC auction data and NTIA reports describing other spectrum relocations and auctions involving federal agencies, and interviewing knowledgeable FCC, NTIA, Office of Management and Budget (OMB), and Congressional Budget Office (CBO) officials.
The Advanced Wireless Services-1 (AWS-1) auction involving the 1710-1755 MHz band is the only spectrum auction involving federal agencies with significant, known relocation costs. In addition, it is the only relocation involving DOD radio communication systems. To assess the reliability of FCC auction and NTIA relocation cost data, we reviewed documentation related to the data; compared the data to other sources, including other government reports; and discussed the data with FCC and NTIA officials. We did not evaluate the accuracy of individual agencies’ relocation cost data, as this was outside the scope of our review. Based on this review, we determined that the FCC and NTIA data were sufficiently reliable for the purposes of our report. To determine the extent to which DOD followed best practices to prepare its preliminary cost estimate for vacating the 1755-1850 MHz band, we assessed DOD’s preliminary cost estimate against the best practices in GAO’s Cost Estimating and Assessment Guide (Cost Guide), which has been used to evaluate cost estimates across the government. These best practices help ensure cost estimates compiled at different stages in the cost estimating process are comprehensive, well-documented, accurate, and credible. To develop our assessment, we interviewed DOD officials, including in the agency’s Cost Assessment and Program Evaluation (CAPE) group that led the cost estimation effort, regarding their data collection and cost estimation methodologies and the findings reported in DOD’s feasibility study. We also reviewed electronic source documentation supporting the estimate with a CAPE official. After completing this review, a GAO cost analyst developed an assessment using our 5-point scale (not met, minimally met, partially met, substantially met, and met) and a second analyst verified the assessment.
DOD’s preliminary cost estimate was a rough-order-of-magnitude estimate; consequently, it did not contain all the information expected of a complete, budget-quality cost estimate. Therefore, we performed a high-level analysis to determine whether DOD’s reported estimated costs considered all the potential factors that could influence those relocation costs. To identify any limitations affecting DOD’s estimate, we interviewed DOD officials responsible for developing the department’s preliminary cost estimate. We also interviewed NTIA and OMB officials knowledgeable about the intended purpose of the estimate to discuss how the estimate should be used and any factors that would affect the reliability of the estimate. To determine what government or industry revenue forecasts for the 1755-1850 MHz band auction exist, we reviewed a private sector forecast (Bazelon, Expected Receipts From Proposed Spectrum Auctions (July 2011)) and identified factors that could influence actual auction revenue, including the sale of spectrum licenses and relocation costs. We discussed factors affecting spectrum auction revenue with CBO and OMB officials, industry and policy experts, and obtained input from CTIA—The Wireless Association, the association representing the wireless industry.

We conducted this performance audit from September 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient appropriate evidence and provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Michael Clements, Assistant Director; Stephen Brown; Jonathan Carver; Leia Dickerson; Jennifer Echard; Emile Ettedgui; Colin Fallon; Bert Japikse; Elke Kolodinski; Joshua Ormond; Jay Tallon; and Elizabeth Wood made key contributions to this report.
Allocating radio-frequency spectrum is a challenging task because of competing commercial and government demands. In 2006, FCC auctioned spectrum licenses in the 1710-1755 MHz band that had previously been allocated for federal use. To meet the continued demand for commercial wireless services, NTIA assessed the viability of reallocating the 1755-1850 MHz band to commercial use; this band is currently assigned to more than 20 federal users, including DOD. In March 2012, NTIA reported that it would cost $18 billion over 10 years to relocate most federal operations from the band, raising questions about whether relocating federal users is a sustainable approach. GAO was directed to review the costs to relocate federal spectrum users and revenues from spectrum auctions. This report addresses (1) estimated and actual relocation costs, and revenue from the previously auctioned 1710-1755 MHz band; (2) the extent to which DOD followed best practices to prepare its preliminary cost estimate for vacating the 1755-1850 MHz band; and (3) existing government or industry forecasts for revenue from an auction of the 1755-1850 MHz band. GAO reviewed relevant reports; interviewed DOD, FCC, NTIA, and OMB officials and industry stakeholders; and analyzed the extent to which DOD's preliminary cost estimate met best practices as identified in GAO's Cost Estimating and Assessment Guide (Cost Guide). FCC agreed with the report's findings and DOD, FCC, and NTIA provided technical comments that were incorporated as appropriate. Some federal agencies underestimated the costs to relocate communication systems from the 1710-1755 megahertz (MHz) band, although auction revenues appear to exceed relocation costs by over $5 billion. As of March 2013, actual relocation costs have exceeded estimated costs by about $474 million, or 47 percent. 
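The relationship among these figures can be cross-checked with simple arithmetic. The short script below is an illustrative check only, not part of GAO's methodology; the "implied" values are backed out from the reported $474 million overrun and the 47 percent figure, and are not reported directly in the text.

```python
# Back-of-envelope check of the reported 1710-1755 MHz relocation figures.
# All dollar amounts are in millions; the overrun and percentage come from
# the report text, while the "implied" values are derived, not reported.

overrun = 474        # actual costs exceeded estimates by ~$474 million...
overrun_pct = 0.47   # ...which the report says is about 47 percent

implied_estimate = overrun / overrun_pct     # implied original estimate
implied_actual = implied_estimate + overrun  # implied actual cost to date

print(f"Implied original estimate: ~${implied_estimate:,.0f} million")
print(f"Implied actual cost to date: ~${implied_actual:,.0f} million")
```

The implied actual cost to date (roughly $1.48 billion) is consistent with NTIA's expected final relocation cost of about $1.5 billion cited in the report.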
The National Telecommunications and Information Administration (NTIA) expects agencies to complete the relocation effort between 2013 and 2017, with a final relocation cost of about $1.5 billion. Actual relocation costs have exceeded estimated costs for various reasons, including unforeseen challenges and some agencies not following NTIA's guidance for developing cost estimates. However, the Department of Defense (DOD) expects to complete its relocation for about $71 million less than its estimate of about $355 million. NTIA and the Office of Management and Budget (OMB) are taking steps to ensure that agencies improve their cost estimates by, for example, preparing a cost estimation template and guidelines for reporting reimbursable costs. The auction of spectrum licenses in the 1710-1755 MHz band raised almost $6.9 billion. DOD's preliminary cost estimate for relocating systems out of the 1755-1850 MHz band substantially or partially met GAO's best practices for cost estimates, but changes in key assumptions may affect future costs. Adherence to GAO's Cost Guide reduces the risk of cost overruns and missed deadlines. GAO found that DOD's preliminary estimate of $12.6 billion substantially met the comprehensive and well-documented best practices. For instance, it included complete information about systems' life cycles, and the baseline data were consistent with the estimate. However, GAO found that some information on the tasks required to relocate some systems was incomplete. GAO also determined that DOD's estimate partially met the accurate and credible best practices. For example, DOD applied appropriate inflation rates and made no apparent calculation errors. However, DOD completed only some sensitivity analyses and risk assessments at the program level, and none at the summary level. DOD officials said that changes to key assumptions could substantially change relocation costs.
Most importantly, decisions about which spectrum band DOD would relocate to are still unresolved, and relocation costs vary depending on the proximity to the 1755-1850 MHz band. Nevertheless, DOD's preliminary cost estimate was consistent with its purpose--informing the decision-making process to make additional spectrum available for commercial wireless services. No government revenue forecast has been prepared for a potential auction of the 1755-1850 MHz band, and a variety of factors could influence auction revenues. One private sector study in 2011 forecasted $19.4 billion in auction revenue for the band, assuming that federal users would be cleared and the nationwide spectrum price from a previous auction, adjusted for inflation, would apply to this spectrum. As with all goods, the price of spectrum, and ultimately the auction revenue, is determined by supply and demand. The Federal Communications Commission (FCC) and NTIA jointly influence the amount of spectrum allocated to federal and nonfederal users (the supply). The potential profitability of a spectrum license influences its demand. Several factors would influence profitability and demand, including whether the spectrum is cleared of federal users or must be shared.
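Forecasts of this kind are commonly built on a dollars-per-MHz-pop benchmark: revenue is approximately the price per MHz-pop times the bandwidth in MHz times the covered population. The sketch below illustrates only the arithmetic; the price, bandwidth, and population inputs are hypothetical values chosen for demonstration (they happen to yield a figure near the $19.4 billion forecast, but they are not the study's actual assumptions).

```python
def forecast_auction_revenue(price_per_mhz_pop: float,
                             bandwidth_mhz: float,
                             population: float) -> float:
    """Benchmark auction revenue as $/MHz-pop x bandwidth (MHz) x population."""
    return price_per_mhz_pop * bandwidth_mhz * population

# Hypothetical illustrative inputs: $0.68 per MHz-pop, the 95 MHz width of
# the 1755-1850 MHz band, and a covered population of roughly 300 million.
revenue = forecast_auction_revenue(0.68, 95, 300e6)
print(f"Illustrative forecast: ${revenue / 1e9:.1f} billion")
```

Under this method, the forecast moves linearly with each input, which is why factors such as sharing arrangements or clearing delays (which effectively discount the per-MHz-pop price) can have a large effect on expected revenue.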
To support the educational needs of children with disabilities, Congress originally enacted IDEA in 1975. Part B of IDEA authorizes federal funding for children aged 3 through 21 with a range of disabilities who need special education services. To receive federal funds, states and local education agencies must identify and evaluate children who have disabilities and provide special education and related services, as well as supplementary aids and services when necessary, to those who are eligible. Such services and supports are formulated in an IEP, which is developed, discussed, and documented by a student’s IEP team. In the 2004 reauthorization of IDEA, Congress required that, beginning no later than age 16, a student’s IEP must include measurable postsecondary goals related to training, education, employment, and where appropriate, independent living skills. The IEP also must specify the transition services needed to assist the student in reaching those goals. School officials are required to invite the student to a meeting where the transition services detailed in the IEP are discussed. When appropriate, they also must invite a representative of any participating outside agency (with the prior consent of the parent or student who has reached the age of majority). As students with disabilities exit high school, they may apply as adults and be found eligible for a number of federally funded programs, including federal disability programs, if they wish to obtain services important to their transition. There is wide diversity in this population—students with disabilities can have a range of physical and cognitive disabilities that can affect their ability to learn. They may also demonstrate varying levels of academic aptitude and achievement in different areas. Thus, the number of programs for which each student may be eligible can vary widely based on their abilities, postsecondary goals, and the types of supportive services they may need to be successful.
We identified a range of programs that provide services to support students with disabilities in their transition out of high school. These programs vary in the target population served, services provided, grant funding amounts, and other characteristics. In addition, they are authorized by multiple federal laws (administered through various federal agencies), each with its own eligibility requirements and application processes. (See fig. 1.) Moreover, federally funded programs that provide transition services, as defined in this report, are often delivered through state and local entities that have flexibility in how to administer services. The following four agencies have primary responsibility for administering federal programs that can provide services to transition-age youth with disabilities:

Education’s Rehabilitation Services Administration awards funds to state vocational rehabilitation (VR) agencies in the form of matching grants to help individuals with disabilities prepare for and engage in gainful employment. VR programs require that an individualized plan for employment be developed for eligible students before they leave high school. Furthermore, if the student is receiving special education services, this plan must be coordinated with the student’s IEP in terms of goals, objectives, and services.

Labor oversees the one-stop center system, a comprehensive workforce investment system created under the Workforce Investment Act of 1998 (WIA) that brings together multiple federally funded employment and training programs that can help all eligible individuals seeking employment and training—including students with disabilities. Labor also administers the Disability Employment Initiative, which is designed to improve educational, training, and employment opportunities and outcomes for youth and adults with disabilities who are unemployed, underemployed, and/or receiving Social Security disability benefits.
SSA provides cash benefits to qualifying individuals with disabilities—including transition-age young adults—through its Disability Insurance and Supplemental Security Income (SSI) programs. SSA also administers the Ticket to Work program, which is designed to enable individuals with disabilities (who are receiving disability insurance or SSI benefits and are between the ages of 18 and 64) to obtain services needed to find, enter, and retain employment. They obtain these services from providers such as VR agencies.

HHS’s Centers for Medicare & Medicaid Services manages Medicaid, the joint federal-state health care financing program for qualifying low-income individuals. Within the Medicaid program, states provide home and community-based services to individuals with certain types of disabilities—which may include young adults—who might otherwise be cared for in institutional settings. Because Medicaid usually does not cover home and community-based services, states must obtain a waiver to provide these services. Services provided in accordance with these waivers vary by state, are individualized, and may include, for example, case management, personal care attendants, or day or residential habilitation.

In addition, these and other federal agencies fund a number of other programs through grants to states, localities, and nongovernmental organizations that may assist students and young adults during their transition from high school. Some of these grants explicitly target improving postsecondary outcomes for students with disabilities and others provide a range of support services such as assistive technology, information and referral, advocacy, transportation, leadership development, benefits counseling, and independent living services. (See app. II for more information on federal programs that received federal funding in FY 2011 to provide transition services to students with disabilities.)
Students with disabilities face several challenges accessing federally funded programs that can provide transition services as they leave high school for postsecondary education or the workforce. These include difficulty navigating multiple programs that are not always coordinated; possible delays in service as they wait to be served by adult programs; limited access to transition services; a lack of adequate information or awareness on the part of parents, students, and service providers of available programs that may provide transition services after high school; and a lack of preparedness for postsecondary education or employment. Prior GAO work identified many of these same challenges, which is indicative of the longstanding and persistent nature of the challenges facing students with disabilities as they transition out of high school. In each of the five states we contacted, state officials said it can be difficult for students with disabilities and their families to navigate the multiple federal programs that provide transition services. Some officials said that the shift from being automatically entitled to services under IDEA if identified as disabled while in high school to having to apply as adults and be found eligible for multiple programs after exiting high school is difficult for students and their parents to understand. (See fig. 2.) Many of the stakeholders told us that a lack of coordination between programs was another key challenge for students with disabilities and/or their families. For example, staff from a parent training and information center in Minnesota said that it is very challenging for parents to navigate the system and coordinate resources for their children across programs. In their experience, none of the program officials coordinate with those from other programs to share information on clients. State officials suggested that a lack of coordination between programs often arises as early as during IEP transition planning meetings.
IDEA requires high schools to invite, with parental or student consent, representatives from adult programs likely to be responsible for providing or paying for transition services to the student after high school, such as VR, to these meetings to the extent appropriate. These representatives, however, are not required to attend, and we heard that they are often not at the table for transition planning meetings. VR officials from one state acknowledged this, saying that while they try to attend transition planning meetings, it is not always possible because of resource and time constraints. Some of the stakeholders suggested that without the commitment of local leaders and service providers to coordinate services between high school and adult programs, there is little to no communication between programs, which can create difficulty for families trying to navigate across different programs. In each of the five states we contacted, some officials said that differing requirements for adult programs can confuse students and parents. For example, officials from Florida’s department of VR said that the requirement for VR clients to have an individualized plan for employment that identifies an employment goal and the services and supports necessary to achieve that goal can be confusing for youth whose IEPs already include transition plans and an identified career goal. In addition, the amount of documentation each program requires can be overwhelming for students with disabilities and their parents. According to a student in Maryland, there is a continuous administrative burden on applicants to provide the same or similar information to multiple programs. Officials we interviewed from three of the four federal agencies acknowledged these challenges. In each of the states we contacted, officials suggested that it would be helpful to appoint a case manager to coordinate services and guide students and their families through the transition process.
Some of the parents and also officials from two of the four federal agencies agreed that a case manager could help students with disabilities and their families navigate across the multiple programs. However, officials from one federal agency cautioned that it could be costly and, given that programs that provide transition services are administered by different federal agencies and implemented at the state and local level, challenging to administer. Students with disabilities may also face delays in service upon leaving high school as they wait to obtain services from adult programs or for their eligibility determinations to be finalized. Many stakeholders said that delays in service can be caused by limited financial or program resources, which may leave youth with disabilities on waitlists for services. In particular, states may have waitlists—sometimes with several thousand individuals—for home and community-based waiver services. The departments of VR in four of the five states we contacted were operating under a federally required order of selection, requiring them to serve individuals with the most significant disabilities before serving others. Several parents from Minnesota said that their children had been on waitlists for waiver services or VR services for years. One parent from Florida said that her adult son was living at home with no services or employment options as he waited for waiver services from the state’s department of disability. Officials from Nevada’s department of VR said that delays in service may also occur when students with disabilities, upon leaving high school, must return assistive technology devices on loan from the school, such as software for blind individuals that reads text on a screen in a computer-generated voice. According to officials, some students go without these critical adaptive devices until VR is able to equip them with the same or similar technologies. 
Service delays can be exacerbated if students with disabilities have to wait until program officials resolve who should provide and pay for services. In addition, some adult programs will not provide services to students who are still eligible to receive services under IDEA. Officials from two states said that, as a result, there has been a shift toward keeping students with disabilities in high school longer so that schools continue paying for services until students graduate or turn 22 years of age. For example, officials from Maryland’s department of VR said that students with developmental disabilities who decide to leave high school before they age out of IDEA services often face a delay in services because the state department of developmental disability will not provide services to students younger than age 21. Some of the stakeholders said that differing eligibility criteria, definitions of disability, and assessment requirements for the various adult programs can also result in service delays while youth with disabilities wait for assessments or eligibility determinations. For example, officials in the four states in which we spoke with higher education officials said some colleges require students with disabilities to be reassessed before they can receive accommodations, and that this can cause a delay in service because there are long waitlists for these reassessments or because they are cost prohibitive for some families. Limited access to reliable public transportation to and from employment programs and service providers—especially in rural areas—was also frequently highlighted as a major challenge. For example, officials from Florida said limited funding for transportation services contributes to the lack of transportation for students with disabilities. 
Officials in each of the states we contacted also said that certain groups of students with disabilities are more likely to face limited service options or gaps in service because their disabilities may be less visible or because they are less likely to qualify for adult programs. These groups include students with developmental or cognitive disabilities, learning disabilities, mental health disabilities, autism, and mild disabilities. Further, we heard that there may be limited programs for students with hearing or visual impairments, and that if these students also have other disabilities, it can be difficult to determine which program (e.g., VR or a developmental disability agency) should provide services, which can lead to gaps in service. Similarly, officials said that students with disabilities who are in the juvenile justice system, are themselves parents, or are homeless may also be more likely to face gaps in service than other students with disabilities because they tend not to be aware of or connected to adult service providers. In addition, some students who qualified for services under IDEA and/or under Section 504 of the Rehabilitation Act may not meet the eligibility requirements for adult programs and may, therefore, have limited or no post-high school service options. For example, one parent told us that her daughter, who has a serious physical disability, did not receive any transition planning assistance and struggled to gain access to services such as personal care attendants who would help her successfully transition to a college out of state. A lack of adequate information and awareness of available program options on the part of parents, students, and service providers was another challenge highlighted during our site visits. Many stakeholders said that students with disabilities and their parents do not always receive enough information about the full range of service options after high school. 
For example, a parent from California said that she was very disappointed with the limited information she received from her school district and that she had no idea what resources were available for her son after he left high school. A student from Maryland expressed concern that students with disabilities who do not seek information about transition services outside of high school may not have access to information, and consequently, to needed services. In contrast, a few stakeholders said parents may receive too much information and feel overwhelmed. For example, a parent from California said that families may receive so much information that they do not remember everything and do not know where to seek help when the time comes. A staff member from the California Department of Education’s Workability program said that, even when information about transition services is available, it is generally not compiled and made available in one central place for families to access. She recommended that states or programs develop an accessible, easy to read transition manual that clearly lays out post-high school service options. Sometimes there was an issue with the accuracy of information parents received. For example, officials in three of the five states we contacted said that parents may be misinformed about programs, especially about the ability of their children to retain SSI benefits. Officials from Florida’s developmental disability agency noted that parents are often misinformed by teachers or adult program service providers that their children will lose these benefits entirely if they obtain any paid employment. Lack of awareness of service options also extended to teachers and other high school personnel. Many of the stakeholders said that teachers and other high school personnel may not always be aware of post-high school service options for students with disabilities.
For example, one parent said that while there are a lot of programs in her community that can aid students in transition, school personnel are not aware of them and therefore cannot appropriately guide students with disabilities and their families. Moreover, some experts and state education officials said that teacher training and professional development programs do not always adequately prepare teachers to provide transition services or inform them about the various agencies and resources available to students with disabilities. A few of the officials, however, said that teachers in some school districts are well trained in and aware of adult programs that can provide transition services, which allows them to disseminate information to students and their parents. In addition, some stakeholders said that service providers from adult programs may not be used to working with this student population or have limited awareness of other adult programs that can provide complementary transition services. For example, stakeholders in Maryland and Nevada said that VR counselors need additional training to work with transition-age youth with disabilities and officials from Maryland’s local workforce agencies said that one-stop center staff need more training to help these students enter the workforce. A representative from a parent training and information center in Maryland added that the knowledge service providers have about other programs is piecemeal and inconsistent. She suggested the federal government support additional training for all professionals who work with students in transition. Many stakeholders said that high schools do not always adequately prepare students with disabilities for college or the workforce, and cited several contributing factors. According to some officials, the federal requirement to begin transition planning by age 16 is too late. 
In fact, officials in four of the five states we contacted said that schools are required to start transition planning at an earlier age. In addition, in all five states we heard that schools’ emphasis on academic achievement has left little time for vocational and life skills training, even though these skills may be key to gaining and retaining employment—especially for students with disabilities. Officials from Minnesota’s department of VR said that schools need to pay greater attention to vocational training because students with disabilities are at a distinct disadvantage if they leave high school with no work experience. Further, officials from Maryland’s department of developmental disabilities said that because most jobs require a high school diploma, students with disabilities who receive certificates instead of diplomas could find their employment options significantly curtailed because many employers do not recognize alternative completion documents. As a transition specialist from Maryland noted, many students with non-traditional diplomas end up in sheltered workshops because they are not considered to be qualified for competitive employment opportunities. In addition, according to some stakeholders, adult programs are not always designed to meet the needs of transition-age youth with disabilities in ways that will help them succeed in college or in a job. For example, a few state officials said that the VR system does not provide incentives for serving transition-age youth with disabilities because VR’s performance indicators reward counselors for serving clients who find and maintain employment for at least 90 days, and youth with disabilities may take longer to do so. 
Similarly, representatives from California’s workforce agency said that the time frame of the employment outcome measures under the WIA youth program may be too short—for example, the employment retention rate at 6 months—and not appropriate for transition-age youth with disabilities who often require follow-up support longer than 6 months in order to be successful at a job. We previously reported that Education does not comprehensively measure the performance of VR for certain key populations, including transition-age youth. See GAO, Vocational Rehabilitation: Better Measures and Monitoring Could Improve the Performance of the VR Program, GAO-05-865 (Washington, D.C.: Sept. 23, 2005). Some stakeholders also said that youth may be encouraged to rely on social security benefits instead of receiving job training, and that students with more serious disabilities who could benefit from competitive employment (i.e., applying for and getting a job) may be steered instead toward adult day training programs and sheltered workshops.

Education, HHS, Labor, and SSA coordinate transition activities to some degree, but their coordination has limitations and they do not assess the effectiveness of their efforts. They coordinate on some specific transition activities, but their efforts are primarily focused on information sharing and lack elements that our prior work identified as enhancing and sustaining effective coordination. We have reported on the importance of developing common outcome goals and of engaging in strategic planning and coordination to address issues that cut across agency boundaries. This can take many forms, ranging from occasional meetings between agency staff to more structured joint policy teams operating over a long period of time. One federal coordination effort—the Federal Partners in Transition Workgroup—targets transition services to students with disabilities and involves all four agencies that administer the key programs that provide transition services to youth with disabilities.
However, this workgroup is informal and primarily involves information sharing among staff-level representatives, according to agency officials. For example, SSA officials told us that in past meetings, their staff presented information about SSI requirements for the transitioning youth population, including the process for redetermining eligibility for SSI when youth turn age 18, and information on the Student Earned Income Exclusion. To a lesser extent, some workgroup members also reported that they have jointly developed guidance for students with disabilities and grantees, including a fact sheet about how students can take advantage of Schedule A hiring authority for federal jobs. In addition, the workgroup has convened forums to help students with disabilities develop their leadership and self-advocacy skills and to discuss action steps to ensure students are prepared to move successfully to adulthood. This workgroup also convened a meeting of representatives of technical assistance centers to discuss coordination among the centers. Agencies involved in the workgroup reported varying levels of involvement in more extensive coordination activities, such as policymaking, program planning, and joint strategic planning. Labor officials leading the effort told us they are in the process of drafting a strategic plan to identify objectives, activities, and outcomes for the group. Education and Labor also participate in the National Community of Practice in Support of Transition, which was developed by the IDEA Partnership and focuses on joint efforts among state and local agencies to coordinate and improve outcomes for youth with disabilities in transition. Both agencies also have established intra-agency groups to facilitate collaboration between internal program offices. (See fig. 3.)
Education officials also said they recently sponsored a national transition conference for more than 800 professionals, families, and students to facilitate collaboration and communication across federal, state, and local entities. Aside from these efforts, officials said most of their interagency coordination regarding transition services occurs on an ad hoc basis, such as sharing white papers and holding informal discussions about policies, performance measures, and technical assistance to states. In addition, several federal coordination efforts broadly target disadvantaged youth or all individuals with disabilities and may address some aspects of transition. (See app. III). Some federal agencies are involved in new demonstration projects that plan to address coordination across systems at the state and local level. For example, an official from HHS stated that the agency has coordinated with Education and Labor to develop grants under the new Projects of National Significance Partnerships in Employment Systems Change. This initiative will provide resources for state agencies and service providers to collaborate with other services systems to develop statewide model demonstration projects that expand competitive employment for youth with developmental disabilities. In another example, officials at all four agencies said they have been involved in early discussions regarding implementation of the new Promoting Readiness of Minors in Supplemental Security Income (PROMISE) initiative, which will fund pilot projects in states to promote positive changes in the outcomes of youth SSI recipients and their families. Education officials said they are in the process of holding meetings to gather input on potential projects from federal partners and stakeholders, including state agency officials, service providers, researchers, policy experts, and families. 
As part of the initiative, Education and SSA officials said they will work collaboratively to identify legislative barriers to competitive employment and ways to improve coordination at the state level. In addition to collaborative efforts across agencies, Education officials said that six grants focusing on transition and funded by their Rehabilitation Services Administration are in their fifth and final year of operation. According to Education officials, these grants demonstrate the use of promising practices of collaborative transition planning and service delivery to improve the postsecondary education and employment outcomes of youth with disabilities.

Despite these efforts, federal agency officials identified several barriers that limit their ability to coordinate. We have reported that federal agencies face a range of coordination barriers, one of which stems from goals that are not mutually reinforcing or are potentially conflicting, making it difficult to reach a consensus on strategies and priorities. We have also found that interagency coordination is enhanced by having a clear and compelling rationale for staff to work across agency lines and to articulate the common federal outcomes they are seeking. Indeed, officials identified a lack of compatible outcome goals for transitioning students with disabilities as one of the key barriers that hinder their coordination efforts. Mutually reinforcing goals or strategies are designed to help align agency activities, core processes, and resources to achieve common outcomes. For example, officials noted that program goals promoting employment for students with disabilities can be countered by requirements for students to prove that their disabilities limit their ability to work in order to receive SSI benefits. Similarly, officials told us that, in early interagency discussions regarding the PROMISE initiative, special education officials focused on students' access to postsecondary education, while VR and SSA officials were more concerned about students' earnings.
Officials from all four agencies said that aligning outcome goals for transition-age students with disabilities would enhance interagency coordination and help agencies approach transition in a more integrated way. Some officials suggested establishing a common agreement on desired outcomes for transitioning students, such as economic self-sufficiency or engagement in meaningful employment, volunteer work, or postsecondary education by a certain age. (The age range for children served through special education under IDEA is 3 through 21 (20 U.S.C. § 1412(a)(1)(B)). SSI serves children from birth to age 18 (42 U.S.C. § 1382c(c)), at which point there must be a redetermination as to whether or not they are still eligible for SSI benefits as adults (42 U.S.C. § 1382c(a)(3)(H)(iii)).)

Officials also noted that agencies do not share information about the students served by their respective programs. As a result, agencies are limited in their ability to target services to recipients who might benefit from them. Moreover, integrating information about students served by multiple programs over time would allow agencies to assess the impact of transition services across programs, according to Education and SSA officials. In addition, officials said sharing information about common service recipients would help agencies serve students with disabilities in a more streamlined way. For instance, SSA could identify students receiving employment and training services through other federal programs and provide counseling to help them understand how paid employment affects their SSI benefits and health insurance, with an eye toward helping students attain greater economic self-sufficiency. Officials cautioned, however, that privacy concerns may limit some information sharing and make it difficult to integrate information from multiple systems. While officials noted that the Federal Partners in Transition Workgroup has discussed these information sharing challenges at some of its meetings, one official noted that there is no substantive effort to address them at the federal level.

Officials also identified a lack of clarity on agencies' roles and responsibilities for providing and paying for transition services as another coordination barrier. For example, each program has its own statutory authority, permitting it to pay only for certain services or types of services. This can create confusion, particularly at the state and local levels, about who is responsible for paying for a particular service. It can also result in frequent debates about which agency is responsible for funding services, according to some officials, creating a disincentive for agencies to work together. While certain state agencies, such as educational agencies and VR agencies, are required to articulate roles and responsibilities in interagency agreements, Education officials suggested that a program's authorizing statute should clearly define agency responsibilities to help avoid confusion and minimize potential delays and disruptions in delivering transition services.

Although federal agencies are engaged in some coordination efforts, these efforts represent a patchwork approach, and officials at all four agencies indicated there is no single, formal, government-wide strategy for coordinating transition services. While such a strategy is not required, we have previously cited the need for an overall federal strategy and government-wide coordination to align policies, services, and supports among the various disability programs, which include supports for transition-age students. Agency officials acknowledged that coordination specifically on transition services could be improved. For example, one official said agencies could work collaboratively to identify opportunities to address legislative and regulatory barriers to coordinating transition services.
Officials added that improved data collection and sharing could help agencies adopt a more coordinated and crosscutting approach to delivering transition services to students with disabilities. Labor officials leading the Federal Partners in Transition Workgroup said that, while an overall plan for transition remains beyond the group's scope of work, a framework that identifies what is needed for a successful transition could be used at the federal level to review collaboration across systems and to identify definition, service, and funding gaps. Such a framework could also be used at the local level to identify gaps in communities and individual plans.

It is unclear whether existing federal coordination efforts have had a positive effect on access to transition services because agencies do not assess their coordination efforts. We have reported that developing mechanisms to monitor, evaluate, and report on the results of their coordination efforts can help key decision makers within agencies, as well as clients and stakeholders, obtain feedback for improving both policy and operational effectiveness. For example, coordinating agencies could require members with lead responsibilities for a focus area to report on their progress in achieving defined objectives. Federal officials said that coordination has helped improve relationships and communication across agencies administering transition services, yielding an increased understanding of each other's research, policy, and evidence-based practices as a result of their involvement in interagency efforts, including the Federal Partners in Transition Workgroup. Agency officials also told us that some coordination efforts have led to increased engagement in transition policy by students with disabilities and their families and improved results in achieving career readiness and self-sufficiency.
However, these results are difficult to corroborate because agencies do not evaluate the impact of their efforts and in many cases do not track coordination outcomes at the federal level, according to agency officials. Furthermore, the effectiveness of existing federal coordination efforts is questionable, as evidenced by the persistent challenges students with disabilities face navigating multiple programs. Some federal agencies monitor compliance with requirements for grantees to coordinate with other state and local entities under individual programs. For example, Part B of IDEA requires state educational agencies to report annually on their performance using 20 indicators established by the Secretary. One of the indicators measures the state's compliance with the requirement under IDEA to include postsecondary goals and transition services in the IEPs of students age 16 and above, and to invite the student and, if appropriate, representatives from other participating state agencies to the student's IEP team meetings if transition services are to be discussed. Similarly, state VR agencies must report annually to the Rehabilitation Services Administration on whether they have identified the responsibilities of other agencies through statute, regulation, or written agreements, and must undergo monitoring of their coordination activities. These monitoring reviews, however, mainly address compliance with programmatic and fiscal requirements, help ascertain whether state agencies have in place signed formal interagency agreements, and check whether these agreements include key components such as providing technical assistance to school districts on transition planning. Agency officials noted that there are no quantifiable measures to assess how effectively transition services are coordinated, and that any assessment is typically based on observation and a review of practices and procedures rather than on data.
The current federal approach to assisting students with disabilities in their transition to postsecondary education or the workforce necessitates that students and their parents navigate multiple programs and service systems to piece together the supports these students need to achieve maximum independence in adulthood. Under this complex structure, information dissemination and service coordination are essential. Without receiving accurate and timely information about available services, students may miss opportunities to access needed services that could mean the difference between achieving an optimal level of self-sufficiency and relying on public assistance to meet their basic needs. While officials report that federal agency coordination efforts, such as the Federal Partners in Transition Workgroup, have improved relationships and built shared knowledge across participating agencies, they have yet to adopt a broader interagency strategic approach to addressing longstanding challenges in providing transition services to students with disabilities. The transition workgroup, in particular, represents a unique vehicle that could provide leadership in developing such a strategy specifically focused on students with disabilities who are transitioning out of high school. Given the multiple agencies involved in supporting this population, in conjunction with multiple eligibility criteria and definitions established in statute, the lack of such a strategy is a missed opportunity to break down coordination barriers and work across agency boundaries. Only then can agencies systemically address persistent transition challenges and improve outcomes for students with disabilities. Furthermore, without assessing the effectiveness of federal coordination efforts, agencies are unable to determine what works well, what needs improvement, and where best to direct increasingly constrained federal resources. 
To improve the provision of transition services to students with disabilities through enhanced coordination among the multiple federal programs that support this population, we recommend that the Secretaries of Education, HHS, and Labor, and the Commissioner of SSA direct the appropriate program offices to work collaboratively to develop a federal interagency transition strategy. This strategy should address:

1. compatible policies, procedures, and other means to operate across agency boundaries towards common outcomes for transitioning youth and their families;

2. methods to increase awareness among students, families, high school teachers, and other service providers on the range of available transition services; and

3. ways to assess the effectiveness of federal coordination efforts in providing transition services.

To the extent that legislative changes are needed to facilitate the implementation of this transition strategy, agencies should identify and communicate them to the Congress.

We provided a draft of this report to officials at the Departments of Education, HHS, and Labor, and to SSA for their review and comment. Their responses are reprinted, respectively, in appendixes IV, V, VI, and VII of this report. They also provided technical comments, which we incorporated as appropriate. In their comments, all four agencies agreed with our recommendation and noted that they have been or will be in contact with each other to expedite preliminary discussions on an implementation strategy. Some of the agencies also described coordination efforts beyond those mentioned in our draft report. Specifically, Education said it is currently engaged in numerous transition coordinating activities with HHS, Labor, and SSA related to discretionary grants, legislative proposals, draft regulations, policy positions, and program improvements.
Education highlighted the National Transition Conference it hosted in May 2012, explaining that the four agencies worked together to plan and participate in all stages of the conference with the goals of raising awareness of services, sharing promising practices, and creating an action agenda to improve transition outcomes for youth with disabilities. HHS noted that it funds the Consortium to Enhance Postsecondary Education for Individuals with Developmental Disabilities. This consortium conducts research, provides training and technical assistance, and disseminates information on promising practices that support individuals with developmental disabilities to increase their independence, productivity and inclusion through access to postsecondary education. Since 2010, HHS has also collaborated with Education and Labor on Project SEARCH, a program to support local students with disabilities in their last year of high school to experience work opportunities within these federal agencies. Labor stated that it plans to reach out to Education and SSA to explore ways to formalize its Federal Partners in Transition Workgroup. This group will work to help align policies, services, and supports provided by various programs to transition-age youth with disabilities, and to help identify legislative and regulatory barriers that prevent the coordination of transition services. Moreover, this group would assess the impact of its coordination efforts by developing common outcome goals. Finally, HHS noted that the Developmental Disabilities Assistance and Bill of Rights Act of 2000 does not provide for direct transition services. In response, we clarified, in figure 1, that the act provides funding for activities that support employment and training for youth with disabilities. HHS also questioned the relevance of several programs included in our list of federal programs that provide transition services, on the basis that the programs do not provide direct services. 
We agree that one of these programs, Partnership in Employment Systems Change Grants, is intended to enhance collaboration rather than provide transition services; therefore, we removed it from the list. However, we disagreed that the Youth Information, Training and Resources Centers program should be omitted from the list. This program provides self-advocacy services that we consider to be a type of transition service for youth. Similarly, we disagreed that Developmental Disabilities Protection and Advocacy should be omitted from the list. This program provides information on transition services and supports to youth, among other things. Consequently, both programs are still included.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Education, HHS, and Labor, as well as the Commissioner of SSA, and other interested parties. In addition, the report will be available at no charge on our website at http://gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or moranr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII.

Our review examined the (1) challenges students with disabilities may face accessing federally funded transition services; and (2) extent to which federal agencies coordinate their transition activities. To determine the challenges students with disabilities may face accessing transition services as they leave high school for postsecondary education or the workforce, we selected a nongeneralizable sample of five states and interviewed state and local officials responsible for administering the key federal programs that provide transition services.
We visited four states: California, Florida, Maryland, and Minnesota, and interviewed officials in Nevada by phone. In the four states we visited, we also met with groups of parents and students with disabilities to discuss the challenges they face. In addition, we met with a number of experts in the field of transition and with associations representing young adults with a wide range of disability types to obtain their perspectives on challenges students face during transition. Finally, we reviewed the definitions of disability and the eligibility criteria in selected federal statutes that govern relevant federal programs providing transition services to identify any potential legislative or regulatory challenges they may pose. To assess the extent to which the four key federal agencies that administer programs providing transition services—the Departments of Education (Education), Health and Human Services (HHS), and Labor (Labor), and the Social Security Administration (SSA)—coordinate their transition activities, we interviewed agency officials, obtained their written responses to questions about their coordination efforts, and reviewed agency documents. We analyzed this information based on GAO criteria detailing activities that can enhance and sustain collaboration among federal agencies. We conducted this performance audit from July 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We selected the five states in our nongeneralizable sample based on the number of grants each state received under key federal programs that provide transition services, recommendations from agency officials and experts, and geographic diversity, to the extent possible. To identify these key federal programs that provide grants to states and localities for transition services, we searched the Catalog of Federal Domestic Assistance (CFDA) and asked relevant agency officials to verify this list of programs and identify any programs that were not captured in our search results. Based on this search, we identified six federal grant programs that had a specific focus on improving transition services, and we looked at the distribution of grants to select states that received a relatively high number of federal grants for transition services. We also asked agency officials and experts for their recommendations of states with model programs or promising practices related to transition services and/or state-level collaborative efforts to improve transition outcomes. We did not do an independent legal analysis to verify program information from the CFDA or agency officials. To identify what additional challenges, if any, students may face in states with relatively few programs that provide transition services, we also selected one state with relatively few federal grant programs to determine if the key challenges identified were similar to those in other states. In each state we visited, we met with officials from state departments of education or special education, higher education, vocational rehabilitation, developmental disabilities, workforce agencies, and staff from parent training and information centers. 
In addition, with the exception of Nevada, staff from parent training and information centers in each state assisted us by organizing discussion groups with parents and students with disabilities who were in the process of planning their transition from high school to postsecondary education or employment or had recently made the transition out of high school. In a few states, we also met with officials from centers for independent living, other nongovernmental organizations that received federal grants to provide transition services, and transition specialists and experts. See table 1 for a complete list of the organizations and groups we interviewed. During our interviews, we discussed challenges students with disabilities may face—including legislative or administrative barriers, potential gaps in transition services, knowledge of teachers and other service providers about transition services and options, parent and student awareness of available transition services and options, and coordination among federal agencies providing transition services. Finally, we asked officials from the relevant Education, HHS, Labor, and SSA program offices for their perspectives on the challenges faced by transitioning students with disabilities. To supplement the information collected during our interviews, we reviewed written responses and documents provided by officials from state and local organizations; reviewed selected statutory language related to some of the main legislative challenges identified by federal, state, and local officials; and conducted a limited literature review of recent research related to transition challenges. To evaluate the extent to which federal agencies coordinate their transition activities, we asked officials from Education, HHS, Labor, and SSA to complete a data collection instrument we developed that requested information on their coordination efforts and activities relating to transition services.
We reviewed agency officials' written responses to determine whether their efforts were formal or informal, whether they were targeted towards transitioning students with disabilities, which agencies were involved, and which specific activities were coordinated. We also interviewed agency officials from relevant program offices at each agency to obtain additional information about ongoing coordination efforts related to transition services. These interviews also addressed inter- and intra-agency coordination efforts related to transitioning students with disabilities, examples of successful outcomes from these coordination efforts, any agency assessments of their coordination efforts, and potential barriers to coordination. In addition, we reviewed and analyzed available documents from each agency, including their strategic plans, performance reports, and agency performance measures; program websites and descriptions; and other relevant agency documents, such as joint technical guidance. We assessed the extent of the agencies' coordination efforts based on GAO's criteria for practices agencies can use to help enhance and sustain interagency collaboration.

To provide an overview of federal programs that provide transition services to youth with disabilities, we identified 21 such programs administered by five federal agencies: Education, HHS, the Department of Justice, Labor, and SSA. (See app. II.) To identify these programs, we first searched the CFDA using key subject terms related to transition services for students with disabilities. This search produced a preliminary list of programs that was reviewed independently by two analysts. Each analyst reviewed the program descriptions in CFDA and from the relevant program websites, as necessary, and independently determined whether a program should be excluded due to clear lack of relevance to transition services for students with disabilities. The analysts then compared and discussed their decisions to further refine the list of programs. From this second list, we selected programs that met the following criteria: they (1) exclusively serve individuals with disabilities, including students of transition age (age 14 to 25); (2) provide transition services directly to youth going from high school to postsecondary education or the workforce and/or services to their families; and (3) received federal funding in fiscal year 2011. The 21 programs included in this appendix met these selection criteria. In contrast, the programs described in the background section of this report are examples of broader programs administered by Education, HHS, Labor, and SSA that support transition-age students with disabilities, although they may not directly provide transition services. We also asked agency officials to identify any programs meeting our selection criteria that were not included in our search results and to provide additional information on the selected programs. We followed up with agency officials through teleconferences and email, as necessary, to clarify program information and make a decision to include or exclude programs. We reviewed agency documentation and selected laws and regulations to verify eligibility criteria, including definitions of disability, and funding information.

To assess the reliability of recipient data reported in our tables, we reviewed agency officials' responses to questions regarding how they collected the data, any potential limitations of the data, and the databases and systems used to maintain the information on program recipients. To assess the reliability of funding data, we reviewed publicly available and agency-provided budget documents. In cases where funding amounts for specific programs were not separately reported, we clarified the information with agency officials and noted that data were reported by the agency.
Based on our review of agency officials’ responses to our questions and of budget documentation, we determined that the recipient and funding data we reported were sufficiently reliable to include in this report. Tables 2 to 6 of this appendix contain information on various federal programs that provide transition services to youth with disabilities. Some of the coordination efforts of the Departments of Education (Education), Health and Human Services (HHS), Labor (Labor), and the Social Security Administration (SSA) broadly address youth or individuals with disabilities (see fig. 4). A focus on transition-age students with disabilities may or may not be explicitly included in these federal coordination efforts, but agency officials indicated that all of these efforts include discussions of programs or policy that impact this population in some manner. In addition to the contact named above, Meeta Engle (Assistant Director), Nora Boretti (Analyst-in-Charge), Rachel Batkins, Brenna Guarneros, and Jennifer McDonald made significant contributions to this report. In addition, assistance, expertise, and guidance were provided by Susan Anthony, James Bennett, Amy Buck, Susannah Compton, Elizabeth Curda, Jill Lacey, Kathy Leslie, Craig Winslow, and Carolyn Yocom. High Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Postsecondary Education: Many States Collect Graduates’ Employment Information, but Clearer Guidance on Student Privacy Requirements Is Needed. GAO-10-927. Washington, D.C.: September 27, 2010. Higher Education and Disability: Education Needs a Coordinated Approach to Improve Its Assistance to Schools in Supporting Students. GAO-10-33. Washington, D.C.: October 28, 2009. Young Adults with Serious Mental Illness: Some States and Federal Agencies Are Taking Steps to Address Their Transition Challenges. GAO-08-678. Washington, D.C.: June 23, 2008. 
Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008. Highlights of a Forum: Modernizing Federal Disability Policy. GAO-07-934SP. Washington, D.C.: August 2007. Summary of a GAO Conference: Helping California Youths with Disabilities Transition to Work or Postsecondary Education. GAO-06-759SP. Washington, D.C.: June 20, 2006. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Vocational Rehabilitation: Better Measures and Monitoring Could Improve Performance of the VR Program. GAO-05-865. Washington, D.C.: September 23, 2005. Federal Disability Assistance: Wide Array of Programs Needs to Be Examined in Light of 21st Century Challenges. GAO-05-626. Washington, D.C.: June 2, 2005. Workforce Investment Act: Labor Has Taken Several Actions to Facilitate Access to One-Stops for Persons with Disabilities, but These Efforts May Not Be Sufficient. GAO-05-54. Washington, D.C.: December 12, 2004. Special Education: Federal Actions Can Assist States in Improving Postsecondary Outcomes for Youth. GAO-03-773. Washington, D.C.: July 31, 2003.
The transition out of high school to postsecondary education or the workforce can be a challenging time, especially for students with disabilities. Multiple federal agencies fund programs to support these students during their transition. In 2003, GAO reported that limited coordination among these programs can hinder a successful transition. GAO was asked to provide information on the (1) challenges students with disabilities may face accessing federally funded transition services; and (2) extent to which federal agencies coordinate their transition activities. GAO reviewed relevant federal laws, regulations, and agency documents from Education, HHS, Labor, and SSA, which administer the key programs that provide transition services. GAO also administered a data collection instrument to gather program information from these agencies. Finally, GAO interviewed various stakeholders, including state and local officials, service providers, parents, and students with disabilities, in five states selected based on the number of federal grants they received to fund transition services. Students with disabilities face several longstanding challenges accessing services that may assist them as they transition from high school into postsecondary education or the workforce--services such as tutoring, vocational training, and assistive technology. Eligible students with disabilities are entitled to transition planning services during high school, but after leaving high school, to receive services that facilitate their transition they must apply as adults and establish eligibility for programs administered by multiple federal agencies. Students with disabilities may face delays in service and end up on waitlists if these programs are full. 
In addition, while all five states GAO contacted have taken steps to coordinate their transition services and assist families with the transition process, officials said that it is still difficult for students and their parents to navigate and for providers to coordinate services across different programs. Officials and parents GAO spoke with also noted a lack of sufficient information or awareness of the full range of service options available after high school on the part of students with disabilities, parents, and service providers. In addition, state and local officials said students with disabilities may not be adequately prepared to successfully transition to life after high school. This may be due, in part, to limited opportunities to engage in vocational and life skills training or obtain work experience while in school. The Departments of Education (Education), Health and Human Services (HHS), and Labor (Labor), and the Social Security Administration (SSA) coordinate transition activities to some degree, but their coordination has limitations and they do not assess the effectiveness of their efforts. One coordinating body involves all four agencies and focuses on transition services. However, that group's primary coordination activity is information sharing among staff-level representatives rather than developing common outcome goals and establishing compatible policies for operating across agencies. Agency officials told GAO that a lack of compatible outcome goals for transitioning students and differences in statutory eligibility criteria are among the barriers that hinder interagency coordination for this population. While agencies collaborate to some extent, their efforts represent a patchwork approach and there is no single, formal, government-wide strategy for coordinating transition services for students with disabilities. 
Moreover, it is unclear what impact coordination has on service provision because agencies do not assess the effectiveness of their coordination activities. To improve the provision of transition services for students with disabilities, GAO recommends that Education, HHS, Labor, and SSA develop an interagency transition strategy that addresses (1) operating toward common outcome goals for transitioning youth; (2) increasing awareness of available transition services; and (3) assessing the effectiveness of their coordination efforts. All four agencies agreed with the recommendation.
The Great Lakes Basin covers approximately 300,000 square miles, encompassing Michigan and parts of Illinois, Indiana, Minnesota, New York, Ohio, Pennsylvania, Wisconsin, and the Canadian province of Ontario (see fig. 1), as well as lands that are home to more than 40 Native American tribes. It includes the five Great Lakes and a large land area that extends beyond the Great Lakes, including their watersheds, tributaries, and connecting channels. Numerous environmental stressors threaten the health of the Great Lakes and adjacent land within the Great Lakes Basin. Decades of industrial activity in the region have left a legacy of contamination, such as from polychlorinated biphenyls (PCB), in the sediments that make up the beds of rivers and harbors in the Great Lakes Basin. In 1987, the United States and Canada identified a list of 43 severely degraded locations in the Great Lakes Basin as Areas of Concern—26 of which are located entirely in the United States; 5, shared by the United States and Canada; and 12, located entirely in Canada. As of May 2015, 4 of the Areas of Concern located entirely in the United States had been delisted, or removed, from the binational list. In addition, the fertile soil in the surrounding states makes them highly productive agricultural areas, resulting in large amounts of nutrients such as phosphorus and nitrogen—as well as sediment, pesticides, and other chemicals—running off into the Great Lakes. Moreover, large population centers on both sides of the U.S. and Canadian border use the Great Lakes to discharge wastewater from treatment plants, which also introduces nutrients into the Great Lakes. Even with progress in reducing the amount of phosphorus in the lakes in the 1970s, harmful algal blooms are once again threatening the Great Lakes Basin. The United States has long recognized the threats facing the Great Lakes and has developed agreements and programs to support restoration actions. 
For example, in 1972, the United States and Canada signed the Great Lakes Water Quality Agreement to restore, protect, and enhance the water quality of the Great Lakes to promote the ecological health of the Great Lakes Basin. In addition, in 2002, the Great Lakes Legacy Act authorized EPA to carry out sediment remediation projects in the 31 Areas of Concern located entirely or partially in the United States, among other things. In 2004, the Task Force agencies collaborated with governors, mayors, tribes, and nongovernmental organizations in the Great Lakes region in an effort referred to as the Great Lakes Regional Collaboration, which led to the development in 2005 of the Great Lakes Regional Collaboration Strategy to Restore and Protect the Great Lakes. More than 1,500 individuals participated in this effort. In 2009, the President created the Asian Carp Regional Coordinating Committee to coordinate efforts to prevent Asian carp from spreading and becoming established. Even with these actions, the Great Lakes are environmentally vulnerable. In 2009, the President proposed $475 million in his fiscal year 2010 budget request for a new interagency initiative to accelerate the restoration of the Great Lakes. Specifically, the President requested that EPA and its federal partners coordinate state, tribal, local, and industry actions to protect, maintain, and restore the integrity of the Great Lakes. Most recently, in 2015, multiple bills to authorize the GLRI were introduced in the House and Senate. Some of these bills, if enacted, would authorize $300 million to be appropriated annually to carry out the GLRI for fiscal years 2016 through 2020. 
When Congress made funds available for the GLRI in fiscal year 2010, the conference report accompanying the appropriations act directed EPA to develop a comprehensive, multiyear restoration action plan for fiscal years 2011 through 2014, to establish a process to ensure monitoring and reporting on the progress of the GLRI, and to provide detailed, yearly program accomplishments beginning in 2011. As discussed in our July 2015 report, in fiscal years 2010 through 2014, $1.68 billion of federal funds was made available for the GLRI, and as of January 2015, EPA had allocated nearly all of the funds, about $1.66 billion. Also, as of January 2015, Task Force agencies had expended $1.15 billion for 2,123 projects (see fig. 2). GLRI funds are available for obligation for the fiscal year the appropriation was made and the successive fiscal year. After these 2 fiscal years of availability, GLRI funds can be used for 7 additional years to expend and adjust those obligations. Task Force agencies conduct GLRI work themselves or by awarding funds to recipients through financial agreements, such as grants, cooperative agreements, or contracts. Potential recipients of GLRI funds include federal entities; state, local, or tribal entities; nongovernmental organizations; academic institutions; and others, such as for-profit entities, agricultural producers, or private landowners. A single GLRI project can involve multiple funding recipients. Table 1 shows the number of projects funded with GLRI funds made available in fiscal years 2010 through 2013 by the five agencies we reviewed in our 2015 report and type of recipient, as of July 2014. The type of GLRI funding recipients vary depending on the agency and financial agreements involved. For example, NOAA has entered into agreements with a variety of recipient types, with the exception of private landowners and agricultural producers. 
Funding recipients are responsible for reporting information to their funding agencies about the progress of their GLRI projects. As discussed in our September 2013 and July 2015 reports, in response to the conference report’s direction to develop a multiyear restoration action plan, in February 2010, the Task Force published the Fiscal Years 2010 to 2014 Great Lakes Restoration Initiative Action Plan (2010-2014 Action Plan) to guide the activities of the GLRI for those years. The 2010-2014 Action Plan was organized into five focus areas that, according to the Task Force agencies, encompassed the most significant environmental problems in the Great Lakes: (1) toxic substances and Areas of Concern; (2) invasive species; (3) nearshore health and nonpoint source pollution; (4) habitat and wildlife protection and restoration; and (5) accountability, education, monitoring, evaluation, communication, and partnerships. For each focus area, the 2010-2014 Action Plan included long-term goals, objectives to be completed within the 5-year period covered by the plan, and measures of progress—28 in total—that were designed to ensure that efforts are on track to meet the long-term goals. Each of the 28 measures included annual targets for fiscal years 2010 to 2014. The Task Force issued an updated Action Plan for 2015 to 2019 (2015-2019 Action Plan) in September 2014 to guide the GLRI for those years. The updated plan retains four of the focus areas of the 2010-2014 Action Plan, and the fifth focus area was modified and called “foundations for future restoration actions.” As we reported in September 2013, EPA assesses GLRI progress primarily by evaluating performance toward meeting the annual targets for the 28 measures of progress in the Action Plan. In our 2013 report, we found that the 2010-2014 Action Plan did not identify the links between a focus area’s goals, objectives, and measures of progress. 
That is, some of the goals and objectives in the Action Plan were not linked with any measures. We recommended that the EPA Administrator, in coordination with the Task Force as appropriate, identify linkages between long-term goals, objectives, and measures in the Action Plan for 2015 to 2019. In response to our recommendation, each focus area in the updated Action Plan is associated with two or three objectives and several measures of progress, clearly identifying the links between each objective and measure of progress. In response to the conference report’s direction to establish a process to ensure monitoring and reporting on the progress of the GLRI, EPA created the Great Lakes Accountability System (GLAS) in 2010 to collect information for monitoring GLRI projects and progress. In cooperation with the Task Force, EPA also created a GLRI website, to provide information to both the public and funding recipients about the GLRI program and GLRI projects. In September 2013, we found that the information on GLRI projects in GLAS may not be complete, which may prevent EPA from producing sufficiently comprehensive or useful assessments of GLRI progress. For example, GLAS limited users to submitting information about progress using a single measure of progress, while GLRI projects may directly address multiple measures. This prevented EPA from collecting and reporting complete progress information on each of the measures addressed by GLRI projects. As a result, we recommended that the EPA Administrator, in coordination with the Task Force, capture complete information about progress for each of the measures that are addressed by a project. In response to this recommendation, EPA modified GLAS to allow GLAS users to report information in GLAS about more than one measure of progress, beginning in January 2014. 
In July 2015, we found that some GLAS data were inaccurate, in part because recipients entered information inconsistently due to inconsistent interpretation of guidance, unclear guidance, or data entry errors. In May 2015, while we were completing our work for that report, EPA stopped using GLAS and began using the Environmental Accomplishments in the Great Lakes (EAGL) information system to collect GLRI project information and issued initial guidance for using EAGL. EPA officials told us that the agency created EAGL and, after consulting with Task Force agencies, conducted pilot tests of the system while we were completing our review of GLAS. After the pilot tests, in May 2015, EPA officials decided to use EAGL to collect information to monitor and report on GLRI progress, and they made the system available to Task Force agencies for an initial period of data entry. In our July 2015 report, we said that this is a good first step to resolving the data inconsistencies that we identified in GLAS, which resulted, in part, because of unclear or undocumented definitions, data requirements, and guidance about entering important data. However, as of that date, EPA had not yet established data control activities or other edit checks, although in commenting on a draft of the report, EPA stated that it planned to establish data control activities, such as verifications and documented procedures, for ensuring the reliability of the EAGL information system. Fully implementing the actions needed to address the reliability of GLRI project data should ensure that EPA and the Task Force agencies can have confidence that EAGL can provide complete and accurate information. EPA officials told us that the agency plans to use the initial data entry period to solicit feedback from the Task Force agencies in order to make changes to EAGL and the user guidance. The officials said their goal is to have EAGL ready for data entry at the beginning of fiscal year 2016. 
As we reported in July 2015, in response to the conference report’s direction to provide detailed, yearly program accomplishments beginning in 2011, EPA and the Task Force released two accomplishment reports in 2013 and one in 2014 that provided overviews of progress under the GLRI for fiscal years 2010 through 2012. These reports included summary accomplishment statements for each of the five focus areas from the 2010-2014 Action Plan, as well as specific performance information for many of the 28 measures of progress in the 2010-2014 Action Plan. The process for identifying each agency’s GLRI work and share of GLRI funding has evolved since fiscal year 2010 to emphasize interagency discussion. As discussed in our July 2015 report, EPA officials described four steps that Task Force agencies generally followed to identify GLRI work and funding, and the five agencies we reviewed followed these steps. The steps are as follows: Agency identification of GLRI work. EPA officials said that during the first step, each agency conducted an internal analysis to identify GLRI work that they wanted to conduct, either themselves or through other entities, within a fiscal year. Task Force agreement on scope and funding for agencies’ work. In the second step, the five agencies we reviewed held discussions with the Task Force and agreed on the work that would be done in a given fiscal year, as well as the amount of GLRI funds that would be needed to conduct that work. In general, once the agencies made a final determination of the work they would conduct in a fiscal year, and the GLRI funds that would be made available, each agency entered into an interagency agreement with EPA to transfer GLRI funds from EPA to the agency. Solicitation of proposals for projects designed to carry out agencies’ GLRI work. In the third step, agencies solicited project proposals from potential recipients to conduct the work identified in the second step. 
Project proposals were generally solicited through an announcement, such as a request for applications, posted on an agency’s website or in other ways, such as by e-mail. Requests for applications included criteria that the agency would use to rank applications and select projects, among other things. Selection of projects. In the fourth step, agency officials evaluated project proposals and selected the projects they would fund. Officials from the Task Force agencies we reviewed generally described similar processes for evaluating project proposals. Specifically, they said that agency officials with the appropriate expertise reviewed and ranked proposals against information in the request for applications and selected the best scoring projects for funding. The process for identifying each agency’s annual GLRI work and share of GLRI funding has evolved from one in which project and funding decisions were made on an agency-by-agency basis to one in which subgroups formed of multiple agency officials discuss and decide on what work should be done. According to EPA officials, for fiscal years 2010 and 2011, the Task Force and the five agencies agreed on work that each agency would do on an agency-by-agency basis. Officials from the agencies said that they identified work based on existing plans and worked with the Task Force to determine the work the agencies would do and the funds the agencies should receive. Beginning with fiscal year 2012, the Task Force began emphasizing interagency discussions as it created three subgroups made up of federal agency members, one subgroup for each of three priority issues. The three priority issues, which aligned with three of the five focus areas in the 2010-2014 Action Plan, were (1) cleaning up and delisting Areas of Concern located entirely or partially in the United States, (2) preventing and controlling invasive species, and (3) reducing phosphorus runoff that contributes to harmful algal blooms. 
For example, the Areas of Concern subgroup considered how close each Area was to being delisted and what cleanup actions were needed for delisting, as identified by the Area of Concern managers, among other things. Overall, the Task Force set aside a total of $180 million of the available GLRI funds to address the priority issues for fiscal years 2012 through 2014: $52.2 million in fiscal year 2012, $63.4 million in fiscal year 2013, and $64.7 million in fiscal year 2014. For 2015, EPA officials said that the Task Force began creating additional subgroups to identify work and funding for all five of the focus areas in the 2015-2019 Action Plan, not just the three priority issues. According to EPA officials, the focus on priority issues for fiscal years 2012 through 2014 accelerated restoration results for one of the three priority issues. Specifically, two of the Areas of Concern targeted for accelerated cleanup by the relevant subgroup were delisted in 2014. EPA announced in October 2014 that the White Lake and Deer Lake Areas of Concern had been delisted—both had been identified by the Areas of Concern subgroup for accelerated cleanup with priority issue funds—and EPA officials told us that they expect cleanup work to be completed at four other Areas of Concern in fiscal year 2015 as a result of receiving priority issues funds. In the 25 years before the three priority issues were identified, only one Area of Concern located entirely in the United States had been delisted. In addition, EPA officials said that identifying and funding the three priority issues for fiscal years 2012 through 2014 also allowed for continued success in invasive species prevention and resulted in some progress in reducing phosphorus runoff that contributes to harmful algal blooms. 
However, restoration results in those two priority issues are less clear than in the Areas of Concern priority issue, in large part because the factors contributing to those priority issues persist and are likely to continue into the future. In July 2015, we reported that the Task Force, as part of its oversight of GLRI, makes some information on GLRI projects available for Congress and the public in two ways: annual accomplishment reports and the GLRI website. The annual accomplishment reports included information about some, but not all, project activities and results. Specifically, we found that the accomplishment report for progress in fiscal year 2011 identified 10 GLRI projects (2 projects in each of the five focus areas in the 2010-2014 Action Plan) and included some information about project activities and results for each project. For example, the report noted that the “Milwaukee River (Wisconsin)—restoring fish passage” project removed a dam, opening 14 miles of the river and 13.5 miles of tributaries to allow fish to move more freely, and reconnected the lower reach of the river with 8,300 acres of wetlands, improving water quality. The report provided similar information about nine additional projects. The accomplishment reports about GLRI progress in fiscal years 2010 and 2012 also included information about project activities and results, although most information was not associated with individual projects. For example, a statement from the accomplishment report for fiscal year 2012, “GLRI partners are implementing strategic invasive species control efforts that establish or take advantage of partnerships that will continue invasive species monitoring, maintenance, and stewardship beyond the duration of individual projects,” does not identify the specific projects where these efforts are taking place. 
EPA also made some information available on GLRI projects on the GLRI website, including a project’s funding agency, title, funding amount and year, recipient identification, focus area, and description. This information does not include GLRI project activities and results because the website is not designed to include it. Each of the five Task Force agencies we reviewed collected information on its projects, including activities and results of the projects they funded, although this information is not collected and reported by EPA. Overall, for the 19 projects we reviewed, recipients reported a variety of project activities, including applying herbicide, conducting training and workshops, and collecting data. In addition, we found that recipients reported a range of results. For example, funding recipients from 8 projects reported results that can be directly linked to restoration, such as increasing lake trout production, removing acres of invasive plant species, and protecting acres of marshland. For one of these projects, the Buffalo Audubon Society reported results needed to restore critical bird habitat, such as planting 3,204 plants and removing invasive species, among other results. For another project, the Great Lakes Fishery Commission reported results in the form of improved methods for capturing sea lamprey, an invasive species, which is a parasite that was a major cause of the collapse of lake trout, whitefish, and chub populations in the Great Lakes during the 1940s and 1950s. According to a Great Lakes Fishery Commission official, the results from this project will help to further suppress sea lamprey production in the Great Lakes, thereby reducing the damage they cause to native and desirable species. For example, a single lamprey can kill up to about 40 pounds of fish in its lifetime. For the 11 remaining projects, recipients reported results that can be indirectly linked to restoration; that is, the results may contribute to restoration over time. 
These included results such as simulations and data for helping decision makers make better restoration decisions in light of climate change, as well as education and outreach tools to increase awareness of invasive species. In addition, a University of Wisconsin-Madison representative told us that the university’s project to improve applied environmental literacy, outreach, and action in Great Lakes schools and communities through train-the-trainer professional development institutes can contribute to restoration. Progress reports for the university’s project noted that the project resulted in more than 110 school teams that guided students in restoration, service learning, inquiry, and citizen science monitoring during the 2013-2014 school year, among other things. The representative said that this contributed to restoration because participating students have implemented conservation practices, such as building rain gardens that slow stormwater runoff and remove contaminants from polluted runoff. Chairman Gibbs, Ranking Member Napolitano, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Susan Iott (Assistant Director), Mark Braza, John Delicath, Carol Henn, Kimberly McGatlin, Jeanette Soares, Kiki Theodoropoulos, and Michelle K. Treistman also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Great Lakes, which contain much of North America's freshwater supply, provide economic and recreational benefits to millions of people. They face significant stresses, however, that have caused ecological and economic damage. Decades of industrial activity in the region, for example, left a legacy of contamination that resulted in the United States and Canada identifying, since 1987, 43 Areas of Concern. The GLRI was created in 2010 to, according to EPA, accelerate efforts to protect and restore the Great Lakes. It is overseen by a Task Force of 11 federal agencies that is chaired by the EPA. EPA was directed, in a conference report, to develop a restoration action plan, establish a process to ensure monitoring and reporting on progress, and provide detailed yearly accomplishments. This testimony is based on GAO reports issued in September 2013 and July 2015. It focuses on (1) GLRI funding, action plans, and reports; (2) the process used to identify GLRI work and funding; and (3) information available about GLRI project activities and results. For the 2015 report, GAO reviewed a sample of 19 GLRI projects funded by the five Task Force agencies that received the majority of GLRI funds, among other things. As GAO reported in July 2015, of the $1.68 billion in federal funds made available for the Great Lakes Restoration Initiative (GLRI) in fiscal years 2010 through 2014, nearly all had been allocated as of January 2015. Of the $1.66 billion allocated, the Environmental Protection Agency (EPA) and the other 10 Great Lakes Interagency Task Force (Task Force) agencies expended $1.15 billion for 2,123 projects (see fig.). 
Status of GLRI Funds, FY 2010-2014

Task Force agencies can either conduct work themselves or enter into financial agreements, such as grants, cooperative agreements, or contracts with others, such as federal entities; state, local, and tribal entities; nongovernmental organizations; and academic institutions. To guide restoration work, EPA and the Task Force have developed two consecutive multiyear restoration action plans. EPA also created a process to ensure monitoring and reporting on the progress of the GLRI, and EPA and the Task Force issued three accomplishment reports. The process to identify each agency's GLRI work and funding has evolved to emphasize interagency discussion. In fiscal year 2012, the Task Force created subgroups to discuss and identify work on three issues: cleaning up severely degraded locations, called Areas of Concern; preventing and controlling invasive, aquatic species that cause extensive ecological and economic damage; and reducing nutrient runoff from agricultural areas. EPA officials said that the Task Force created additional subgroups to identify all GLRI work and funding in 2015. In July 2015, GAO found that the Task Force has made some information about GLRI project activities and results available to Congress and the public in three accomplishment reports and on its website. In addition, the individual Task Force agencies collect information on activities and results, although this information is not collected and reported by EPA. Of the 19 projects GAO reviewed, 8 reported results directly linked to restoration, such as improved methods for capturing sea lamprey, an invasive species that can kill up to about 40 pounds of fish in its lifetime. The remaining 11 reported results that can be indirectly linked to restoration; that is, the results may contribute to restoration over time. 
These included results such as simulations and data for helping decision makers make better restoration decisions in light of climate change, as well as education and outreach tools to increase awareness of invasive species. GAO recommended in 2013 that EPA improve assessments of GLRI progress, among other things. EPA agreed and has taken several actions. GAO is not making any recommendations in this testimony.
DOD invests about $12 billion to support its science and technology community, which it relies upon to identify, pursue, and develop new technologies to improve and enhance military capabilities. This community comprises DOD-wide research agencies, including DARPA, as well as military service research agencies and laboratories, test facilities, private industry, and academic institutions, and is overseen by the Office of the Assistant Secretary of Defense for Research and Engineering. The research and development activities these different components engage in are intended to produce mature technologies that DOD can integrate and deliver in systems that support its warfighters. This integration process, known as product development, represents the handover of breakthrough technologies from DOD's science and technology community to its acquisition community. Although not precisely defined, technology transition generally occurs at the point when advanced technology development ends and this new product development begins. Figure 1 illustrates DOD's technology management process. DOD has long noted the existence of a chasm between its science and technology community and its acquisition community that impedes technology transition from consistently occurring. This chasm, often referred to by department insiders as "the valley of death," exists because the acquisition community often requires a higher level of technology maturity than the science and technology community is willing to fund and develop. In 2007, DOD reported that this gap can be bridged only through cooperative efforts and investments from both communities, such as early and frequent collaboration among the developer, acquirer, and user. We have also reported extensively on shortfalls across DOD's technology management enterprise in transitioning technologies from development to acquisition and fielding.
In June 2005, we found that DOD technology transition programs faced challenges in selecting, managing, and overseeing projects and in assessing outcomes. In September 2006, we found that DOD lacked the key planning, processes, and metrics used by leading commercial companies to successfully develop and transition technologies. More recently, in March 2013, we found that the vast majority of DOD technology transition programs provide technologies to military users, but tracking of project outcomes and other benefits derived after transition remained limited. DARPA's scientific investigations run the gamut from laboratory efforts to the creation of full-scale technology demonstrations in the fields of biology, medicine, computer science, chemistry, physics, engineering, mathematics, material sciences, social sciences, neurosciences, and more. The agency solicits proposals for research work in support of its scientific endeavors through broad agency announcements. These solicitations seek thought leaders and technological pioneers who can leverage new ideas in science to advance the state of the art beyond the practical application of knowledge. Non-DARPA entities respond to broad agency announcements by submitting proposals for executing work to meet the agency's stated needs. DARPA reviews those proposals based on technical merit, and entities receiving awards are thereafter referred to as performers. To execute solicitations, awards, and program oversight, DARPA relies on approximately 220 government employees, including nearly 100 program managers. Program managers report to DARPA's office directors and their deputies, who are responsible for charting the strategic directions of six technical offices. The technical staff is supported by experts in security, legal and contracting issues, finance, human resources, and communications.
DARPA’s Director and Deputy Director approve new programs and lead scientific and technical reviews of ongoing programs, while setting agency-wide priorities and ensuring a balanced investment portfolio. Currently, DARPA has about 250 ongoing research and development programs in its portfolio. The 10 recently completed programs that we reviewed for this report together spanned a broad range of research areas, including communications, navigation, and health and marine sciences. Table 1 highlights the research focuses of these 10 programs in more detail. Since 2010, DARPA has had success in transitioning new technologies from the research environment to military users, including DOD acquisition programs and warfighters. DARPA maintains a portfolio-level database that identifies these outcomes by program. However, the agency’s process for tracking technology transition outcomes is not designed to capture transitions that occur after a program completes and does not provide DARPA with an effective means for updating its database. We used outputs from this database to select 10 case study programs, but later identified inconsistencies affecting three programs in how transition outcomes were reported in the portfolio-level database versus how they were reported in other program documentation. We then concluded that DARPA’s portfolio-level database was unreliable for assessing transition rates and outcomes since fiscal year 2010. Our analysis of the 10 selected programs did, however, identify four factors that contributed to transition successes, the most important of which were military or commercial demand for the planned technology and linkage to a research area where DARPA has sustained interest. DARPA’s technological approach focuses on radical innovation that addresses future warfighting needs, rather than developing technologies that address current warfighting needs. This approach shapes how the agency defines, pursues, and tracks technology transition. 
DARPA considers a successful transition to be one in which its program, or a portion of its program, influences or introduces new knowledge. This knowledge is often passed through program performers, which DARPA relies on to execute technology development in its programs. Typical performers include commercial enterprises; other DOD entities, such as military service laboratories and research agencies; and academic institutions. Further, DARPA generally does not develop technologies to full maturity. Instead, the agency focuses on demonstrating the feasibility of new technologies, which includes verifying that the concepts behind the technologies have potential for real-life applications. As a result, most DARPA technologies require additional development before they are ready for operational or commercial use. Therefore, follow-on development is the predominant path of technology transition at DARPA. Table 2 highlights the different technology transition paths that DARPA technologies can take. DARPA's definition of what constitutes technology transition reflects one of many in use within DOD. In June 2005, the Office of the Deputy Under Secretary of Defense for Advanced Systems and Concepts, in collaboration with the Defense Acquisition University (DAU), published guidance defining technology transition as "the use of technology in military systems to create effective weapons and support systems—in the quantity and quality needed by the warfighter to carry out assigned missions at the 'best value' as measured by the warfighter." However, DOD officials told us the 2005 guidance is outdated, does not constitute department policy, and should be considered only a useful reference source.
In the absence of current DOD policy, in a March 2013 report we identified three communities to which DOD technologies typically transitioned: acquisition programs; directly to the field for use by the warfighter; and other users such as science and technology organizations, test and evaluation centers, or industry. The communities we identified in 2013 are similar to the transition outcomes listed in the 2005 guidance, which broadly lists commercialization, acquisition programs, and follow-on development by the prime contractor as primary pathways of technology transition. In a subsequent report in December 2013, we found further differences in how the military services define technology transition and additional confirmation that DOD itself lacks a formal definition of technology transition across the department. These variations, in tandem with the absence of a standard DOD-wide definition of technology transition, prevent the military services, DOD research agencies, and other DOD entities from consistently defining and tracking technology transition. This lack of a formal definition of technology transition means that DOD entities, such as DARPA, are free to define and categorize technology transition for themselves. Following a program's completion, DARPA officials identify and record transition outcomes in accordance with the technology transition paths identified in table 2. DARPA collects this information in a portfolio-level database that spans all of its recently completed programs. The agency uses this database primarily to provide incoming program managers with training on potential transition opportunities. Figure 2 illustrates in more detail DARPA's process for assessing technology transition outcomes in its programs. DARPA's process for tracking technology transition outcomes is not designed to capture transitions that occur after a program completes and the agency's agreements with program performers have ended.
After this point, however, program performers often continue to develop their technologies using non-DARPA sources of funding. According to DARPA officials, these efforts can result in later transitions of technologies to commercial products—including ones that are sold back to DOD for military use—without the agency’s knowledge. This process for tracking technology transition outcomes also does not provide DARPA with an effective means for updating its portfolio-level database. We used outputs from this database to select 10 case study programs (5 that transitioned and 5 that did not transition), but later identified inconsistencies affecting three programs in how transition outcomes were reported in the portfolio-level database versus how they were reported in other program documentation that we reviewed. This confusion about ultimate transition outcomes persisted during our interviews with DARPA officials. As a result, we concluded that DARPA’s portfolio-level database was unreliable for assessing transition rates and outcomes since fiscal year 2010. Table 3 highlights the inconsistencies we found in our reviews. The inconsistencies we identified suggest that DARPA’s current approach to tracking technology transitions can limit its understanding of transition outcomes. This may undermine its ability to craft transition plans for new programs based on the lessons learned from previous programs. We have previously identified technology transition tracking as a longstanding issue at DOD. For example, in September 2006, we found that tracking technology transitions and the effect of transitions, such as cost savings or deployment of the technology in a product, provided key feedback that can inform the future management of programs. 
However, in March 2013, we found that DOD stopped tracking transition outcomes in many programs once a program stopped receiving funding, which consequently limited visibility into the extent of successful transitions within the DOD portfolio. DARPA has undertaken efforts to understand the elements that contribute to or impede successful technology transitions. According to DARPA officials, a technology's maturity level, availability of military service funding, alignment with military service requirements, and transition planning by the program manager influence whether or not a DARPA-developed technology successfully transitions. These characteristics align with the findings of a 2001 DARPA-funded study, which reported that mission, program manager turnover, timing, funding, and regulations, among other elements, affect transition success. In our review of 10 case study programs, we found different, but related, factors for transition success as compared to the ones put forward by DARPA: (1) military or commercial demand for the technology, (2) linkage to a research area where DARPA has sustained interest, (3) active collaboration with potential transition partners, and (4) achievement of clearly defined technical goals. Based on our analyses, we identified two factors—military or commercial demand for the planned technology and linkage to a research area where DARPA has sustained interest—as generally evident at program initiation and most important to transition. The remaining two factors—active collaboration with potential transition partners and achievement of clearly defined technical goals—sequentially follow the first two factors and become observable once a program is underway. Figure 3 highlights these four factors. In reviewing the 10 programs, we found that the existence of the factors identified varied from program to program.
We assessed the extent to which the four factors were present within the 10 programs we reviewed, and table 4 highlights these results. We found that successful transitions were often underpinned by existing military or commercial demand for the technology. DARPA officials told us that all of the agency's programs are linked to military and joint service needs at a high level, but through our analyses, we found that this commitment was exemplified when either of the following was present in the program files:
- an agreement between DARPA and (1) a military service, (2) a DOD research agency or laboratory, or (3) another warfighter representative that a related military capability gap or requirement exists; or
- a private company identified a commercial demand for the technology or showed an interest in commercializing it.
For example, the Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program addressed a known capability gap for speech translation technology within the Army. As a result, the Army developed the appropriate requirements documents that allowed the technology to successfully transition to an Army acquisition program of record. These documents identified desired performance attributes and system parameters, which served to better define and communicate the Army's need for TRANSTAC. The Army's decision to validate specific performance requirements provided TRANSTAC an opportunity to transition into an Army program of record. Within our 10 case studies, we found that a military or commercial demand was fully present within four of the five programs that successfully transitioned. In two cases—TRANSTAC and Quint Networking Technology (QNT)—near-term military demand was a result of DOD's ongoing involvement in warfighting operations. However, in the other two cases, an immediate military need for the technology was not as prevalent.
A fifth program that transitioned, Advanced Wireless Networks for Soldier (AWNS), initially was in demand by the Army, but interest waned over time as other options for radio networking platforms emerged. In addition, several programs developed technologies that demonstrated military applicability but lacked a military or commercial demand, which precluded successful transition. For example, the Predicting Health and Disease (PHD) and Nastic Materials programs successfully demonstrated innovative research concepts that had potential military applications, but an immediate military or commercial demand simply did not exist without further maturation of the technologies past the point of program completion. We also found that a program's linkage to a research area in which DARPA has sustained interest often facilitated successful transition. This interest was demonstrated by evidence that, in the years preceding the program's initiation, at least two related DARPA or other DOD science and technology programs had been completed. Sustained interest is also exemplified by a program's reuse of existing research facilities and data from related programs, among other things. Of our 10 case studies, all 5 programs that successfully transitioned were fully linked to sustained research interests, whereas 4 of the 5 non-transitioning programs did not have any such linkage. DARPA's program portfolio is currently organized around 10 research focus areas under four key research themes. DARPA officials report that the Hypersonics Capability focus area, for example, reflects an ongoing interest for the agency that dates back to the mid-1980s. The Falcon Combined-cycle Engine Technology (FaCET) program is one of several recent DARPA programs within the Hypersonics Capability focus area. In addition, FaCET's research was done in concert with other hypersonic programs within DOD.
As a result of this sustained interest, FaCET technologies transitioned to other hypersonics programs, including DARPA’s Mode Transition program, the joint DARPA/Air Force Hypersonic Air-breathing Weapon Concept program, and the Air Force Research Laboratory’s Robust Scramjet and Enhanced Operability Scramjet Technology. Moreover, due to the National Aeronautics and Space Administration’s (NASA) sustained involvement in FaCET, technologies were also transitioned to NASA’s Glenn Research Center’s Combined-Cycle Engine Large Scale Inlet Mode Transition Experiment program. We found that in all five cases where transition occurred, active collaboration with potential transition partners was fully present. This collaboration generally consisted of early program involvement by stakeholders within the government and commercial sectors, service requirements officials, and military liaison officers, among others. DARPA program managers were responsible for facilitating this early stakeholder involvement, including identifying the potential transition partners needed to assist with their programs. According to DARPA officials, achieving active collaboration with potential transition partners is highly dependent on the nature of the program and background of the program manager, which might be in academia, private industry, or military services. For example, a program manager with a military background might be familiar with DOD’s acquisition process and have connections with service officials who can facilitate transition. On the other hand, a program manager with an academic background might lack DOD service connections, in which case DARPA’s military liaison officers can be used to facilitate collaboration. DARPA’s Architecture for Diode High Energy Laser Systems (ADHELS) program exemplifies how active collaboration with potential transition partners can facilitate successful technology transition. 
ADHELS development involved several technological components, including volume Bragg grating (VBG) technology. VBG is a transparent device made of refractive glass that, when combined with a diode laser, can control the laser output—such as by magnifying laser power, narrowing a laser beam, or controlling the beam quality of the laser diode. According to DARPA officials, the agency contracted with the foremost experts on VBG technology to develop ADHELS components, recognizing that adaptations of the VBG technology had potential applications within the commercial marketplace. As ADHELS development progressed, DARPA continued to engage its performers, who then licensed the VBG technology to an ADHELS subcontractor. This subcontractor formed the commercial entity Optigrate to further develop the VBG technology for commercial sale. Conversely, the programs that lacked active collaboration with potential transition partners encountered challenges such as funding shortfalls, requirements uncertainties, and underperforming technologies. For example, early technical challenges prompted DARPA to restructure the Self-Regenerative Systems (SRS) program to focus exclusively on technology maturation, canceling initial plans to demonstrate and evaluate SRS technologies on a transition partner's system. This decision constrained opportunities to identify potential transition partners and actively collaborate with them during the program. We found that defining and, ultimately, achieving clear technical goals helped facilitate technology transition. Of the five programs that successfully transitioned, this factor was fully present in three programs and partially present in the remaining two. Clearly defined technical goals often existed in the form of documented agreements among stakeholders that outlined technical specifications and desired capabilities, funding requirements, development schedule, and organizational responsibilities for technology development.
These agreements allowed DARPA to share development, management, and funding responsibilities with its service partners, which facilitated shared understanding of technical goals and mutual commitments to the program's success and transition. Equally important, though, was the degree to which a program achieved its stated technical goals. Most of the programs we reviewed identified clear technical goals, but fewer than half actually achieved the technical goals that were originally set. DARPA's QNT program represents one example where clearly defined technical goals were set and achieved. QNT was initiated with support from the Air Force and Navy, which helped DARPA craft clear technical goals including size, weight, robustness, transmission rates, and other performance attributes of the technology. Defining technical goals during the early stages of the program also secured each organization's commitment to playing a role in managing, developing, funding, demonstrating, and testing QNT. As a result, stakeholders then worked together to test QNT technical performance at several military exercises and in theater, where the system performed to expectations and gained added exposure within DOD. Ultimately, QNT transitioned to the Army's Intelligence, Surveillance, and Reconnaissance Network program, which fielded the system in Afghanistan in September 2011. QNT also transitioned to two Navy weapons programs and was selected by the Air Force for use in its Battlefield Airborne Communications Node program, which hosts a data link communications system between aircraft and ground units. In other cases, such as AWNS, ADHELS, and Nastic Materials, technical goals were clearly defined but only partially met. These partial successes nonetheless produced substantive technological gains. In the cases of AWNS and ADHELS, these gains—coupled with the presence of other key factors—proved sufficient to promote technology transition.
On the other hand, three programs lacked clearly defined goals—or did not substantively achieve those goals—which led to significant restructuring or to the development of technologies that did not align with the needs of a planned transition partner. For example, Marine Corps officials stated that the Tactical Underwater Navigation System relied on divers swimming at unsustainable speeds to calibrate its positioning, which was not responsive to their interests. DARPA's investment of program funds and staff is primarily focused on the highest priority of its agency mission, which is creating radically innovative technologies that support DOD's warfighting mission. Technology transition is a secondary priority at the agency. DARPA leadership conducts periodic reviews of agency programs, but these reviews are focused on scientific and technical aspects of the programs and do not assess technology transition strategies. Instead, the Director, DARPA, delegates responsibility for oversight and assessment of technology transition strategies to a subordinate office. DARPA also provides limited training to program managers related to technology transition, instead relying on others within the agency to assist program managers with this activity, as needed. In addition, although DARPA disseminates information on its past programs, it does not take full advantage of available, government-sponsored resources for sharing technical data. DARPA has also elected not to participate in most DOD programs intended to facilitate technology transition, with the exception of mandated small business programs, citing the challenges it perceives in meeting the process and reporting requirements of these DOD programs within DARPA's typical timeframes for executing its research initiatives. At DARPA, the desire for innovation drives investment, both in terms of recruitment and programs.
DARPA hires world-class scientists and engineers from private industry, universities, government laboratories, and research centers to serve as program managers. According to DARPA officials, program managers are given great flexibility in leading their programs, building their teams, and allocating funds to achieve their programs’ objectives, including technology transition. DARPA officials stated that these expectations are outlined to program managers during new hire orientations, but are not codified in any agency-wide policy or guidance. To ensure that new ideas for advanced technologies are continuously coming into DARPA, the agency usually limits the tenure of its program managers, as well as the duration of its programs, to 3 to 5 years. In this environment, program managers prioritize achieving programs’ technical objectives, which can require the overwhelming majority of their available time. This focus on innovation, which corresponds with undertaking bold, ambitious programs, makes the pursuit of technology transition a secondary priority for the agency. Consequently, programs generally seek to prove the art of “what is possible” rather than refining, producing, and delivering tactical equipment to warfighters. According to DARPA officials, the agency views these latter processes as the responsibility of military service research agencies, laboratories, and acquisition programs of record. However, DARPA officials report that potential transition partners in the acquisition community are often unwilling to commit to incorporating new technology into their programs of record without additional maturation, and service research agencies and laboratories both have their own programs and priorities to pursue. Consequently, the additional maturation work needed to position DARPA programs for effective transitions can go unfunded. According to DARPA officials, this dynamic has proven to be a major impediment for the agency in transitioning technology. 
In addition, the introduction of DARPA’s radically innovative technologies can disrupt the status quo for military programs, budgets, and warfighting doctrine, which can drive cultural opposition within the military services. DARPA officials stated that the agency’s research sometimes leads to the identification of technologies and capabilities that military service officials do not initially want or think their services will need, although these technologies can eventually provide important military capabilities. For example, DARPA officials said that the Air Force was initially highly resistant to investments in stealth technologies for aircraft. Despite this resistance, DARPA proceeded with the development of stealth technologies, and today they are in use on multiple DOD weapon systems, including the F-22 Raptor and F-35 Lightning II fighter aircraft. DARPA’s secondary emphasis on transition is a long-standing characteristic of the agency’s culture, as evidenced in studies commissioned by DARPA in 1985 and 2001, which found that the agency does not place enough emphasis on technology transition. The 1985 report recommended that DARPA designate full-time technology transition facilitators, due to problems that were identified in the transition of technologies to the military services. The 2001 report recommended matching program manager tenure to the expected length of the programs to which they are assigned—rather than setting arbitrary dates of departure—and defining additional training and incentives for technology transition. According to DARPA officials, the Director, DARPA, has undertaken several initiatives to improve the agency’s emphasis on technology transition, including transition-focused quarterly meetings with each of the military service chiefs or their deputies and establishment of the Adaptive Execution Office in 2013, which was chartered to accelerate the transition of game-changing DARPA technologies into DOD capabilities. 
In addition, DARPA officials stated that the Director has shifted the role of the agency’s military service liaisons to focus exclusively on assisting program managers with military service engagement and transition of DARPA technologies. DARPA officials report that these actions have elevated the priority of and resources devoted to technology transition within the agency. The Director, DARPA, conducts oversight of programs through periodic milestone reviews. These reviews assess a program’s scientific and technical merit, and, according to DARPA officials, provide the Director with information on the transition status of the program. According to DARPA officials, the scope of these reviews is reflective of and consistent with the agency’s top priority of creating innovative technologies. However, these reviews do not assess a program’s strategy for achieving technology transition. DARPA leadership delegates oversight and review of technology transition strategies to the agency’s Adaptive Execution Office, which coordinates with program managers to review and provide input on technology transition strategies, particularly in the latter stages of programs. DOD policy, however, assigns to the Director, DARPA, the responsibility to pursue “strategies” that “increase the impact of DARPA’s research and development programs” and “speed the transition of successful research and development programs to the military departments and defense agencies,” among other scientific and technical functions. Consequently, by not assessing technology transition strategies at the program milestone reviews it chairs, the Director, DARPA, is forgoing key opportunities to perform this function. This approach undermines transition planning and introduces risk that DARPA programs will not achieve their full transition potential. 
Apart from the policy cited above, the Office of the Secretary of Defense does not maintain other instructions or directives related to technology transition at DARPA. In previous years, different components within the Office of the Secretary of Defense have issued nonmandatory guidance on technology transition, which has, at times, applied to DARPA programs. However, the guidance is now outmoded in that it does not address changes in key science, technology, and acquisition processes that have occurred during the last 10 years. The Office of the Assistant Secretary of Defense for Research and Engineering, which has primary oversight of DARPA and other DOD research agencies, provides a great deal of latitude to these agencies to define their own technology transition policies and procedures. Most notably, officials from this office stated to us that technology transition is no longer an explicit function of the office and that the DOD division formerly responsible for technology transition no longer exists. Instead, the office now limits its technology policy responsibilities to minimizing unnecessary duplication of research efforts within DOD, disseminating research knowledge throughout DOD, and sharing that information with the general public. DARPA’s program managers receive limited training on how to effect technology transition in their programs. This training consists primarily of overviews on DARPA’s technology transition paths and considerations to make at program milestones with respect to technology transition. DARPA program managers are not subject to the formal training and certification requirements applicable to permanently hired science and technology managers at military service laboratories. DOD requires managers in these laboratories to complete Defense Acquisition University (DAU) training courses in science and technology, including how they apply to technology transition. 
These courses lead to progressively higher knowledge and certifications over their careers. DARPA officials countered that completing these courses and achieving science and technology manager certifications would demand an inordinate amount of program managers' time, particularly if DAU requires DARPA staff to complete all of the typical prerequisite courses required of other managers. Further, DARPA officials stated that, given the agency's broad discretion to pursue breakthrough technologies rather than manage specific acquisition programs, they do not consider the DAU training courses as relevant to their program managers. In lieu of more robust training, DARPA officials stated that the agency supports its program managers' transition efforts by providing access to various transition planning and outreach resources. For example, program managers are supported by the agency's Adaptive Execution Office, which assists them in developing their transition plans and in communicating with the military's transition stakeholders at DOD's combatant commands. Program managers are further supported by DARPA's military liaison officers from the Air Force, Army, Navy, and Marine Corps, who help program managers identify and reach out to potential transition partners or other stakeholders in the military services. These liaisons also arrange for DARPA leadership to meet with the military's senior leaders, in part to advocate for the transition of DARPA programs to the military services. As we found in September 2006, these liaisons can also provide operational advice for planning and strategy development and an understanding of service perspectives, issues, and needs so that potential customers can be identified and effective agreements can be written. 
Program managers also are authorized to use program funds to hire experienced contractors and government staff from other agencies to aid technology transition activities in their programs. Previous guidance and studies, including one commissioned by DARPA in 2001, have recommended that DARPA improve its technology transition training for program managers through additional training and mentoring programs related to technology transition. Further, in 2005, DOD issued guidance on technology transition stating that developing and executing a training plan for the members of the team supporting technology transition is essential to their success. Similarly, Standards for Internal Control in the Federal Government indicates that effective management of an organization’s workforce, which includes providing necessary training to the organization’s staff, is essential to achieving results. DARPA’s limited training for program managers on technology transition is inadequate to consistently position programs for transition success. Without sufficient training, program managers may not develop the skills and knowledge that they need to identify and engage potential transition partners and facilitate transition successes. While DARPA does not currently rely on other DOD entities for technology transition training, individual DARPA program managers may voluntarily elect to take training related to technology transition in DOD or other federal organizations. For example, the Federal Laboratory Consortium for Technology Transfer was established by law in the Federal Technology Transfer Act of 1986 to, among other things, (1) develop training for federal lab employees engaged in technology transfer and (2) facilitate communication and cooperation between federal laboratories. The Consortium offers both in-person and online training regarding commercialization of technologies, as well as guidance regarding best practices. 
In response to our inquiries on this subject, DARPA officials indicated they were in discussions with the DAU staff regarding potential future training options. Disseminating information regarding developed technologies is a way for agencies to promote technology transition after the conclusion of a program, particularly once program managers and staff are no longer actively advocating for the transition of their program’s technologies. For many years, DOD has maintained website-accessible databases that disseminate information within the department, and to a lesser extent, to the public and to private companies. These websites allow their users to search for related technologies while considering new programs or products that could possibly use them. While DARPA disseminates information on past programs through the use of public government websites, its selective approach to posting this information does not maximize the chances of DARPA technologies being identified and selected by potential transition partners. Currently, DARPA disseminates information on past programs through both internal and external means, but does not share information with key data repositories that the federal government sponsors, which may obscure visibility into its programs and lead to missed transition opportunities. Since the 1960s, DARPA has provided substantial amounts of information regarding its technologies to the official DOD dissemination website managed by the Defense Technical Information Center (DTIC). Although the majority of DARPA-related information in this database is restricted to DOD staff, it is by far the largest repository that private companies and the general public can access for information on DARPA technologies. For instance, we found that while non-DOD users can access approximately 3,600 DARPA technical records through DTIC’s public website, DOD users can access over 30,000 of these records. 
DARPA also maintains an “Open Catalog” public website for disseminating information on its programs, although it currently contains technical information on only a few dozen active and completed programs. In comparison, DARPA’s public website also provides brief, non-technical descriptions of 194 active DARPA programs. Two other government-sponsored websites, operated by the DOD TechLink public-private partnership and the Federal Laboratory Consortium for Technology Transfer, also exist to help science and technology agencies disseminate technology information. DARPA officials indicated they do not share information with either of these entities and instead rely exclusively on DTIC, which they described as DOD’s official repository. In recent years, the White House has provided direction to broaden access to non-sensitive information on government-developed technologies, in recognition of government research’s potential to catalyze innovative breakthroughs that drive the U.S. economy and advance progress in areas such as health, energy, the environment, agriculture, and national security. In February 2013, the White House’s Office of Science and Technology Policy instructed the federal government’s science and technology community to begin planning how to disseminate information on technologies they have developed. According to DARPA officials, the lead DOD agency for implementing this system is DTIC, and they do not expect DOD to have a dissemination system in place that fully addresses the requirements of the memorandum until 2017. The Office of the Secretary of Defense manages several DOD programs intended to accelerate development, testing, and delivery of mature technologies that provide new solutions for military needs. These programs share the general purpose of facilitating technology transition, but they vary in the types of technology developers and operational needs they target. 
For example, in partnership with the military services, the Joint Capability Technology Demonstration program addresses joint warfighting needs of the combatant commands by demonstrating mature technology prototypes that may transition to acquisition programs or directly to the warfighter in the field. Other programs such as the Small Business Innovation Research (SBIR) program fund small business research and development with the goal that innovations produced will be commercialized and eventually sold back to DOD. Table 5 lists these programs. According to DARPA officials, the only programs that DARPA participates in are the SBIR and Small Business Technology Transfer (STTR) programs because it is legally required to do so. However, the agency’s knowledge of transition outcomes associated with SBIR and STTR expenditures is limited. DARPA officials said they do not maintain a comprehensive list of agency programs using SBIR or STTR funds—or the transition paths of technologies developed with these funds—because these data are not always reported or accessible. According to DOD small business program management officials, who oversee the use of these funds throughout the department, transition outcome data are not required from any military service or agency, including DARPA. These officials further stated that once a small business contract ends, DOD’s means for compelling contractors to identify and report on successful transitions expires. In December 2013, we recommended that DOD improve its tracking of technology transition outcomes in SBIR-funded programs by establishing a common definition of technology transition for all SBIR projects and improving the completeness, quality, and reliability of SBIR transition data that it reports. These tracking shortfalls precluded us from assessing the extent to which DARPA’s SBIR and STTR funds contribute to successful technology transitions. 
In lieu of comprehensive transition data, DARPA officials have worked with some of their prior program contractors—who successfully developed and transitioned technologies—to identify small business program success stories. In addition, DARPA officials stated that they are developing contract language for future SBIR awards that would require firms to identify their transition and commercialization outcomes as an addendum to their final report. Apart from the legally required small business programs, DARPA officials said that the processes and reporting requirements associated with participating in DOD’s other technology transition programs are generally cumbersome and do not align with DARPA’s time frames for executing programs or mission of creating disruptive technologies over relatively long periods of time. Conversely, DOD transition programs are mainly intended for mature technologies, or short-term efforts that can be fielded quickly. DARPA officials explained that technologies their programs develop usually require additional maturation in subsequent technology development efforts, either within DARPA or at military service laboratories, before transitioning to acquisition programs or warfighters. DARPA officials also said that agency leadership generally views the use of these funds as unnecessary given that DARPA’s budget currently provides adequate funding to support its research endeavors. DARPA officials also indicated that they are exploring stronger relationships with the Joint Capability Technology Demonstration (JCTD) program, particularly in the area of prototyping. In previous decades, DARPA used funds from the predecessor to the JCTD program—then known as the Advanced Concept Technology Demonstration program—to develop and demonstrate technologies. These efforts include currently fielded systems such as the Air Force’s Global Hawk and Predator unmanned aircraft and Miniature Air-Launched Decoy systems. 
DARPA officials also stated that Manufacturing Technology program funds have been applied after DARPA program completions to improve the affordability of and manufacturing base for semiconductors developed by DARPA. Technology transition does not have to occur at the expense of innovation, but should instead be viewed as a natural extension of innovation. When DARPA places technology in the hands of a user, operational knowledge is gained that can be used to improve the technology and further scientific innovation. However, DARPA leadership does not fully subscribe to this viewpoint; instead, it is satisfied with maturing technology to the point where feasibility, but not functionality, is proven. Today, programs progress through DARPA without the agency head fully assessing whether transition strategies make sense. Such assessments, if measured against key transition factors, could improve a program’s potential for transition success. Transition responsibilities then fall almost exclusively on individual program managers, who are often not sufficiently trained to achieve the favorable transition outcomes they seek. Further, when the program manager’s tenure expires, the primary advocate for transitioning the program’s technology is also lost. This turnover increases the need for technical gains to be appropriately documented and disseminated so that user communities have visibility into potential solutions available to meet their emerging needs. An important part of this process is the tracking of transition outcomes, as we recommended DOD undertake for its technology transition programs in March 2013, and which we have also found lacking at DARPA. 
To improve technology transition planning and outcomes at DARPA, we recommend the Secretary of Defense direct the Director, DARPA, to take the following three actions:

- Oversee assessments of technology transition strategies for new and existing DARPA programs as part of existing milestone reviews used to assess scientific and technical progress, to inform transition planning and program changes, as necessary. Our analysis identified four factors that could underpin these assessments, but the uniqueness of individual DARPA programs suggests that other considerations may also be warranted.
- Increase technology transition training requirements and offerings for DARPA program managers, leveraging existing DOD science and technology training curricula, as appropriate.
- Increase the dissemination of technical data on completed DARPA programs through Open Catalog and other government-sponsored information repositories aimed at facilitating commercialization of technologies.

We provided a draft of this report to DOD for review and comment. In its written comments, which are included in appendix II, DOD partially agreed with our recommendations to oversee assessments of technology transition strategies for DARPA programs and to increase technology transition training requirements and offerings for DARPA program managers. In doing so, DOD agreed with most of the principles contained in our recommendations, but disagreed with the actions we recommended. DOD did not agree with our recommendation to increase the dissemination of technical data on completed DARPA programs. DOD also separately provided technical comments, which we incorporated, as appropriate. DOD agreed that assessments of technology transition strategies, which consider the four factors we identified for transition success, would help inform program decisions by DARPA leadership. 
However, DOD did not agree that such assessments be required at milestone reviews for DARPA programs, citing active participation by the Director, DARPA, in technology transition discussions throughout the life of a program. We agree that leadership is focused on technology transition and holds discussions often; however, we found it difficult to identify transitions—or changes to transition strategies—that arise from these discussions. We believe that these discussions are an inadequate substitute for assessing technology transition strategies as part of the comprehensive program reviews that DARPA already undertakes. Assessing transition strategies at these reviews, as we recommended, would provide the opportunity to coordinate and prioritize transition goals, objectives, and planned actions in the context of scientific and technical developments in the program. By overseeing technology transition strategies separate from these reviews, the Director, DARPA, risks making decisions related to a program’s transition that are not appropriately informed by other important program considerations. Although DARPA asserted that our recommendation runs counter to its current efforts to improve processes and procedures, we found no evidence that processes and procedures were improving. DOD also agreed that technology transition training improves transition planning and outcomes, citing DOD science and technology training curricula as a “rich repository of transition insight.” Yet, despite the value it sees in its own training resources, DOD stated that DARPA program managers’ relatively short tenure leaves few opportunities to expose them to such “generic” training opportunities. Consequently, DOD did not agree that technology transition training requirements should be increased for DARPA program managers and stated that DARPA’s current approach of “tailored curricula focused on a program’s unique transition needs” remained appropriate. 
However, we did not find evidence of such tailored curricula in our review. Instead, we found that DARPA program managers all received the same limited training upon hiring, which was inadequate to consistently position programs for transition success. DOD also stated that DARPA continues to explore opportunities to offer tailored, concise, and streamlined training to its program managers. Therefore, we stand by our recommendation and continue to believe that expanded training opportunities are necessary for achieving better transition outcomes in DARPA programs, and we encourage DOD to capitalize on its existing investments in this area, to the extent possible. Further, DOD did not agree that increased dissemination of technical data on completed DARPA programs was warranted. DOD stated that using multiple information repositories “thins the DOD technology market by spreading it across several venues,” in turn reducing the likelihood that technology providers and potential transition partners will find a match. DOD also stated that it intends to make DTIC the central data storage for all DOD technical activities, including DARPA technologies, and views the use of multiple information repositories as not conducive to improving technology transition outcomes. In our review, we found DARPA’s existing reliance on DTIC limited the chances of the agency’s technologies being identified and selected by potential transition partners, particularly those outside of DOD. We fail to see how increased dissemination of technical data would actually “thin” the DOD technology market. To the contrary, it would provide more portals through which potential partners could gain access. Similarly, in 2013, the White House identified a government-wide need to broaden access to non-sensitive information on government-developed technologies, but improvements remain incomplete. 
Consequently, we continue to believe that DARPA should pursue dissemination of non-sensitive technical data through as many existing government-sponsored outlets as possible, including its own Open Catalog website and DOD TechLink, to improve the likelihood of transition successes in the agency’s programs. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Director, DARPA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report covers the Defense Advanced Research Projects Agency’s (DARPA) (1) effectiveness at transitioning technologies since fiscal year 2010, including identifying the factors that contributed to successful technology transitions, and (2) implementation of Department of Defense (DOD) policies and programs intended to facilitate the transition of technologies. To assess DARPA’s effectiveness at transitioning technologies since fiscal year 2010, including identifying factors that contributed to successful transitions, we requested and reviewed portfolio-level data identifying the names, funding amounts, and technology transition of those DARPA programs successfully completing technology development during fiscal years 2010 through 2014. We confined our analysis to this time frame owing to availability of data from DARPA. 
These data included 150 programs funded under DOD’s budget activities for (1) applied research and (2) advanced technology development that DARPA identified as having completed as planned and producing a substantive technological gain or innovation, regardless of whether that technology or innovation transitioned to an end user. We used these data as the basis for selecting a simple random sample of 10 case study programs—5 that transitioned and 5 that did not transition. In conducting our case study analyses, we reviewed relevant program documentation to identify factors that facilitate transition success. While reviewing these case study programs, we identified inconsistencies between agency portfolio-level transition outcome data and program-level information. DARPA officials stated to us that this was due to the transition status of these programs changing after they had collected the portfolio-level data. As a result, we concluded that DARPA’s portfolio-level data were not sufficiently reliable for the purposes of assessing agency-wide transition rates and outcomes since fiscal year 2010. However, these inconsistencies did not significantly affect our program selections; therefore, these data were sufficiently reliable for the purposes of selecting the 10 case study programs. To identify factors that facilitated technology transition within the 10 selected programs, we analyzed DARPA-provided documentation—including program briefings, memorandums of agreement, broad agency announcements, budget documents, and program completion reports—for selected programs to identify factors that facilitated or precluded their individual transitions. We then conducted a content analysis of these individual factors to identify common themes among the programs, leading us to determine that four significant factors underpinned technology transition outcomes in the programs we reviewed. 
Once we identified these four factors, we developed a rating system to assess the extent to which each factor was present in each of our 10 programs, as supported through our analysis of program documentation. Our measures for each of the four factors were as follows:

Military or commercial demand for the planned technology
- Fully present: Demand for the technology from a potential transition partner existed throughout the program, which would include (1) agreement between DARPA and a military service, DOD laboratory, or other warfighter representatives that a related military capability gap or requirement exists; or (2) a private company identified a commercial demand for the technology or showed an interest in commercializing it.
- Partially present: Potential transition partners indicated to DARPA that they believed a demand existed for the technology, as described above, although their interest was not consistent through the end of the program.
- Not present: The factor did not exist at all, and DARPA appears to have initiated the program without a potential transition partner agreeing that a capability gap or potential commercial use existed at any point during the program.

Linkage to a research area where DARPA has sustained interest
- Fully present: In the years preceding the program’s initiation, at least two related DARPA or other DOD science and technology programs had been completed.
- Partially present: In the years preceding the program’s initiation, at least one related DARPA or other DOD science and technology program had been completed (this was the second DOD science and technology program of its kind).
- Not present: The factor did not exist at all, and this program appears to have no roots in previous similar DARPA or other DOD science and technology programs.

Active collaboration with potential transition partners
- Fully present: Potential transition partners consistently participated in, advised, or otherwise supported the program.
- Partially present: Potential transition partners participated in, advised, or otherwise supported the program, although their involvement was not consistent through the end of the program or did not begin until after the prototype demonstration (relatively late in the program).
- Not present: The factor did not exist at all. The program appears to have lacked assistance from any potential transition partners, or their assistance was very infrequent or insignificant.

Achievement of clearly defined technical goals
- Fully present: Measurable technical goals were set in the program and fully achieved to the satisfaction of DARPA and any transition partner involved in the program, to the extent that one or more had been identified for the technology.
- Partially present: Measurable technical goals were set in the program but met with varying levels of success. The technical successes achieved, however, were sufficient to produce a technology responsive to the interests of a transition partner, to the extent that one or more had been identified for the technology.
- Not present: Measurable technical goals were either not set or not sufficiently met in the program. The level of technical success was not sufficient to produce a technology responsive to the interests of a transition partner, to the extent that one or more had been identified for the technology.

Using this rating system, two GAO analysts analyzed and coded whether each of the four factors was fully present, partially present, or not present in each of the 10 programs we reviewed. Each GAO analyst coded all the constituent items independently, and the two analysts then met to discuss and reconcile the differences between their codings. Following this initial round of coding, another GAO analyst independently verified the accuracy of the coding by reviewing the supporting program documentation. The final assessment reflected the analysts’ consensus based on the individual assessments. 
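The independent-coding-and-reconciliation step described above can be sketched in code. This is a minimal illustrative sketch only: the `reconcile` helper, the program names, and the ratings are hypothetical assumptions, not GAO's actual data or tooling.

```python
# Hypothetical sketch of two analysts independently coding a factor
# across programs, then reconciling disagreements by discussion.
# All names and ratings below are illustrative, not GAO's actual data.

def reconcile(coding_a, coding_b):
    """Return (consensus, disagreements): agreed codings are accepted;
    items where the two analysts differ are flagged for joint discussion."""
    consensus, disagreements = {}, []
    for item, rating_a in coding_a.items():
        if coding_b.get(item) == rating_a:
            consensus[item] = rating_a
        else:
            disagreements.append(item)  # analysts meet to resolve these
    return consensus, disagreements

# Example: two analysts rate one factor ("demand") for three programs,
# using the three-level scale: "fully", "partially", "not" present.
analyst_a = {"Program 1": "fully", "Program 2": "partially", "Program 3": "not"}
analyst_b = {"Program 1": "fully", "Program 2": "not", "Program 3": "not"}

consensus, open_items = reconcile(analyst_a, analyst_b)
print(consensus)   # {'Program 1': 'fully', 'Program 3': 'not'}
print(open_items)  # ['Program 2'] -- resolved in discussion
```

A third, independent verification pass (the second analyst review the report describes) would repeat the comparison against the supporting documentation before the consensus rating is finalized.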
To assess DARPA’s implementation of DOD policies and programs intended to facilitate the transition of technologies, we identified and analyzed information sources including policy instructions, guidance, training materials, and technical data repositories intended to promote technology transition within DARPA, DOD, and the federal government. We also reviewed previous federal directives issued by the Executive Office of the President that were related to technology transition at DARPA. We reviewed DARPA-sponsored reports on technology transition produced in previous years. We reviewed our prior related reports and program information regarding DOD’s technology transition programs and relevant DOD funding information. We reviewed available training, resources, and tools used by DARPA officials to help bring about technology transition. We reviewed the contents of DOD computer systems used to disseminate information on DARPA programs to potential transition partners. We reviewed our prior reports and DOD documentation on DOD transition programs, including the Small Business Innovation Research, Small Business Technology Transfer, and Joint Capability Technology Demonstration programs, among others, to understand the extent to which DARPA participates in these programs. We reviewed the extent to which DARPA uses DOD transition funds, and requested data regarding DARPA’s use of small business funds and its technology transition outcomes, although these data were unavailable for our analysis, as is discussed elsewhere in this report. We also reviewed historical information on DARPA’s use of DOD transition funds available from public sources, including DOD budget documentation. 
To gather additional information in support of our review for both objectives, we conducted interviews with current and former officials responsible for executing, managing, and overseeing transition of DARPA-developed technologies, including representatives of DARPA’s senior leadership and Adaptive Execution Office, program management offices and selected program managers, military services liaisons, and small business program officials. We also interviewed officials from the Office of the Assistant Secretary of Defense for Research and Engineering and DOD’s Office of Small Business Programs. Further, we interviewed officials from selected military requirements and acquisition offices, including the Joint Staff’s Force Structure, Resource, and Assessment directorate; Office of the Deputy Chief of Staff of the Army for Operations, Plans and Training; Office of the Deputy Chief of Staff of the Army for Logistics; Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology; Army Program Executive Offices for Intelligence Electronic Warfare and Sensors and Command, Control, Communications—Tactical; and Marine Corps Systems Command. We also met with staff from selected DOD research centers, including the Air Force Research Laboratory and the Office of Naval Research, and with the Director of Science and Technology curriculum at the Defense Acquisition University. We conducted this performance audit from January 2015 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Diana Moldafsky, Assistant Director; Christopher R. 
Durbin, Analyst in Charge; Emily Bond; Nathan Foster; Aaron M. Greenberg; John Krump; Jean L. McSween; Sean Seales; and Roxanna T. Sun made key contributions to this report.
After the Soviet Union launched the first satellite into orbit in 1957, the U.S. government made a commitment to initiate, rather than react to, strategic technological surprises. DOD relies on DARPA's disruptive innovations to maintain this promise, backed by congressional appropriations of over $2.9 billion in fiscal year 2015 alone. In April 2015, DOD reported that U.S. technological superiority is again being challenged by potential adversaries, and the department has renewed efforts to improve its products. Meanwhile, GAO found deficiencies in DOD's technology transition processes that may hinder these efforts and DARPA's goals. Senate Report 113-176 included a provision for GAO to review DOD's technology transition processes, practices, and results. This report focuses on DARPA and assesses its (1) effectiveness at transitioning technologies since fiscal year 2010, including identifying factors that contribute to successful transitions, and (2) implementation of DOD policies and programs intended to facilitate technology transition. GAO reviewed DARPA programs completed since 2010; identified transition factors by analyzing program documentation for a random sample of 10 cases; reviewed DOD policies; and interviewed DOD officials. Since 2010, the Defense Advanced Research Projects Agency (DARPA) has had success in technology transition—the process of migrating new technologies from the research environment to military users, including Department of Defense (DOD) acquisition programs and warfighters. However, inconsistencies in how the agency defines and assesses its transition outcomes preclude GAO from reliably reporting on transition performance across DARPA's portfolio of 150 programs that were successfully completed between fiscal years 2010 and 2014. These inconsistencies are due in part to shortfalls in agency processes for tracking technology transition. 
Nevertheless, GAO's analysis of 10 selected programs identified four factors that contributed to transition success, the most important being military or commercial demand for the planned technology and linkage to a research area where DARPA has sustained interest. Both of these factors were generally evident at the time a program started, while the other two factors were observed later, once the program was underway. The figure below highlights the four factors. DARPA's implementation of DOD programs intended to foster technology transition has been limited, and neither DOD nor DARPA has defined policies for managing transition activities. DARPA has also largely elected not to participate in DOD technology transition programs, with the exception of federally mandated small business programs, citing challenges in meeting program requirements within DARPA's typical three- to five-year timeframe for executing its research initiatives. Instead, DARPA primarily focuses its time and resources on creating radically innovative technologies that support DOD's warfighting mission and relegates technology transition to a secondary priority. DARPA leadership defers to its program managers to foster technology transition, but provides limited related training. Moreover, while its leadership conducts oversight of program managers' activities through periodic program reviews, these reviews do not regularly assess technology transition strategies. GAO has found that this approach does not consistently position programs for transition success. Further, while DARPA disseminates information on its past programs within DOD, to the public, and among private companies, it does not take full advantage of government-sponsored resources for sharing technical data, which may obscure visibility into its programs and lead to missed transition opportunities. 
DARPA should regularly assess technology transition strategies, refine training requirements, and increase dissemination of technical data for completed programs. DOD did not agree to take GAO's recommended actions, which remain warranted, as discussed in the report.
Decommissioning begins when a licensee has filed documentation with NRC to permanently shut down a reactor and the fuel has been removed. NRC requires decommissioning to be completed within 60 years after a reactor permanently shuts down unless additional time is necessary to protect public health and safety. Licensees choose from two decommissioning methods: immediate decontamination and dismantlement (DECON) or safe storage (SAFSTOR). The DECON method calls for the licensee to remove the radioactively contaminated equipment, structures, and parts of the reactor for shipment to a low-level radioactive waste disposal site or for temporary storage. This process generally takes 5 or more years. Under the SAFSTOR method, the reactor is left for up to 60 years in a state that allows the radioactive components to decay while the reactor is maintained and monitored. Once radioactivity has decreased, the reactor is then dismantled in a way similar to the DECON process. After all of the radioactive material has been removed, and NRC has terminated the reactor’s license, the site can be used for other purposes. Licensees can begin decommissioning a reactor while another reactor at the site is operating. Currently, 36 nuclear power plants have more than one reactor at the site, and six of those plants have one reactor that is in the process of decommissioning. In addition to decommissioning, licensees are also responsible for other postshutdown activities. These activities include the management of spent nuclear fuel—a type of high-level radioactive waste—until it can be transferred to the Department of Energy, which is responsible for providing permanent disposal. Site restoration is another such activity, which includes the cleanup of nonradiological contaminants, such as acids and heavy metals, to restore the power plant site to a condition that is safe for public use. 
However, these activities do not fall within the scope of NRC’s definition of decommissioning or under NRC’s decommissioning oversight authority, and licensees must pay for these costs with funds that are separate from their decommissioning funds. NRC periodically reviews licensees’ decommissioning funds and related licensee data to determine if licensees have provided reasonable assurance that they will accumulate adequate funds for decommissioning. According to NRC guidance, the amount of funds that is considered adequate is established by NRC’s decommissioning formula, which represents the bulk of the funds needed to decommission a specific reactor and is not an estimate of the actual cost. The formula estimates decommissioning costs by reactor type—pressurized water reactor or boiling water reactor—and the reactor’s capacity to generate electricity. The formula is based on two studies, published in 1978 and 1980, that provided information on the technology available at the time, safety considerations, and the probable costs for decommissioning the two types of reactors. NRC codified its decommissioning funding formula in 1988. According to this regulation, the three cost factors identified in the formula—labor, energy, and low-level radioactive waste disposal—are adjusted annually to reflect the effects of inflation. To estimate costs in current year dollars, the labor and energy cost factors are adjusted from the prior year using data from the U.S. Department of Labor’s Bureau of Labor Statistics, while the waste disposal cost factor is adjusted based on actual disposal cost data published by NRC. As part of NRC’s oversight of decommissioning funds, the agency requires licensees to provide decommissioning cost estimates and other information to NRC throughout the life cycle of a nuclear reactor: Initial decommissioning estimate and financial method. 
Since July 1990, NRC has required licensees to report that they have (1) estimated the amount needed for decommissioning, typically using NRC’s decommissioning funding formula, and (2) developed a plan for accumulating these funds by the projected time of permanent shutdown. Since that date, license applicants have been required to submit this information as part of their license application. NRC regulations allow licensees to use one or more methods as part of their plan to accumulate funds, such as prepayment of the entire estimated decommissioning amount, a trust fund that is separate from other licensee assets and accrues earnings based on investments, parent company guarantees, or letters of credit. The most common financial method is a trust fund that is allowed to grow over the life of the reactor and during the decommissioning process. Once licensees contribute funds to a decommissioning trust fund, funds generally cannot be withdrawn for other purposes.  DFS reports. NRC requires licensees to submit DFS reports at least every 2 years while a reactor is operating, and every year once a reactor is within 5 years of permanent shutdown through license termination. Licensees may report the amount of funds estimated to be needed for decommissioning using the decommissioning funding formula or a licensee-generated site-specific cost estimate if it is greater than the formula amount. According to NRC guidance, NRC staff compare two things in reviewing these reports: (1) the licensee’s accumulated funds plus amounts provided by any other methods in the licensee’s plans to accumulate funds as described above and (2) the amount estimated to be needed for decommissioning, which is the greater of an NRC-generated formula estimate or the licensee-generated site-specific cost estimate. If the licensee’s balance is greater than or equal to the estimated amount needed for decommissioning, an NRC reviewer makes a determination of reasonable assurance. 
If the balance is less than the estimated amount needed for decommissioning, the reviewer projects the licensee’s accumulated funds through the decommissioning period to account for any anticipated growth. If the projected amount plus amounts provided by other methods is less than the estimated amount needed for decommissioning and a second reviewer verifies this finding, then NRC may request additional information from the licensee and repeat the process. According to agency guidance, licensees are expected to make adjustments to correct shortfalls in 2 or 5 years, depending on the type of licensee, from when the DFS report in question is submitted. An NRC official told us that the agency determines on a case-by-case basis if additional actions should be taken to assure the agency that the licensee will have adequate decommissioning funds when needed.  Preliminary decommissioning cost estimate. About 5 years prior to a reactor’s projected permanent shutdown, NRC requires licensees to submit a preliminary decommissioning cost estimate that is more detailed than NRC’s decommissioning funding formula. This cost estimate provides NRC with an up-to-date estimate of expected decommissioning costs and an assessment of the major factors that could affect such costs, as well as the licensee’s plans for adjusting decommissioning funding levels if necessary. Major factors include, but are not limited to, the potential for contamination of the site and the decommissioning method the licensee plans to use. NRC guidance calls for staff to compare the preliminary cost estimate with the decommissioning cost estimate generated by the NRC formula. The licensee’s preliminary cost estimate is deemed acceptable if it is equal to or greater than the formula amount. If it is less than the formula amount, NRC informs the licensee that additional information is needed to assure the agency that the licensee will accumulate adequate funds for decommissioning.  Site-specific cost estimate. 
NRC requires licensees to submit a site-specific cost estimate prior to or within 2 years following permanent shutdown; licensees may also develop such estimates earlier at their discretion. The intent of this cost estimate is to provide NRC with a more detailed assessment that incorporates the cost impacts of site-specific factors. Site-specific factors include, but are not limited to, an estimate of the volume of radioactive waste and a summary of costs estimated for each major decommissioning activity. According to NRC guidance, the site-specific estimate may be significantly greater than the minimum amount based on the NRC formula. If the site-specific estimate and formula amount differ, NRC requires licensees to provide information on the basis for the difference. If NRC determines that the information provided is insufficient, an agency official told us that the agency decides, on a case-by-case basis, how many information requests it will make and whether it will consider taking additional actions to assure the agency that the licensee will have adequate decommissioning funds when needed.  License termination plan with updated site-specific cost estimate. Toward the end of decommissioning and at least 2 years before termination of the reactor’s license, NRC requires licensees to submit a license termination plan. In this plan, licensees must estimate the remaining costs of decommissioning. NRC guidance calls for agency staff to review this report to independently verify that a reactor can be decommissioned safely and the license terminated. As part of this review, staff are to compare the estimated remaining costs of decommissioning with the licensee’s funds available for decommissioning. If the available decommissioning funds are less than the estimated remaining costs, the plan must indicate the means the licensee will use for ensuring adequate funds to complete decommissioning. 
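The funding logic running through these reporting milestones, a formula-based minimum adjusted annually for inflation in its labor, energy, and waste-burial cost factors, compared against a licensee's accumulated funds, can be sketched in Python. This is a minimal illustration only: the numeric coefficients, power bounds, and the 2 percent growth rate are assumptions made for demonstration, not figures quoted from the report; the authoritative formula and adjustment factors are set by NRC regulation.

```python
def formula_minimum(reactor_type, thermal_power_mwt,
                    labor=1.0, energy=1.0, burial=1.0):
    """Formula-style minimum decommissioning amount, millions of dollars.
    Coefficients are illustrative placeholders patterned on the structure
    described above: a base amount by reactor type and power level,
    escalated by weighted labor, energy, and waste-burial cost factors."""
    p = min(max(thermal_power_mwt, 1200), 3400)  # assumed power bounds
    if reactor_type == "PWR":
        base = 75 + 0.0088 * p
    elif reactor_type == "BWR":
        base = 104 + 0.009 * p
    else:
        raise ValueError("reactor_type must be 'PWR' or 'BWR'")
    # Annual inflation adjustment: a weighted sum of the three cost factors.
    return base * (0.65 * labor + 0.13 * energy + 0.22 * burial)

def reasonable_assurance(accumulated, other_methods, required,
                         years_remaining=0, real_return=0.02):
    """DFS-style comparison: funds on hand plus other financial methods
    versus the required amount (the greater of the formula estimate and
    any site-specific estimate). If short, project trust-fund growth over
    the years remaining at an assumed real rate of return."""
    if accumulated + other_methods >= required:
        return True
    projected = accumulated * (1 + real_return) ** years_remaining
    return projected + other_methods >= required
```

For instance, a licensee whose trust fund falls short today but who has a decade of operation remaining may still pass the projected comparison, which mirrors why shortfall determinations above depend on anticipated fund growth rather than current balances alone.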
Licensees who choose to invest their decommissioning trust funds are generally required to do so in accordance with standards set by NRC. NRC defers to the Federal Energy Regulatory Commission (FERC) for investment standards for reactors that are owned by public utilities, which constitute about half of the 104 operating reactors. FERC requires the utilities it regulates to invest their decommissioning funds in accordance with several standards. These standards state, among other things, that the fund must be independent of the public utility, its subsidiaries, affiliates, or associates; the public utility may not serve as its own investment fund manager; and the investment manager must exercise the standard of care that a prudent investor would use in the same circumstances. Public utilities are required to submit annual decommissioning fund statements to FERC that summarize the public utility decommissioning fund balances and investments, among other things. For reactors that are not owned by public utilities, NRC regulations set investment standards specifying, for example, that the funds must be held by an independent trustee who adheres to a standard of care required by state or federal law or, in the absence of any such standard, to a prudent investor standard as defined by FERC; investments may not be made in any reactor licensee or in a mutual fund in which 50 percent or more of the fund is invested in the nuclear power industry; and no more than 10 percent of the funds can be indirectly invested in securities of any entity owning or operating a reactor. In response, in part, to GAO’s and the NRC OIG’s recommendations, NRC has taken actions to strengthen its oversight of licensees’ decommissioning funds, including creating guidance for reviewing DFS reports, reevaluating the decommissioning funding formula, and requiring licensees currently decommissioning their reactors to report to NRC the actual costs of decommissioning. 
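The non-public-utility investment standards summarized above lend themselves to a mechanical screen. The sketch below assumes a simple data model of my own devising (the tuple fields and dictionary structure are not NRC terminology): each holding carries a value, a flag for direct investment in a reactor licensee, and, for mutual funds, the fraction invested in the nuclear power industry, while indirect exposure to reactor owners or operators is tallied per entity.

```python
def screen_trust(holdings, indirect_exposure):
    """Flags violations of the three standards described above.
    holdings: list of (value, is_reactor_licensee, nuclear_share) tuples,
      where nuclear_share is a mutual fund's fraction invested in the
      nuclear power industry (use 0.0 for direct securities).
    indirect_exposure: dict mapping a reactor owner/operator's name to
      the dollars of the fund indirectly invested in that entity."""
    total = sum(value for value, _, _ in holdings)
    violations = []
    for value, is_licensee, nuclear_share in holdings:
        if is_licensee:
            violations.append("direct investment in a reactor licensee")
        if nuclear_share >= 0.50:
            violations.append("mutual fund with 50% or more in the nuclear power industry")
    # The 10 percent ceiling applies per entity owning or operating a reactor.
    for entity, dollars in indirect_exposure.items():
        if dollars > 0.10 * total:
            violations.append(f"over 10% of fund indirectly invested in {entity}")
    return violations
```

A screen of this shape is the kind of check NRC could run against licensee investment statements if it collected them; as the report notes, NRC currently neither requires such filings nor evaluates compliance in its DFS reviews.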
However, remaining weaknesses in NRC’s oversight may limit the agency’s ability to ensure that licensees have provided reasonable assurance that they will have adequate funds to decommission their reactors. NRC has taken steps to identify and resolve decommissioning funding shortfalls by creating guidance and other documentation related to criteria for reviewing DFS reports and by using its enforcement process when deficiencies are identified. In 2003, we recommended that NRC establish criteria for taking action when it determines that a licensee is not accumulating sufficient funds. Since then, NRC has developed guidance for reviewing DFS reports that includes criteria for when staff should request additional information from licensees to address shortfalls. NRC has updated this guidance several times based on lessons learned from its DFS report reviews. NRC also documented the approach staff are to use to request additional information from licensees when the agency identified decommissioning shortfalls in 2009 through its DFS reviews. In addition, NRC has used its enforcement process in three cases to address DFS reporting deficiencies since 2009. Agency officials said that such actions were effective in getting the licensees to resolve the issues identified, in part because NRC’s enforcement process provides publicly available information in the event that an apparent violation is identified. In addition, in response to an NRC OIG recommendation, NRC has conducted reviews at licensee offices to verify that the amounts licensees reported to NRC in DFS reports as fund balances match the amounts stated in licensees’ year-end bank statements. The NRC OIG recommended in 2006 that the agency require verification of decommissioning fund balances in order to better ensure that licensees are providing reasonable assurance that they will have the necessary funds. 
NRC documents indicate that from April 2008 through October 2010, NRC officials performed 136 reviews at 35 locations. NRC officials told us that during these reviews they verified that the decommissioning fund balances reported in the bank statements matched the balances reported in the DFS reports, with one exception, and that they did not find any cases where a licensee overreported its fund balance. Furthermore, in response to an NRC OIG recommendation, NRC began reevaluating its decommissioning funding formula in 2009 to determine if it should be updated because of changes in decommissioning technology and the cost of management and disposal of low-level radioactive waste. The NRC OIG recommended in 2000 that the agency consider reassessing the reasonableness of the formula, in part because it was outdated, and reiterated this recommendation in 2006. NRC has not updated its decommissioning funding formula since it was codified in 1988. NRC officials told us that they plan to make a recommendation to agency management in late 2012 about whether an update is warranted based on its evaluation. In commenting on a draft of this report, NRC officials told us that, as part of evaluating the formula, they expect to estimate the lower and upper bounds of the cost of decommissioning based on licensee-generated cost estimates and historical decommissioning costs—thereby creating a range of expected decommissioning costs—and then see how an updated formula fits into this range. Moreover, NRC amended its decommissioning funding regulations in June 2011 to improve decommissioning planning and reduce the likelihood that any currently operating power plant will become a legacy site—a facility with a licensee that cannot complete complex decommissioning work for technical or financial reasons. 
Among other things, the amendments will require licensees of reactors currently undergoing decommissioning to report to NRC the actual costs incurred during decommissioning, specifically, their annual decommissioning expenditures. NRC intends to use these data to assess the adequacy of decommissioning funding after permanent shutdown. These data could be used to determine if the agency’s decommissioning formula estimates the bulk of the funds that licensees will likely need to decommission their reactors. The amendments become effective in December 2012, and licensee reporting of these data is required by March 31, 2013. Even with the actions NRC took to strengthen its oversight, the agency’s ability to ensure that licensees provide reasonable assurance that they will have adequate funds at the time of decommissioning may be limited by several remaining weaknesses in its oversight. Specifically, NRC has not (1) clearly defined what the agency means by the “bulk” of the funds licensees will likely need to decommission and the decommissioning funding formula may not reliably estimate adequate decommissioning costs, (2) always clearly or consistently documented its fund balance review results and may discontinue these reviews, and (3) reviewed licensees’ compliance with investment standards. NRC has not defined what it means by the bulk of the funds licensees will likely need to decommission a reactor. When we compared decommissioning funding formula estimates provided by NRC for 12 reactors with licensees’ site-specific cost estimates calculated for the same reactors, we found that the NRC formula captured from 57 to 103 percent of the costs reflected in each reactor’s site-specific estimate, with 5 of the 12 capturing 76 percent or less (see table 1). 
Even though the formula estimates captured more than 50 percent of the licensee’s site-specific cost estimates for each of the 12 reactors, the wide range of differences between formula and site-specific cost estimates raises a question about whether the formula can reasonably be said to have captured the bulk of decommissioning costs. In addition, for 8 of the 12 reactors, the licensees calculated their site-specific cost estimates less than 7 years before the license was originally due to expire, and their estimates were as much as $362 million more than the formula estimates at that time. It is true that NRC expects that its formula estimate may be less than licensees’ site-specific cost estimates. However, licensees whose formula estimate is significantly less than the site-specific estimate when calculated near the end of their reactors’ operating lives would have fewer years to accumulate a significant amount of decommissioning funds. Overall, 9 of the 12 reactors have had their licenses renewed, which gives these licensees more time to accumulate the decommissioning funds they will likely need. However, without changes to the NRC formula, it is possible that the NRC formula estimates could be significantly less than the licensees’ site-specific cost estimates several years from their new shutdown date. Furthermore, NRC’s decommissioning funding formula may not provide a reliable estimate of adequate decommissioning costs for several reasons. We compared NRC’s formula and the process the agency used to create the formula with GAO’s cost-estimating guide, which compiles cost-estimating best practices drawn from across industry and government and, in doing so, identified several issues that raise additional questions about the quality of the formula. For example, NRC’s decommissioning funding formula substantially met two characteristics of a high-quality formula, but only partially met the other two. 
Specifically, NRC’s supporting documentation for the formula was not thorough enough for us to understand and replicate its derivation. According to our cost-estimating guide, without thorough documentation, NRC cannot reliably explain its rationale for the cost elements that support the formula and formula-generated cost estimates. In addition, NRC did not perform a risk analysis on the formula, which would convey a level of confidence in the likelihood of the formula’s ability to estimate the most likely minimum cost of decommissioning. Without performing a risk analysis on the formula, NRC cannot be assured of the accuracy of the formula because management may not be able to determine a defensible level of contingency reserves that is necessary to cover increased costs such as underestimated labor and waste disposal costs. See appendix II for our detailed assessment of the formula in comparison with the four characteristics identified in our cost-estimating guide. The results of more than one-third of the 136 fund balance reviews that NRC staff performed from April 2008 to October 2010 to verify the amounts in DFS reports were not clearly or consistently documented. Specifically, the results of 49 reviews were not clear because the reviewer either did not check “yes” or “no” or checked both boxes on the one-page form NRC staff used to collect information when indicating whether the original licensee documents were verified to show that the amounts in year-end bank statements matched the amounts in DFS reports (see fig. 1). In other cases, the results were not consistently documented, with some reviewers providing general information on their forms, such as writing “no problem,” while others provided more detailed information, such as providing both the balance in the year-end bank statement and in the DFS report. 
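A written procedure could reduce each review to a simple, checkable record. The sketch below shows one way such a check might look; the field names model the one-page form described above and are my assumptions, not NRC's actual form fields.

```python
def check_review_form(form):
    """Classifies a completed fund-balance review record (field names
    assumed). A well-documented review checks exactly one verification
    box and records both the year-end bank statement balance and the
    DFS report balance."""
    checked = int(bool(form.get("verified_yes"))) + int(bool(form.get("verified_no")))
    if checked != 1:
        return "unclear: neither or both boxes checked"
    bank = form.get("bank_balance")
    dfs = form.get("dfs_balance")
    if bank is None or dfs is None:
        return "incomplete: balances not recorded"
    if bank != dfs:
        return "mismatch: bank statement and DFS report differ"
    return "documented: balances verified"
```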
As of October 2011, NRC did not have written procedures describing the steps that staff should take in analyzing licensee documentation and documenting review results on the one-page form, which likely contributed to NRC staff not always documenting the results of the reviews clearly or consistently. We have previously reported that written procedures help ensure consistency within an organization. Under Standards for Internal Control in the Federal Government, federal agencies are to clearly document internal control—the policies, procedures, techniques, and mechanisms that enforce management’s directives—and the documentation is to be readily available for examination. In addition, NRC officials told us that management was considering recommending that the agency discontinue the reviews. If NRC discontinues these reviews, the agency will no longer have a mechanism for verifying the accuracy of licensee fund balances in their DFS reports and will no longer address the 2006 NRC OIG recommendation to verify licensee balances to better ensure that licensees are providing reasonable assurance that they will have the necessary funds for decommissioning. NRC officials told us that the reasons they may discontinue the reviews are a lack of findings and budget constraints. However, according to our analysis of the results of the 136 reviews, it is unclear whether NRC’s conclusion of a lack of findings is accurate. In addition, an NRC official told us that these reviews could be incorporated into the DFS review process, thereby eliminating the cost of travel to a licensee’s office, potentially mitigating budget constraint concerns. NRC has not reviewed licensees’ compliance with the investment standards the agency has set for decommissioning funds. NRC does not require licensees to file statements showing how their decommissioning funds are invested, and NRC’s DFS review process does not include an evaluation to ensure that licensees comply with these investment standards. 
As a result, NRC cannot confirm that licensees are avoiding conditions described in the standards, such as investing in other licensees. According to two stakeholders involved in decommissioning fund management and investment consulting, a small but growing number of licensees are considering investing in hedge funds as a way of improving returns on their investments and managing market volatility. As we have stated in the past, hedge funds pose a number of risks and challenges beyond those posed by traditional investments. NRC officials told us that their staff resources are limited and that they lack the financial expertise to evaluate compliance with investment restrictions. For public utility licensees, NRC officials stated that they coordinate informally with FERC in cases where potential funding shortfalls or problems arise. FERC officials told us that they review licensee compliance with the standards only if a problem with a licensee’s decommissioning trust fund is brought to the agency’s attention, which would mean that most licensees’ compliance with the standards would not be reviewed. Without awareness of the nature of licensees’ investments, NRC cannot determine whether it needs to take action to enforce the standards. NRC ensures that licensees have provided reasonable assurance that they will have adequate funds to decommission their reactors by periodically reviewing licensees’ decommissioning funds and related licensee data. Consistent with its mission to protect the public and environment from the effects of radiation, NRC has taken steps to strengthen its oversight of licensees’ decommissioning trust funds. NRC, for example, amended its decommissioning funding regulations to improve decommissioning planning and reduce the likelihood that any currently operating power plant will become a legacy site. 
In addition, NRC began reevaluating its decommissioning funding formula in 2009 to determine if it should be updated because of changes in decommissioning technology and the cost of management and disposal of low-level radioactive waste. NRC officials plan to make a recommendation to management in late 2012 about whether an update is warranted based on this evaluation. However, weaknesses remain in NRC’s oversight of decommissioning funds that could leave the public and environment vulnerable. For example, NRC has not defined what it means by the bulk of funds that the decommissioning funding formula is supposed to estimate, and we found a wide range of differences between NRC’s decommissioning funding formula estimates and some licensees’ site-specific cost estimates. This raises questions about the reliability of the formula as an estimate of the minimum amount needed for decommissioning. In addition, the agency did not have thorough documentation that would enable us to understand and replicate the derivation of its formula and did not perform a risk analysis on the formula, raising questions about the quality of the cost estimates used to create the decommissioning formula. Without a definition of what NRC means by the bulk of decommissioning costs and without high-quality estimates of these costs, it is unclear how NRC can determine if the formula is performing as intended or that licensees will have adequate decommissioning funds when necessary. In addition, NRC does not have written procedures describing the steps that staff should take in their reviews analyzing licensee documentation and verifying that the amounts licensees report to NRC in their DFS reports match the amounts reported on their year-end bank statements, a fact that likely contributed to the results of the reviews not always being clearly or consistently documented. However, NRC may discontinue these reviews, which the agency undertook in response to a 2006 NRC OIG recommendation. 
Without conducting these reviews, NRC will not have an accountability mechanism for ensuring that the amounts reported in DFS reports match the amounts shown in licensees’ year-end bank statements. Finally, NRC has not reviewed licensees’ compliance with the investment standards it has set for decommissioning funds. Therefore, the agency cannot confirm that licensees are avoiding conditions described in the standards that could put decommissioning funds at risk. Without awareness of the nature of licensees’ investments, NRC cannot determine whether it needs to take action to enforce decommissioning investment standards. To further strengthen NRC’s oversight of decommissioning funding assurance, we recommend that the NRC Commissioners take the following five actions:

- Ensure reliability as part of the agency’s process of reevaluating its decommissioning funding formula, by
  - defining what the agency means by the “bulk” of the funds that licensees will likely need to decommission their reactors and
  - using the cost-estimating characteristics as a guide for a high-quality cost-estimating formula in the event that NRC chooses to update the formula.
- Better ensure that licensees are providing reasonable assurance that they will have the necessary funds and improve the consistency of information the agency collects by
  - documenting procedures describing the steps that staff should take in their reviews analyzing licensee documentation and verifying that the amounts licensees report to NRC in their DFS reports match the balances on their year-end bank statements and
  - continuing these reviews of fund balances in a way that is most efficient and effective for the agency.
- Consider reviewing a sample of licensees’ investments to determine if licensees are complying with decommissioning investment standards and determine whether action should be taken to enforce these standards.

We provided a draft of this report to NRC for review and comment. 
NRC provided written comments, which are presented in appendix III, and technical comments, which we incorporated in the report as appropriate. NRC agreed with three of our recommendations, disagreed with one recommendation, and partially agreed with another recommendation. Specifically, NRC agreed with our recommendations that the agency (1) document procedures describing the steps that staff should take in their reviews analyzing licensee documentation and verifying that the amounts licensees report to NRC in their DFS reports match the balances on their year-end bank statements; (2) continue these reviews of fund balances in a way that is most efficient and effective for the agency; and (3) consider reviewing a sample of licensees’ investments to determine if licensees are complying with decommissioning investment standards and determine whether action should be taken to enforce these standards. However, NRC disagreed with our recommendation that, when the agency reevaluates its decommissioning funding formula, it define what it means by the “bulk” of the funds that licensees will likely need to decommission their reactors. In its comments, NRC stated that, in view of the comprehensiveness of the agency’s regulatory system, a precise definition of the meaning of “bulk” is not necessary to ensure that licensees adequately plan for decommissioning costs. We did not recommend that NRC provide a precise definition but we continue to believe that a definition is necessary. As we noted in our draft report, without a definition of what the agency means by bulk it is unclear how NRC can determine if the formula is performing as intended or if licensees will have adequate decommissioning funds when necessary, especially given the wide range of differences we identified when we compared formula-based and site-specific cost estimates. 
NRC suggested that we revise our recommendation to state that NRC's reevaluation of the formula consider the relationship between the formula amount and the range of expected decommissioning costs. This approach could be appropriate, as long as NRC states what the relationship between the formula and the range should be. According to NRC officials, the agency has not yet developed this range of expected decommissioning costs. Officials explained that, as part of its process of reevaluating the formula, the agency expects to estimate the lower and upper bounds of the range of expected decommissioning costs based on licensee-generated cost estimates and historical decommissioning costs and will determine how an updated decommissioning funding formula fits into this range. We believe such an analysis could help the agency better define the bulk of funds licensees should accumulate to ensure adequate funds for decommissioning. In response to this comment, we modified the report to include information about the range of expected decommissioning costs NRC plans to develop, but did not revise the recommendation. Finally, NRC partially agreed with our recommendation that the agency use the cost-estimating characteristics as a guide for a high-quality cost-estimating formula in the event that NRC chooses to update the formula as part of ensuring reliability during the process of evaluating its decommissioning funding formula. NRC agreed that the decommissioning funding formula should provide a credible and well-documented basis for establishing the minimum amount of funding needed to plan for the costs of decommissioning a reactor, but disagreed that the formula is the appropriate tool for achieving the characteristics of comprehensiveness and accuracy in estimating decommissioning costs. NRC commented that the formula was not intended to provide a cost estimate but rather to provide a reference level for licensees as a planning tool early in a reactor's life.
We disagree that the formula is not a cost estimate. As we noted in our draft report, NRC considers the formula to be the minimum amount needed by licensees to decommission their reactors; we believe that this meets the definition of a cost estimate. NRC further commented that the agency believes that it achieves the characteristics of comprehensiveness and accuracy by requiring a licensee to provide an updated, plant-specific cost estimate late in a plant's life. We recognize that the plant-specific cost estimate that NRC requires can draw on additional information to help achieve characteristics of a high-quality cost-estimating formula. However, this requirement does not address the quality of the formula. The formula needs to be appropriately accurate and comprehensive for its intended purpose. As we noted in our draft report, licensees typically use the formula to meet NRC's requirement to report an initial decommissioning cost estimate in their license application, and NRC uses the formula to determine if there is reasonable assurance that licensees will have adequate decommissioning funds as part of the DFS report review process. We recognize that NRC is in the process of reevaluating its more than 30-year-old formula to determine if the formula should be updated to reflect changes in decommissioning technology and costs. We believe that an updated formula that reflects these changes and has the characteristics of a high-quality cost-estimating formula could help to ensure that NRC's decommissioning funding formula is appropriately accurate and comprehensive. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of NRC, appropriate congressional committees, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV. To describe how the Nuclear Regulatory Commission (NRC) ensures that reactor owners (licensees) provide reasonable assurance of adequate decommissioning funds, we reviewed relevant regulations, including Reporting and Record Keeping for Decommissioning Planning, and guidance documents, such as Procedures for NRC’s Independent Analysis of Decommissioning Funding Assurance for Operating Nuclear Power Reactors. We also reviewed GAO and NRC Office of the Inspector General (OIG) reports on decommissioning funding assurance and interviewed NRC officials from the Office of Nuclear Reactor Regulation and OIG to better understand the agency’s oversight of decommissioning funds. To identify any improvements or weaknesses in NRC’s oversight of decommissioning funding assurance, we analyzed NRC’s decommissioning funding formula and the agency’s reviews of licensee decommissioning funding status (DFS) reports. To analyze NRC’s decommissioning funding formula, we compared NRC formula-generated cost estimates with licensee- generated site-specific cost estimates for 12 nuclear reactors for which we were able to obtain both types of estimates that were calculated in the same year. We also compared NRC’s formula and the process the agency used to develop the formula with GAO-identified best practices for cost estimating, and reviewed documents used to create the formula. To ensure our understanding of how the formula was developed and how it is used, we interviewed NRC officials and staff of the Pacific Northwest National Laboratory (the contractor NRC used to create the formula). 
To analyze NRC’s reviews of licensee DFS reports, we analyzed data from reactor licensees’ 2011 DFS reports for each of the operating reactors and for currently decommissioning reactors. These reports reflect estimated decommissioning costs and actual decommissioning fund balances as of December 31, 2010, among other things. We assessed the reliability of the data we used by interviewing NRC officials to identify the steps the agency uses to verify the data and by interviewing several licensees to identify the steps they take to ensure that the data they provide are reliable. In our assessment of the data, we determined these data were sufficiently reliable for our purpose of identifying the number of licensees who had not reported specific data in the 2011 DFS reports. We also reviewed the results of the reviews NRC performed at licensee offices from April 2008 through October 2010 comparing licensees’ DFS reports with year-end bank statements. In addition, we analyzed relevant Federal Energy Regulatory Commission (FERC) regulations governing decommissioning trust funds, because FERC oversees public utility financial reporting and about half of the 104 operating reactors are owned by public utilities. To better understand issues related to decommissioning nuclear power reactors in general, we interviewed officials from other federal agencies (such as from FERC and the Department of Energy), a decommissioning cost estimator, nongovernmental organizations, nuclear power industry groups, licensees of nuclear power reactors, and decommissioning fund stakeholders—a fund trustee and two investment advisors—who have knowledge of nuclear reactor decommissioning or are involved with it. We identified the trustee through licensee interviews and one investment advisor through a March 2011 NRC public decommissioning workshop that we attended. We also attended the 23rd annual NRC Regulatory Information Conference held in March 2011.
In addition, we visited five nuclear power plants—Haddam Neck (Connecticut Yankee) in Connecticut, Indian Point in New York, Peach Bottom Atomic Power Station and Three Mile Island Nuclear Station in Pennsylvania, and Enrico Fermi Atomic Power Plant in Michigan—interviewed licensee officials there, and toured the facilities. The five sites we visited were a nonprobability sample that we selected to include a mix of fully decommissioned, currently decommissioning, and operating reactors. Because we used a nonprobability sample, the information obtained from these site visits is not generalizable to other reactors. To select these sites, we considered sites that were a mixture of types of reactors, types of ownership, and types of decommissioning methods used, as well as reactors that are operating, currently decommissioning, or fully decommissioned. In addition to these criteria, we considered sites that were close to GAO headquarters in Washington, D.C., for cost-saving purposes. The exception was the Enrico Fermi Atomic Power Plant in Michigan. We visited this site because it has the closest currently decommissioning reactor using the immediate decontamination and dismantlement (DECON) method. We also interviewed relevant state agency officials (e.g., the Pennsylvania Public Utility Commission and Michigan Department of Environmental Quality) in the states where we conducted our site visits to better understand their roles in the decommissioning process. We conducted this performance audit from February 2011 to April 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Table 2 compares NRC’s decommissioning funding formula with the four characteristics of a high-quality cost-estimating formula from our cost-estimating guide. In addition to the individual named above, Karen Jones, Assistant Director; Karen Richey, Assistant Director; Josey Ballenger; Bernice Dawson; Jennifer Echard; Jonathan Kucskar; Robin Marion; Katya Melkote; Cynthia Norris; Michelle K. Treistman; and Vanessa V. Welker made significant contributions to this report. Mehrzad Nadji and Anne Rhodes-Kline also made important contributions to this report.
About 20 percent of U.S. electricity is generated by 104 nuclear reactors. NRC, which regulates reactors, requires their owners (licensees) to reduce radioactive contamination after reactors permanently shut down. This process, called decommissioning, costs hundreds of millions of dollars per reactor. NRC requires licensees to provide reasonable assurance that they will have adequate funds to decommission, in part, by accumulating funds that are greater than or equal to NRC’s decommissioning funding formula. GAO and NRC’s OIG have identified concerns about NRC’s oversight of decommissioning funds. GAO was asked by Representative Markey in his former capacity as Chairman of the House Subcommittee on Energy and Environment to (1) describe how NRC ensures that licensees provide reasonable assurance of adequate decommissioning funds and (2) identify any improvements or weaknesses in NRC’s oversight of this area. GAO analyzed NRC’s formula and reviews of licensee information and interviewed NRC officials, licensees, and others. The Nuclear Regulatory Commission (NRC) periodically reviews licensees’ decommissioning funds and related licensee data to determine if licensees have provided reasonable assurance that they will accumulate adequate funds for decommissioning. For example, licensees must submit estimates to NRC of decommissioning costs throughout the life of the reactor and submit fund status reports at least every 2 years while the reactor is operating. Licensees typically accumulate such funds over time through trust fund investments. The minimum amount of funds considered adequate is established by NRC’s decommissioning funding formula, which is based on information collected more than 30 years ago. 
NRC has taken actions to strengthen its oversight of licensees’ decommissioning funds by (1) creating guidance and other documents related to criteria for reviewing licensees’ 2-year reports and by using its enforcement process when deficiencies are identified, (2) conducting reviews at licensee offices to verify that fund balances licensees reported in their 2-year reports match their year-end bank statements in response to a 2006 NRC Office of the Inspector General (OIG) recommendation, (3) reevaluating the decommissioning funding formula to determine if it should be updated, and (4) improving decommissioning planning. However, several weaknesses may limit NRC’s ability to ensure that licensees have provided reasonable assurance. Specifically:

- NRC’s formula may not reliably estimate adequate decommissioning costs. According to NRC, the formula was intended to estimate the “bulk” of the decommissioning funds needed, but the term “bulk” is undefined, making it unclear how NRC can determine if the formula is performing as intended. In addition, GAO compared NRC’s formula estimates for 12 reactors with these reactors’ more detailed site-specific cost estimates calculated for the same period. GAO found that for 5 of the 12 reactors, the NRC formula captured 57 to 76 percent of the costs reflected in each reactor’s site-specific estimate; for the other 7, it captured 84 to 103 percent.
- NRC staff did not clearly or consistently document the results of more than one-third of the fund balance reviews they performed from April 2008 to October 2010 to verify that the amounts in the 2-year reports match year-end bank statements. As an example of inconsistent results, some reviewers provided general information, such as “no problem,” while others provided more detail about both the balance in the year-end bank statement and the 2-year report. As of October 2011, NRC did not have written procedures describing the steps that staff should take for conducting these reviews, which likely contributed to NRC staff not always documenting the results of the reviews clearly or consistently.
- NRC has not reviewed licensees’ compliance with the investment standards the agency has set for decommissioning trust funds. These standards specify, among other things, that fund investments may not be made in any reactor licensee or in a mutual fund in which 50 percent or more of the fund is invested in the nuclear power industry. As a result, NRC cannot confirm that licensees are avoiding conditions described in the standards that may impair fund growth. Without awareness of the nature of licensees’ investments, NRC cannot determine whether it needs to take action to enforce the standards.

GAO recommends, among other things, that NRC define what it means by the “bulk” of the funds needed for decommissioning and consider reviewing a sample of licensees’ investments to determine if they comply with standards. NRC agreed to consider reviewing a sample of investments, but disagreed that defining bulk is needed because of the comprehensiveness of NRC’s regulatory system. GAO continues to believe that this definition is needed.
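The formula-versus-site-specific comparison GAO describes is, at its core, a ratio calculation: the share of each reactor's site-specific estimate that the formula amount covers. The sketch below illustrates that calculation in Python; the reactor names and dollar figures are hypothetical, chosen only to reproduce the kind of 57-to-103-percent spread the report describes.

```python
# Illustrative sketch of GAO's percent-captured comparison between
# NRC formula-based estimates and licensee site-specific estimates.
# All reactor names and dollar amounts below are hypothetical.

estimates = {
    # reactor: (formula_estimate_$M, site_specific_estimate_$M)
    "Reactor A": (400, 700),   # formula captures roughly 57%
    "Reactor B": (450, 600),   # roughly 75%
    "Reactor C": (520, 505),   # the formula can also exceed the site estimate
}

def percent_captured(formula, site_specific):
    """Share of the site-specific estimate covered by the formula amount."""
    return 100.0 * formula / site_specific

for reactor, (formula, site) in estimates.items():
    pct = percent_captured(formula, site)
    flag = "below" if pct < 100 else "at or above"
    print(f"{reactor}: formula captures {pct:.0f}% of site-specific estimate ({flag} 100%)")
```

A spread this wide is why an undefined "bulk" matters: without a stated target share, no single percentage in the output can be judged as passing or failing.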
The DHP budget estimates submitted to the Congress consist of all the O&M and procurement resources needed to support DOD’s consolidated medical activities. According to DOD, the budget estimates are based on the continued refinement and application of a managed care strategy and methodology used to produce DOD’s health care services for eligible beneficiaries. Operating under the Assistant Secretary of Defense (Health Affairs), TMA is responsible for formulating the DHP budget request and for managing DOD’s CHAMPUS and MCS contracts. The Surgeons General of the Army, Navy, and Air Force are responsible for the budget execution of decentralized medical activities such as direct MTF patient care. The DHP O&M budget request consists of a single budget activity—administration and servicewide activities. Each year, DOD provides detailed DHP budget information to the Congress in “justification materials” that show amounts requested for each of the 7 subactivities that encompass 34 program elements (see table 1). While the Congress appropriates DHP O&M funds as a single lump sum, its budget decision is based on the DHP budget request presented at the subactivity and program element levels. Since 1994, the Congress has generally appropriated more for DHP O&M expenses than DOD requested (see fig. 1). Committee reports may specify relatively small amounts of funding for such items as breast cancer and ovarian cancer research, which DOD then obligates through the appropriate account in accordance with congressional direction. Other than the funds specifically earmarked by the Congress, DOD has the latitude to allocate its congressional appropriation as needed to meet estimated subactivity and program element requirements. Between 1994 and 1999, DOD allocated most appropriations to direct care (primarily MTF patient care) and to purchased care (primarily CHAMPUS and MCS contracts). 
Table 2 shows the allocation of DHP appropriations by subactivity (see tables I.1 and I.2 for detailed information on DHP budget requests, budget allocations, and actual or currently estimated obligations between fiscal years 1994 and 1999). The Congress appropriated $48.9 billion for DHP O&M expenses between fiscal years 1994 and 1998. During budget execution, DOD obligated about $4.8 billion differently—as either increases or decreases—from its budget allocations for the various subactivities (see table 3). Obligations differed particularly for the direct care and purchased care subactivities. However, the magnitude of the funding adjustments has diminished in recent years, dropping to about $283 million in fiscal year 1998 from a peak of almost $1.5 billion in fiscal year 1995. Because the Congress makes a lump-sum appropriation, under DOD regulations and informal arrangements with the Congress, these adjustments did not require congressional notification or approval. The largest funding adjustments occurred in the direct care and purchased care subactivities. Between 1994 and 1998, DOD allocated $21.2 billion from the final DHP appropriation for purchased care but obligated only $19.1 billion, allowing DOD to reallocate $2.0 billion into such areas as direct patient care, information management, and base operations. For example, between 1994 and 1995, DOD increased obligations for direct care at MTFs by $876.3 million above the allocation. Between 1994 and 1996, DOD obligated about $289.5 million more than it had allocated for the information management subactivity. Also, funding for the base operations subactivity—which includes such items as repairs and maintenance on MTF facilities—received an increase of $479.6 million over the budget allocation between 1994 and 1997. (Table I.4 details the funding increases and decreases for each subactivity and program element between fiscal years 1994 and 1998.) 
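The $4.8 billion in funding adjustments described above is a tally of the differences, whether increases or decreases, between allocations and obligations across subactivities. A minimal sketch of that tally, using invented subactivity figures rather than the actual amounts in tables I.1 through I.4:

```python
# Hypothetical allocation vs. obligation figures by subactivity
# ($ millions), illustrating how total funding adjustments are tallied
# as the sum of absolute differences. Figures are invented.

budget = {
    # subactivity: (allocated, obligated)
    "Direct care":     (5_000, 5_400),  # obligated more than allocated
    "Purchased care":  (4_300, 3_900),  # obligated less than allocated
    "Base operations": (1_200, 1_250),
}

adjustments = {
    name: obligated - allocated
    for name, (allocated, obligated) in budget.items()
}
# Both increases and decreases count toward the total shifted.
total_shifted = sum(abs(delta) for delta in adjustments.values())

for name, delta in adjustments.items():
    direction = "increase" if delta > 0 else "decrease"
    print(f"{name}: {direction} of ${abs(delta)}M versus allocation")
print(f"Total funds obligated differently from allocations: ${total_shifted}M")
```

Note that because funds simply move among subactivities within the lump-sum appropriation, a decrease in one subactivity typically shows up as increases elsewhere, as with the purchased-care reductions funding direct care in the period GAO examined.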
In each year between 1994 and 1998, DOD’s budget allocation for purchased care—which provided funds for CHAMPUS, the now-terminated CHAMPUS Reform Initiative contracts, and MCS contracts—exceeded obligations, as shown in figure 2. At the program element level, the largest adjustments within the purchased care subactivity occurred between 1994 and 1996, when DOD obligated $1.4 billion less than the budget allocation for the CHAMPUS program element (see table I.4 and fig. 3). In contrast, MCS contract budget allocations more closely matched obligations through 1996, when DOD implemented two of the then four awarded MCS contracts on time. In 1997 and 1998, however, when implementation of the last three contracts was delayed, MCS budget allocations exceeded obligations by $990 million. Because of the delays in starting up these contracts, most of the unobligated MCS contract funds were used to defray higher than anticipated CHAMPUS program obligations. According to DOD officials, between 1994 and 1998, DOD-wide budget pressures and major program changes—such as downsizing and the rollout of TRICARE managed care reforms—made it difficult to estimate and allocate resources between direct care and purchased care budgets. They emphasized that while they are directly responsible for appropriation amounts at the lump-sum level, they have flexibility to manage the health care delivery system. Therefore, in executing the DHP appropriation funds for patient care, such funds may flow from direct care to purchased care and vice versa. They believe this flexibility is critical to efficiently managing the military health care delivery system. DOD officials cited several interrelated reasons why DHP obligations differed from DOD’s budget allocations between fiscal years 1994 and 1998. These reasons also suggest why “shortfalls” in recent DHP budget requests have prompted congressional concerns about the process DOD uses to estimate and allocate the DHP budget. 
TMA, Health Affairs, and service budget officials made various internal budget policy choices that included a DHP budget strategy to fully fund purchased care activities within available funding levels. This strategy, coupled with general budget pressures, left less money with which to budget direct care and other DHP subactivity requirements (such as information management and base operations). To keep within the DOD-wide spending caps, the officials intentionally understated requirements for direct care and other subactivities in the DHP budget requests submitted to the Congress. This pattern of policy choices, which led budget officials to underestimate direct care budget requirements, is underscored by the congressional testimonies by the Assistant Secretary of Defense (Health Affairs) and the service Surgeons General—all of whom identified shortfalls in the past 3 years of DHP budget requests, 1997 through 1999. The shortfalls—that is, the difference between the Assistant Secretary’s and the Surgeons General’s views of their needs and the President’s budget submission—have raised congressional concerns over DHP budget requests and prompted both DOD and the Congress to offset the shortfalls in various ways (see table 4). In addition, TMA and service officials told us they have relied on DHP’s flexibility during budget execution to fund direct patient care with funds available and not needed for CHAMPUS and MCS contracts. TMA officials told us that forecasting health care costs for budgeting purposes is inherently challenging because the budget year starts about 18 months after DOD starts preparing DHP budget estimates and 8 months after the President submits the DHP budget request to the Congress. They commented that many conditions change, affecting their direct and purchased care estimates over these protracted periods. 
In our view, however, these comments do not explain the often large differences that have occurred between budget allocations—which are established after the congressional appropriation is actually received—and obligations, which follow almost immediately thereafter. DOD has the flexibility to allocate most of its congressional appropriations as needed among the various DHP subactivities. Despite this flexibility and even taking into account the minor impacts of other adjustments to DHP’s allocated budget amounts such as supplemental appropriations or reprogrammings, DHP obligations still varied significantly from the budget allocations reported to the Congress, calling into question DOD’s methods for estimating DHP budget requirements. TMA and Health Affairs budget officials told us that the DHP beneficiary population is largely undefined, leading to budget uncertainty. According to these officials, DOD has little control over where beneficiaries go to get their health care because MTFs and MCS contractors do not enroll most beneficiaries. TMA officials stated that, in formulating the DHP budget request, separate cost estimates for MTFs and MCS contracts are based on the best available information at the time. Although service officials told us they had developed higher direct care budget estimates—which TMA nonetheless chose to underfund in the final DHP budget requests—one official told us that the nonenrolled beneficiary population is a major impediment to submitting realistic DHP budget requests. Moreover, DOD’s capitation method (allocating MTF budgets on the basis of the number of estimated users of the military health system) has not kept pace with MTF cost increases for space-available medical services and outpatient prescription drugs provided to nonenrolled beneficiaries. Others have noted similar concerns about the lack of a clearly defined beneficiary population and the effect on DHP budgeting uncertainties.
For example, in a 1995 report, the Congressional Budget Office (CBO) raised concerns that, even with TRICARE Prime’s lower cost-sharing features providing incentives, not enough beneficiaries would enroll, and DOD would continue to have difficulties planning and budgeting. For DOD to effectively predict costs and efficiently manage the system, CBO concluded that DOD would need a universal beneficiary enrollment system to clearly identify the population for whom health care is to be provided. CBO also noted that even under TRICARE, beneficiaries can move in and out of the system as they please, relying on it for all, some, or none of their care. DOD would have to continue its reliance on surveys to estimate how many beneficiaries use direct care and purchased care and to what extent DOD is their primary or secondary source of coverage. In previous reports, we also raised concerns about the budgetary uncertainties caused by less-than-optimal enrollment. Moreover, we estimated that, at the end of fiscal year 1998, less than half of the 8.2 million DOD-eligible beneficiaries were enrolled. Thus, DOD’s budgeting uncertainties stem, in large measure, from its lack of a universal enrollment requirement. Higher than expected MTF costs in fiscal years 1994 and 1995 were given as another reason that DHP obligations differed from budget allocations, according to TMA, Health Affairs, and service officials. The budget savings projected to result from base closures (and reflected in their requests) were not achieved. Therefore, although the number of MTFs decreased by 9.5 percent between 1994 and 1998, DOD wound up obligating $726 million more for direct care than the amount allocated (see fig. 4). One service official told us that despite MTF downsizing, the number of beneficiaries going to MTFs has not dropped, thus sustaining a high level of demand for MTF health care.
But MTF inpatient and outpatient workload data reported to the Congress in DOD’s annual justification materials indicate that MTF inpatient and outpatient workload declined by 54.5 percent and 26 percent, respectively, between 1994 and 1998. However, DOD and TMA officials cautioned us that the MTF workload data are not accurate. Yet, a May 1998 DOD Inspector General audit report (on the extent to which managed care utilization management savings met Health Affairs’ expectations as reflected in its DHP budgets) found a significant reduction in inpatient and outpatient workload at 15 large MTFs from fiscal year 1994 through 1996, but no corresponding decrease in operating costs. DOD’s Inspector General attributed the cause to MTFs generally increasing their military medical staffing and infrastructure costs (real property maintenance, minor construction, and housekeeping). And, according to the Inspector General, it is especially difficult to reduce operating costs when workload is declining unless military medical staffing is also decreased. TMA, Health Affairs, and service officials also told us that several interrelated factors had made purchased care obligations significantly lower than the allocated amounts between 1994 and 1998. First, they did not fully account for savings from rate changes in the CHAMPUS maximum allowable charge (CMAC) for physician payments. DOD officials told us that during this period, CHAMPUS budget requests and allocations did not account for $408 million to $656 million in estimated 3-year CMAC savings between 1994 and 1996. For fiscal years 1997 to 1998, DOD has estimated that CMAC saved $1.5 billion in CHAMPUS and TRICARE contract costs. Given that DHP purchased care budget requests and allocations track more closely with obligations in 1997 and 1998, it appears TMA better accounted for CMAC savings. Second, DOD officials cited a factor related to their budget strategy of conservatively estimating purchased care costs.
After an earlier history of CHAMPUS budget shortfalls, DOD changed its budget strategy from not fully funding CHAMPUS to ensuring CHAMPUS was fully funded. However, they noted that an actuarial model for projecting CHAMPUS costs, which was used to formulate the budget requests for fiscal years 1994 through 1996, greatly overestimated CHAMPUS requirements. Finally, with the CHAMPUS phase-out and the switch to MCS contracts, TMA and Health Affairs officials cited the need to fully fund these contracts in their budget request. According to these officials, their MCS budgeting strategy was essentially driven by the concern that if there were not enough funds allocated for the MCS contracts, an Antideficiency Act violation could occur. We do not see, however, how requesting the amount of funds DOD anticipates the contracts will actually cost could trigger an Antideficiency Act violation. Budget requests, even where they fail to fully fund an activity, do not cause such violations. One of the ways an Antideficiency Act violation could occur is if DOD continued to pay additional amounts under the contract and overobligated or overexpended the appropriation or fund account related to the contract. In such a case, the proper response would be to reprogram funds and/or seek additional appropriations in advance of any such potential deficiency. In other words, should funds allocated for the MCS contracts appear to be inadequate, DOD would find itself in essentially the same position as any agency that anticipates running short of funds. Only if DOD officials continued to make additional payments under the contract knowing that appropriations for them were not available would there be an Antideficiency Act violation. 
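The Antideficiency Act condition described above reduces to a simple comparison: a violation arises only if recording an obligation (or making an expenditure) would push the account beyond the amount appropriated, not from the size of the original budget request. A minimal sketch of that check, with hypothetical dollar figures:

```python
# Simplified sketch of the Antideficiency Act check discussed above:
# obligating beyond the available appropriation is the violation, not
# submitting a budget request that underfunds an activity. All dollar
# figures are hypothetical.

def would_violate(appropriated, obligations_so_far, new_obligation):
    """True if recording new_obligation would exceed the appropriation."""
    return obligations_so_far + new_obligation > appropriated

appropriated = 1_000  # $ millions available in the account
obligated = 950       # $ millions already obligated

# A contract payment that still fits within the appropriation: no violation.
assert not would_violate(appropriated, obligated, 40)

# Continuing to pay beyond available funds would be a violation; the
# proper response beforehand is to reprogram funds or seek additional
# appropriations, as the report notes.
assert would_violate(appropriated, obligated, 60)
```

This is why GAO concludes that merely requesting the amount DOD expects the MCS contracts to cost cannot itself trigger a violation: only the act of overobligating or overexpending the account does.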
Looking ahead, DOD officials pointed out that the amount of funds shifted between DHP subactivities had fallen in 1997 and 1998, and they anticipated that volatility within the purchased care subactivity would also decrease now that all seven MCS contracts have been implemented. Officials also stated that TMA has established new resource management controls. A quarterly workgroup process, for example, refines CHAMPUS and MCS contract requirements and identifies associated DHP-wide adjustments that can be used to formulate future budget estimates. They stated that these procedures represent significant improvements in their ability to precisely project direct care and purchased care requirements. They acknowledged, however, that the next round of MCS contracts will be awarded and administered differently than the first round and that their integrated care system, with its largely nonenrolled beneficiary population, is inherently difficult to budget for. Thus, funding changes during budget execution are nearly inevitable. The movement of DHP funds between subactivities does not require prior congressional notification or approval. While the Congress must be notified in many cases when DOD transfers or reprograms appropriated funds, these reporting rules do not apply to the movement of funds among DHP subactivities. As a result, sizeable funding changes have occurred without specific notification. Refinements to the reporting process would put the Congress in a better position to be aware of funding changes. Under procedures agreed upon between congressional committees and DOD, funds can be obligated for purposes other than originally proposed through transfers and reprogrammings. Reprogramming shifts funds from one program to another within the same budget account, while a transfer shifts funds from one account to another. According to the Congressional Research Service, DOD uses the term “reprogramming” for both kinds of transactions. 
DOD budgetary regulations, reflecting instructions from the appropriations committees, distinguish among three types of reprogramming actions:

1. Actions requiring congressional notification and approval, including (a) all transfers between accounts, (b) any change to a program that is a matter of special interest to the Congress, and (c) increases to congressionally approved procurement quantities;

2. Actions requiring only notification of the Congress, including reprogramming that exceeds certain threshold amounts; and

3. Actions not requiring any congressional notification, including reprogramming below certain threshold amounts and actions that reclassify amounts within an appropriation without changing the purpose for which the funds were appropriated.

For example, DOD is required to notify the Congress if it shifts funds from the DHP O&M to the DHP procurement component. But the notification requirements do not apply when funds move from one DHP subactivity to another (such as from purchased care to direct care) or between DHP program elements (such as from MCS contracts to CHAMPUS, both within the purchased care subactivity) because such movements are within the same budget activity (administration and servicewide activities). Thus, the movements do not represent a change in the purpose for which the funds were appropriated and fit under the third type of reprogramming procedures. To help increase the visibility of DOD funding changes, the reports accompanying recent defense appropriations acts have directed DOD to provide congressional defense committees with quarterly budget execution data on certain other O&M accounts. For example, in fiscal year 1999, DOD is directed to provide data for each budget activity, activity group, and subactivity not later than 45 days past the close of each quarter. 
These reports are to include the budget request and actual obligations and the DOD distribution of unallocated congressional adjustments to the budget request, as well as various details on reprogramming actions. This type of timely information supports congressional oversight of DOD O&M budget execution and shows the extent to which DOD is obligating O&M funds for purposes other than those the Congress had been made aware of.

Under current procedures, DHP obligations are reported at the subactivity and program element levels in the prior-year column when DOD submits its budget request justification material to the Congress. However, such information is not reported in a manner that allows easy comparison with the prior year’s budget allocations, and thus does not facilitate oversight of funding changes that took place during budget execution. Reprogramming notification regulations do not apply when funds shift from one DHP subactivity to another, and congressional committees have not directed DOD to report DHP O&M budget execution data in the same manner as other O&M accounts. The information needed to support congressional notification or quarterly budget execution reports is now readily available because DOD officials have instituted their own internal reviews to better track DHP budget execution. For example, DOD now requires internal quarterly budget execution reports from the services to document the shift of funds between subactivities. Therefore, we discussed with DOD officials potential reporting changes that would facilitate congressional oversight of DHP funding adjustments during budget execution. DOD officials told us that subjecting the lump-sum DHP appropriation to the reprogramming procedures that require prior approval from the Congress would eliminate flexibility, making it very difficult to manage the finances of the integrated MTF and MCS contract health care system. 
However, in our view, subjecting the DHP appropriation to reprogramming procedures for notification, but not prior approval, to the Congress whenever funds above a certain threshold shift from one DHP subactivity to another would not diminish DOD’s flexibility. DOD officials agreed that congressional oversight would be enhanced by quarterly budget execution reports on DHP obligations by subactivity and program element. Depending on where the threshold was set and the extent to which special interest DHP subactivities were designated for reporting, notification could involve fewer reports than a quarterly reporting process for DHP subactivities and program elements. Thus, in our view, notification may well offer a less burdensome means of facilitating congressional oversight of DHP funding changes during budget execution. DOD officials expect future DHP obligations to track more closely with budget requests and allocations, while acknowledging that some movement of funds is inevitable given the lack of a universally enrolled beneficiary population for direct and purchased care. Although DOD is not required to adhere to its own budget requests or reported budget allocations when it obligates funds, in our view, a repeated failure to do so without providing sufficient justification could cause the Congress to question the validity of DHP budget requests. The Congress, however, will not be made aware of improvements or continuing funding adjustments unless DOD begins to either notify or report to congressional committees on how it obligates DHP appropriations. In our view, and DOD agrees, additional information on how obligations differ from budget requests and allocations would improve oversight by the Congress and DOD. Since TMA officials already require quarterly budget execution reports to improve their internal budget oversight and budget decisionmaking, DOD would not be burdened by notifying or reporting similar information to the Congress. 
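The threshold-based notification procedure discussed above can be sketched in a few lines of code. This is purely an illustration: the dollar amounts, the $10 million threshold, and the function name below are all hypothetical, not figures or rules from the report.

```python
# Hypothetical sketch of a threshold-based notification rule for shifts
# between DHP subactivities. All dollar amounts and the threshold are
# invented for illustration; they are not taken from the report.

THRESHOLD = 10_000_000  # notification trigger in dollars (hypothetical)

# Budget allocation vs. actual obligation by subactivity (hypothetical).
subactivities = {
    "direct care":     {"allocated": 3_000_000_000, "obligated": 3_150_000_000},
    "purchased care":  {"allocated": 3_400_000_000, "obligated": 3_240_000_000},
    "base operations": {"allocated": 1_200_000_000, "obligated": 1_205_000_000},
}

def shifts_requiring_notification(data, threshold):
    """Return each subactivity whose obligations departed from its
    allocation by more than the threshold, with the size of the shift
    (positive = funds added, negative = funds deducted)."""
    flagged = {}
    for name, amounts in data.items():
        shift = amounts["obligated"] - amounts["allocated"]
        if abs(shift) > threshold:
            flagged[name] = shift
    return flagged

print(shifts_requiring_notification(subactivities, THRESHOLD))
```

Under these assumed figures, the $150 million addition to direct care and the $160 million deduction from purchased care would trigger notification, while the $5 million movement in base operations would not, mirroring how a threshold keeps the reporting burden lower than blanket quarterly reports.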
Such notification or reporting could provide the Congress with a basis for scrutinizing DHP budget request justifications and determining whether additional program controls—such as a universal requirement that all beneficiaries enroll in direct care or purchased care components—are needed.

The Congress may wish to consider requiring DOD, consistent with current notification standards and procedures, to notify the congressional defense committees of its intent to shift funds among subactivities (such as direct care, purchased care, and base operations). Such notification, while not requiring congressional approval of the funding shift itself, could be initiated whenever the amount of the funding shift exceeded a certain threshold to be determined by the Congress. The notification would specify where funds are being deducted and where they are being added, and the justification for such reallocation. Also, or alternatively, the Congress may wish to consider requiring DOD to provide congressional defense committees with quarterly budget execution data on DHP O&M accounts. These data could be provided in the same manner and under the same time frames as DOD currently provides data for non-DHP O&M accounts.

In its comments on a draft of the report, DOD concurred with the report and its focus of making the DHP funding more visible to the Congress. DOD further agreed that providing additional budget execution data to the Congress, on a regular basis, would be a valuable step toward keeping congressional members informed about the military health care system’s financial status. Finally, DOD agreed to modify its current process for internally reporting DHP obligations to report DHP O&M budget execution data to the Congress in the same manner as the non-DHP O&M accounts. However, DOD did not support requiring it to notify congressional defense committees of its intent to shift funds among DHP subactivities. 
DOD stated that such notification could potentially limit its ability to obligate DHP funds and affect beneficiaries’ timely access to health care. We disagree. As we point out, such notification would not require prior approval of the funding shift itself, but would be initiated whenever the funding shift exceeded a certain amount to be determined by the Congress. These and other details of the notification procedure could be worked out between congressional committees and DOD to further ensure that DOD’s ability to obligate funds for the timely delivery of health care services was not impaired. Further, as the report points out, notification could involve fewer reports than a quarterly reporting process for DHP subactivities. Thus, in our view, notification may well offer a less burdensome means of facilitating congressional oversight of DHP funding changes during budget execution. DOD also suggested several technical changes to the draft, which we have incorporated where appropriate. DOD’s comments are presented in their entirety in appendix II. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to Senator Wayne Allard, Senator Robert C. Byrd, Senator Max Cleland, Senator Daniel K. Inouye, Senator Carl Levin, Senator Ted Stevens, Senator John Warner, Representative Neil Abercrombie, Representative Steve Buyer, Representative John P. Murtha, Representative David Obey, Representative Ike Skelton, Representative Floyd Spence, and Representative C.W. Bill Young in their capacities as chairman or ranking minority member of Senate and House committees and subcommittees. We will also send copies at that time to the Honorable William S. Cohen, Secretary of Defense; the Honorable William J. 
Lynn, III, Under Secretary of Defense (Comptroller); the Honorable Sue Bailey, Assistant Secretary of Defense (Health Affairs); and the Honorable Jacob J. Lew, Director, Office of Management and Budget. Copies will be made available to others upon request. If you or your staff have any questions concerning this report, please contact Stephen P. Backhus, Director, Veterans’ Affairs and Military Health Care Issues, on (202) 512-7101 or Daniel Brier, Assistant Director, on (202) 512-6803. Other contributors to this report include Carolyn Kirby (Evaluator-in-Charge), Jon Chasson, Craig Winslow, and Mary Reich.

Table I.1: Defense Health Program Budget Requests, Budget Allocations, and Actual Obligations, Fiscal Years 1994-96 [detailed dollar figures by subactivity and program element, including medical centers, hospitals, and clinics (CONUS and OCONUS) and the Armed Forces Institute of Pathology, not reproduced here]. The TRICARE Support Office program element incorporated only Office of CHAMPUS costs in these years.

Following congressional approval of funds for Defense Health Program (DHP) operations and maintenance (O&M) expenses enacted through the annual appropriations act, various other actions by DOD or the Congress result in further adjustments. These adjustments can increase or decrease the total obligational authority available to DOD for DHP O&M expenses. Table I.3 details the other adjustments. 
Pursuant to a congressional request, GAO reviewed the apparent discrepancies between the Department of Defense's (DOD) budget allocations and the actual obligations for direct and purchased care, focusing on: (1) the extent to which the Defense Health Program (DHP) obligations have differed from DOD's budget allocations; (2) the reasons for any such differences; and (3) whether congressional oversight of DHP funding changes could be enhanced if DOD provided notification or budget execution data. GAO noted that: (1) between fiscal years 1994 and 1998, Congress appropriated $48.9 billion for DHP operations and maintenance (O&M) expenses; (2) during that period, DHP obligations at the subactivity level, particularly for direct and purchased care, differed in significant ways from DOD's budget allocations; (3) in total, about $4.8 billion was obligated differently--as either increases to or decreases from the budget allocations DOD had developed for the 7 DHP subactivities; (4) these funding changes occurred because of internal DOD policy choices and other major program changes; (5) according to DOD, its strategy was to fully fund purchased care activities within available funding levels; (6) this strategy left less to budget for direct care and other DHP subactivities; (7) TRICARE Management Activity officials also told GAO that because the DHP has both direct and purchased care components, whereby many beneficiaries can access either system to obtain health care, it is difficult to reliably estimate annual demand and costs for each component; (8) between 1994 and 1996, purchased care obligations were $1.9 billion less than allocated because of faulty physician payment rate and actuarial assumptions; (9) between 1994 and 1998, direct patient care obligations amounted to $1 billion more than DOD had allocated--during a period of base closures and military treatment facility downsizing--largely because DOD understated estimated direct care requirements; (10) also, 
between 1996 and 1998, DOD overestimated TRICARE managed care support (MCS) contract costs, believing that contract award prices would be higher and implementation would begin sooner than what occurred; (11) thus, most of the unobligated MCS contract funds were used to defray higher than anticipated Civilian Health and Medical Program of the Uniformed Services obligations; (12) the movement of DHP funds from one subactivity to another does not require prior congressional notification or approval; (13) as a result, these sizable funding changes have generally occurred without congressional awareness; (14) now that the MCS contracts are implemented nationwide, DOD officials expect future DHP obligations to track more closely with budget allocations; and (15) current law and regulations will continue to allow DOD the latitude to move funds between subactivities with little or no congressional oversight.
Historically, Bolivia, Colombia, and Peru have been major drug-producing countries. Together, they account for most of the coca cultivated worldwide and for the opium poppy used to produce most of the heroin seized on the east coast of the United States. Figure 1 shows the areas in Bolivia, Colombia, and Peru where illicit drug crops are grown. The United States has supported counternarcotics efforts in Bolivia and Peru for nearly 30 years. USAID has implemented a series of alternative development projects in the coca-producing regions of these countries, while the U.S. Drug Enforcement Administration and State’s Bureau for International Narcotics and Law Enforcement have supported interdiction and voluntary and forced coca eradication programs. Due at least in part to these efforts, substantial reductions in coca cultivation were achieved in Bolivia and Peru during the mid-to-late 1990s. However, over the same period, coca cultivation in Colombia increased substantially, offsetting much of the decreases in Bolivia and Peru (see table 1). Alternative development progress in Bolivia and Peru has required a lasting host government commitment to a broader set of counternarcotics measures and years of sustained U.S. assistance to support these efforts. More specifically, our analysis of project documentation, site visits, and discussions with U.S. and host government officials and project staff indicated that government control of drug-growing areas and project sites is essential for providing access to the targeted beneficiaries as well as security for project-related trade, commercial activity, and investment. It also enables the monitoring of compliance with voluntary eradication agreements. To promote and sustain coca cultivation reductions, the host government must have a strong commitment to carry out effective interdiction and eradication policies. 
Without interdiction and eradication as disincentives, growers are unlikely to abandon more lucrative and easily cultivated coca crops in favor of less profitable and harder to grow licit crops or to pursue legal employment. Further, alternative development, interdiction, and eradication efforts must be carefully coordinated to achieve mutually reinforcing benefits. Table 2 summarizes several of the key lessons learned in Bolivia and Peru. Descriptions of the programs in Bolivia and Peru and more detailed discussions of the lessons learned from them are in appendixes I and II, respectively. Alternative development progress in Bolivia and Peru has required years of sustained U.S. assistance. The United States has supported alternative development projects in these countries for two decades. Together with current and planned alternative projects in Bolivia and Peru, U.S. contributions to these programs total about $455 million. Other U.S. agencies have supported interdiction and eradication efforts in Bolivia and Peru for an even longer period—nearly three decades. In combination, these programs have helped achieve reductions in the amount of coca grown in these countries. Nonetheless, the host government agencies involved in these efforts continue to depend heavily on U.S. support. For example, according to USAID officials, the United States currently finances and indirectly oversees most of the Bolivian government’s alternative development agencies in the Chapare region because the Bolivian government does not have the resources to do so on its own. Similarly, the United States provides 60 to 70 percent of the total funding for the Peruvian alternative development agency, and USAID officials said that the Peruvian government would not be able to fund the agency’s activities without U.S. support. Table 3 shows past and current U.S. funding for alternative development programs in Bolivia and Peru. 
Alternative development efforts in Colombia are still at an early stage, and USAID will have difficulty spending all of the funds available for these activities. Initial USAID efforts in Colombia began in 2000 by focusing on promoting poppy eradication and strengthening the Colombian government’s alternative development institution (PNDA). USAID’s current program emphasizes alternative development efforts in the coca-growing regions of southern Colombia to complement other U.S.-supported counternarcotics activities there. Alternative development activities in both poppy- and coca-growing areas are just beginning. As of September 30, 2001, USAID had spent only about $5.6 million, or 11 percent, of the $52.5 million currently available. By September 30, 2002, USAID expects its cumulative actual expenditures to reach $31.8 million, or 61 percent, of the total available through fiscal year 2001. As part of its initial effort to support the eradication of 3,000 hectares of poppy by the end of 2002, USAID awarded a $10 million contract to Chemonics International, Inc., in June 2000. Chemonics’ role was to assist PNDA in implementing poppy-related alternative development activities by promoting crop substitution, environmental improvements, and other development efforts in the poppy-growing regions of Cauca, Huila, Tolima, and Narino. After funding for Plan Colombia was approved in July 2000, USAID began planning for alternative development in Colombia’s coca-growing areas. These efforts were intended to complement the eradication and interdiction components of Plan Colombia’s first phase—the “push” into the Putumayo and Caqueta departments of southern Colombia where coca cultivation is most heavily concentrated. To quickly launch these efforts, USAID reallocated $1 million of the $10 million originally intended to support poppy eradication to fund projects aimed at strengthening PNDA’s capacity to expand its activities in coca-growing areas. 
USAID’s projects targeted PNDA’s information technology, financial accountability, telecommunications systems, and public relations capabilities for improvement. In April 2001, using Plan Colombia funds, USAID awarded an $87.5 million, 5-year contract to Chemonics to oversee, administer, and carry out alternative development activities in the coca-growing areas in the Putumayo and Caqueta departments. To date, USAID has programmed $42.5 million of this amount. Though USAID and PNDA will work collaboratively in reviewing and approving alternative development projects, USAID (through Chemonics) will fund some projects and PNDA plans to fund others. In addition, USAID and Chemonics continue to support poppy eradication and institutional strengthening of PNDA. The overall alternative development approach in Colombia entails reaching agreements with communities to voluntarily eradicate illicit crops in exchange for help finding other income-producing opportunities and other assistance. The program is intended to provide incentives for small farmers (with 3 hectares or less of coca) to voluntarily eradicate their coca plants. In negotiating the community pacts, PNDA representatives met with groups of small farmers to obtain their commitment to voluntarily eradicate the illicit crops. After an eradication pact was signed, PNDA planned to provide the farmers with food crop seeds and plants or other immediate assistance. Once this assistance began, farmers were obliged to eradicate their illicit crops within 1 year. According to USAID officials, Colombian government officials recently stated that most of the coca cultivation covered by the pacts already agreed to should be voluntarily eradicated by the end of July 2002. As eradication progresses, farmers are to receive more comprehensive assistance from USAID. 
Initial efforts are focused on municipalities in the Putumayo, where USAID plans to support crop substitution and other income-generating activities by providing agricultural incentives, modern production and processing expertise, and credit and marketing assistance. USAID also plans to support environmental improvements through tree-growing programs in remote indigenous and tropical areas and training in pest control, forest management, and other areas. In addition, USAID plans to improve the social infrastructure in project areas by enhancing access to schools, health services, potable water, sewerage, and electricity. In August 2001, USAID reported that its goal is the voluntary eradication of 11,500 hectares of coca grown on small farms by the end of 2002, with the aim of eliminating a total of 30,000 hectares by 2005. USAID also reported that approximately 33 community eradication pacts had been signed, which covered more than 37,000 hectares of coca in the Putumayo department. In its initial design plan, USAID also noted that sustainability will be measured in terms of permanent eradication of coca and the number of farm families permanently engaged in licit productive activities and not returning to coca cultivation. USAID alternative development project activities have been limited to date, and the pace is not expected to quicken significantly until 2002. As illustrated in table 4, of the $10 million originally programmed to support poppy eradication and institutional strengthening of PNDA, USAID’s actual expenditures were only about $1.3 million, or 13 percent, as of September 30, 2001. Of the $42.5 million programmed from Plan Colombia funding, USAID’s actual expenditures were only about $4.4 million, or 10 percent, as of the same date. Combined, actual expenditures were about $5.6 million, or about 11 percent, of the $52.5 million in total available program funds. USAID officials told us that they expect project activity to accelerate in 2002. 
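The expenditure shares cited above follow from simple arithmetic on the dollar figures in the text. A quick check, rounding to the nearest percent:

```python
# Verify the expenditure percentages cited in the report text.
# Dollar figures (in millions) are taken directly from the report.

poppy_spent, poppy_total = 1.3, 10.0        # poppy eradication/PNDA funds
plan_spent, plan_total = 4.4, 42.5          # Plan Colombia funds programmed
combined_spent, combined_total = 5.6, 52.5  # all available program funds

def pct(spent, total):
    """Expenditure share, rounded to the nearest whole percent."""
    return round(100 * spent / total)

print(pct(poppy_spent, poppy_total))        # 13
print(pct(plan_spent, plan_total))          # 10
print(pct(combined_spent, combined_total))  # 11
```

The same calculation applied to the projected $31.8 million of cumulative expenditures against the $52.5 million available yields 61 percent, matching the figure cited for September 30, 2002.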
They estimate that cumulative actual expenditures in 2002 will total about $31.8 million. While USAID expects increased project activities in 2002, these activities will continue to be limited when viewed in the context of the total funding that is likely to be available for them. The administration had requested an additional $60.5 million for such activities in fiscal year 2002, but the Congress reduced the overall administration request for its Andean Counternarcotics Initiative from $731 million to $625 million, a reduction that will likely result in less funding for alternative development. As noted, USAID officials expect cumulative actual expenditures for alternative development activities in Colombia to total about $31.8 million by September 30, 2002—about 61 percent of the amount appropriated through fiscal year 2001. USAID faces a number of serious challenges in implementing a successful alternative development program in Colombia. USAID planning documents for Colombia acknowledge specific lessons learned in Bolivia and Peru and note that overcoming obstacles in Colombia will require long-term U.S. and Colombian commitments. The experiences in Bolivia and Peru demonstrate the need for host government control and security in project areas; effective interdiction operations; and careful coordination of eradication, interdiction, and alternative development efforts. However, the Colombian government does not control large parts of the coca-growing areas, limiting its ability to carry out sustained interdiction operations, and the Colombian government’s ability to effectively coordinate eradication and alternative development activities remains uncertain. Apart from these challenges, Colombia faces additional obstacles in implementing the alternative development program. 
Colombia has not devised a means to verify or ensure compliance by farmers participating in voluntary eradication programs, PNDA is weak and its funding for alternative development projects is not ensured, and project sites are in remote coca-growing areas where the soil quality and infrastructure are poor. The experiences in Bolivia and Peru indicate that the most critical obstacle Colombia faces is that the government does not control large parts of the Putumayo and Caqueta departments in southern Colombia where much of the coca is grown. This lack of security will seriously hamper PNDA’s ability to develop the region’s infrastructure, establish viable and reliable markets for licit products, and attract the private investment needed for long-term, income-generating development. Without government control of project sites, narcotics traffickers and guerrilla forces will continue to profit from illicit drug operations and impede legal economic activities generated by alternative development programs. USAID officials told us that armed groups have already intimidated some farmers and municipal leaders cooperating with the Colombian government. More recently, in September 2001, four employees of Colombian nongovernmental organizations working with PNDA in the Putumayo were kidnapped. According to USAID officials, two are confirmed murdered and the other two were released. As a result of these incidents, a number of nongovernmental organizations working with PNDA temporarily suspended their activities in the Putumayo in October 2001. While Colombia uses aerial spray operations to carry out an active eradication program, the government’s lack of control over many coca-growing areas limits its ability to carry out sustained ground-based interdiction operations—an essential component of the successful efforts in Bolivia and Peru. 
Colombian military and law enforcement units destroy some cocaine laboratories and seize narcotics and precursor chemicals during individual counternarcotics operations; however, they lack sufficient forces to maintain the permanent presence needed to sustain such operations on a day-to-day basis. Further complicating the problem is that a large land area ceded to one of the guerrilla groups is off limits to U.S. and Colombian agencies, but is reportedly an increasing source of coca and precursor supplies. Throughout these areas, insurgents and paramilitaries operate largely with impunity. The experiences in Bolivia and Peru showed that sustained interdiction operations are necessary to disrupt coca markets and thus produce declines in the prices of coca. Without these declines, alternative development efforts are not as effective. The Colombian government’s ability to effectively coordinate eradication and alternative development activities remains uncertain. Careful coordination of these efforts was critical to their effectiveness in Bolivia and Peru. In December and February 2000, while conducting aerial eradication operations, the Colombian National Police accidentally sprayed approximately 600 to 700 hectares of an area where communities were negotiating pacts for participation in alternative development. Also, PNDA officials told us that eradication authorities had sprayed most of the Bolivar department, even though PNDA had targeted some communities in the department for participation in the alternative development program. This will likely complicate PNDA’s relations with farmers in that region. According to USAID officials, PNDA representatives currently coordinate with the Colombian National Police by indicating on a map or from an airplane the areas in the Putumayo and Caqueta departments that are in the alternative development program and should not be sprayed. 
Among the additional obstacles facing Colombia is the difficulty of verifying compliance with voluntary eradication pacts. The Colombian government has not determined how it will do so, and thus the reliability of the voluntary eradication pacts is uncertain. PNDA officials predict that it will be problematic and expensive to monitor compliance—a task complicated by the Colombian government’s lack of control over project sites. Until a means of verifying compliance is devised, compliance will depend upon peer pressure within a given community to prevent individuals from breaking the community’s eradication agreement with the government. Weak host-country institutions pose an additional problem in Colombia. USAID originally intended to work through the Colombian International Cooperation Agency as a host-country contracting agency for its alternative development projects. However, USAID officials told us they did not have confidence that the Colombian agency could account for the assistance in accordance with USAID requirements. USAID chose instead to contract with Chemonics to manage program resources, including procuring goods and services and awarding and managing grants. Chemonics is working on a day-to-day basis with PNDA—the institution established by the Colombian government in 1995 to deal specifically with alternative development. As noted, USAID was required to focus its initial efforts on strengthening PNDA because the organization is institutionally weak. USAID officials said that PNDA may have difficulty effectively using the additional funding that it is projected to receive for alternative development projects. While PNDA may have trouble absorbing this additional funding, the institution will have difficulty carrying out its responsibilities without it. Yet funding for important components of PNDA’s alternative development plans—from making infrastructure improvements to promoting licit crops and livestock—is not ensured. 
PNDA is supposed to provide immediate, short-term support to farmers cooperating in alternative development programs, bridging the gap between the signing of voluntary eradication agreements and receiving USAID assistance. Colombia developed these plans based on the expectation that it would receive about $300 million from European donors. However, little of that assistance has materialized to date. U.S. embassy officials told us that European donors are reluctant to participate in the program because, based on experiences in Bolivia and Peru, they associate it with the U.S.-supported forced eradication effort in Colombia. The poor quality of the soil and infrastructure and the remoteness of project sites in coca-growing areas are further obstacles. Unlike the poppy-growing areas in northern Colombia—which have richer soils and better developed infrastructure and are closer to markets—much of the coca-growing areas in southern Colombia have soils that are poorly suited for licit crops and a lack of basic infrastructure. According to USAID officials, these problems are more severe in the coca-growing areas of Colombia than they were in counterpart areas of Bolivia and Peru. Even when suitable crops are identified, the distances involved make it difficult to transport produce for further processing or to potential markets. For instance, a palmito (heart of palm) canning plant that the United Nations Drug Control Program built in the Putumayo department in the mid-1990s sat dormant for a number of years because the farmers growing the palm were too far away to transport their produce to the plant before it spoiled. The plant recently opened for test runs after finding farmers closer to the plant to grow the palm. Alternative development requires a long-term commitment and must be implemented with strong host-government support for sustained interdiction and eradication. 
The United States has provided alternative development assistance to Bolivia and Peru for nearly two decades, but little progress was made until the host government gained control of drug-growing areas and project sites, demonstrated a strong commitment to carry out effective interdiction and eradication policies, and carefully coordinated these efforts to achieve mutually reinforcing benefits. While each of these components is important, none is more so than government control of the project areas. Experience in Bolivia and Peru strongly suggests that voluntary coca eradication in Colombia is not likely to achieve hoped-for reductions in coca cultivation until, at a minimum, the Colombian government can provide the security in the coca-growing regions that is essential for carrying out sustained interdiction and eradication operations, providing safe access to alternative development project sites, and attracting the private investment needed for long-term income-generating development. Considering the serious obstacles in Colombia that have impeded meaningful progress, USAID will have difficulty spending additional funds for alternative development over the next few years. Through fiscal year 2001, USAID has spent less than 11 percent of the $52.5 million available for alternative development in Colombia and does not plan to complete expenditure of these funds until at least fiscal year 2003. Nevertheless, USAID’s alternative development program documentation for Colombia still calls for dramatic reductions in coca cultivation in fiscal year 2002 through widespread voluntary eradication of coca crops by farm families who want to take advantage of alternative development assistance. Yet, few projects have been undertaken by USAID in the coca-growing regions.
Because USAID faces serious obstacles to achieving widespread voluntary coca eradication in Colombia, we recommend that the USAID administrator update USAID’s project plans and spending proposals for coca elimination in Colombia to take into account the extreme difficulty in gaining access to the coca-growing regions to ensure that funds are used as effectively as possible. Because of the serious obstacles impeding alternative development in Colombia, the Congress should consider requiring that USAID demonstrate measurable progress in its current efforts to reduce coca cultivation in Colombia before any additional funding is provided for alternative development. USAID and State provided written comments on a draft of this report (see apps. III and IV, respectively). Both generally concurred with the report’s observations and conclusions. USAID noted that the report was thorough and accurate and emphasized that alternative development can only be implemented in coordination with complementary eradication and interdiction programs. USAID also generally concurred with our recommendation to the administrator to update its alternative development plans for Colombia and noted that it has already begun such a review as part of its normal performance management process. State said that the report was thoughtful and thorough and acknowledged the majority of our conclusions regarding the obstacles facing alternative development efforts in Colombia. State agreed with the report’s overall conclusion that careful coordination among alternative development, interdiction, and eradication programs is essential. It also provided further explanation of its aerial eradication program and the difficulties it has encountered in Colombia, including additional information about the accidental spraying of an alternative development project area. 
However, State said that it believes it is appropriate and constructive for the spraying of illicit coca to be conducted before alternative development programs are initiated in an area and suggested that the report implies a recommendation that aerial eradication and alternative development should not be conducted in the same location. We do not agree with State that the report implies such a recommendation. In fact, we cite the need for coordinating alternative development with interdiction and eradication efforts as one of the chief requirements for success. To determine the lessons learned in providing alternative development assistance to Bolivia and Peru, we interviewed cognizant officials and analyzed program documentation. Specifically, in Washington, D.C., we interviewed officials in USAID’s Office of South American Affairs and State’s Bureau for International Narcotics and Law Enforcement. We also met with officials at the two major USAID contractors that provided alternative development services in Bolivia and Peru—Development Alternatives, Inc., and Winrock International, Inc. In addition, we reviewed USAID project design and evaluation documents, contractor performance reports, and program audits. From our analysis, we determined key goals and accomplishments for the alternative development programs in Bolivia and Peru. In Bolivia and Peru, we interviewed USAID mission, U.S. embassy, host-government, and nongovernmental organization officials. We also made site visits to selected project sites and met with project beneficiaries in both countries. From our analysis, we identified critical elements that facilitated or impeded the alternative development efforts in these countries. To determine the current status of USAID’s alternative development efforts in Colombia and the challenges faced there, we interviewed cognizant officials and reviewed program planning and financial documents.
Specifically, in Washington, D.C., we interviewed officials in USAID’s Office of South American Affairs and State’s Bureau for International Narcotics and Law Enforcement and analyzed USAID program plans and expenditure data to determine the progress of USAID’s efforts in Colombia. In Colombia, we interviewed USAID mission, U.S. embassy, United Nations Drug Control Program, and host-government officials, including the senior officers of PNDA—the Colombian alternative development institution. We also met with officials at Chemonics International, Inc.—the major USAID contractor for alternative development services in Colombia. In addition, we analyzed USAID, PNDA, and Chemonics project design documents and status reports. We compared the factors that impeded or facilitated alternative development in Bolivia and Peru with Colombia’s situation to identify the critical challenges faced there. We performed our work from January through December 2001 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees, the secretary of state, and the administrator of USAID. Copies also will be made available to other interested parties upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-4268. An additional GAO contact and staff acknowledgments are listed in appendix V. The United States has provided alternative development assistance to Bolivia for nearly two decades, but little progress was made until the Bolivian government controlled the project areas and demonstrated a strong commitment to coupling alternative development with other counternarcotics measures. The U.S.
Agency for International Development (USAID) has funded four alternative development projects in Bolivia since 1983. The first three projects took place between 1983 and 1998 at a cost of about $117 million. These projects sought to displace the coca-based economy in the Chapare—Bolivia’s primary illicit coca-growing area (see fig. 2). However, a lack of security and eradication in project areas hampered the program’s achievements. The U.S. government also supported an unsuccessful Bolivian government program that paid farmers not to grow coca. USAID officials in Bolivia estimate that the Bolivian government used about $100 million in U.S. economic support funds to pay these compensation costs between 1987 and 1998. In contrast, strong Bolivian government support for eradication since 1997 has resulted in greater success for USAID’s fourth and current alternative development project—the Counternarcotics Consolidation of Alternative Development Efforts Project, currently estimated to cost $112 million. However, Bolivia faces challenges in implementing alternative development because the central government coalition is weakening, and as the national elections scheduled for 2002 approach, its commitment to eradication has become uncertain. USAID officials have identified a number of lessons from the agency’s alternative development program in Bolivia, many of which are relevant to USAID’s alternative development program in Colombia. For example, the success of its efforts in Bolivia depended on government control of the project areas and a secure environment, the political commitment of the Bolivian government to eradicate illicit coca, and coordination between eradication and alternative development efforts. The three USAID projects implemented before 1997 sought to develop the Chapare and displace the coca economy there.
They aimed to promote coca substitution and improve farmer productivity and land use; provide infrastructure, including roads, electricity, and potable water systems; and displace the coca economy by increasing farmer profitability, private investment, market access for project-supported crops, and legal employment opportunities. However, a lack of government control and eradication in the project area limited the projects’ results. Nevertheless, according to USAID and Bolivian government officials, these projects helped lay the groundwork for better results from USAID’s current alternative development project in Bolivia. In addition, in the late 1980s, the Bolivian government used cash provided by the United States to compensate farmers for not growing coca. Most observers consider this program a failure. The Chapare Regional Development Project, implemented between 1983 and 1992 at a cost of $22.5 million, was USAID’s first alternative development effort in the Chapare. Project goals were to stimulate balanced economic growth and improve living standards through public and private sector participation, a diversified economic base, and more equitable income distribution. However, because the Bolivian government lost control of the Chapare to narcotics traffickers in 1983, USAID limited project objectives primarily to coca substitution and redirected project resources to nearby valleys in an effort to stem the tide of immigration to the Chapare. The Electrification for Sustainable Development Project was implemented between 1991 and 1996 at a cost of about $15 million. The project aimed to increase the number of people receiving electricity, expand the use of electricity for rural industry and export-related activities that would provide jobs and alleviate poverty, and improve the operational standards of rural electric distribution. 
The project erected power poles, laid power lines, and built an infrastructure that served 26,700 newly established electrical connections—about 78 percent over target—in the total project area, which extended well beyond the Chapare. While these benefits may have facilitated subsequent development activities, project design documents stated that the project by itself would likely have little impact on shifting labor from coca production to legal activities. Accordingly, USAID project evaluations do not cite any coca reductions resulting from this project. The Cochabamba Regional Development Project was implemented between 1991 and 1997 at a cost of $79.5 million. Whereas the Chapare project focused largely on crop substitution, the Cochabamba project was an “economy substitution” project. The goal was to increase investment, productivity, and employment in legal economic activities to help Bolivia transform its economy into a less coca-dependent one. A project evaluation found that the project improved product quality and handling, provided export incentives, and facilitated market identification and penetration. The project increased the area of legal crops under cultivation, crop yields, and crop exports—for example, banana cultivation increased from about 10,800 hectares in 1993 to more than 14,100 hectares in 1996, and annual banana yields were estimated to have increased several times; achieved Bolivia’s first exports of fresh produce—about 3,000 cartons of bananas were exported to Argentina weekly—and total exports increased a reported 564 percent; and reportedly increased annual family income derived from project-supported crops from $280 in 1993 to $520 in 1996. The project also increased the presence of nongovernmental organizations working in the Chapare, provided training to farmers, further developed the region’s economic and social infrastructure, and encouraged private sector investment. 
Notwithstanding these efforts, the total hectares under coca cultivation in Bolivia decreased by less than 5 percent during the life of this project. A United Nations Drug Control Program official said that such projects did not result in significant net reductions in coca cultivation because the projects were not linked with a requirement to eradicate coca. In addition to USAID’s efforts, between 1987 and 1998, the U.S. embassy’s Narcotics Affairs Section funded a Bolivian program to pay individuals cash for not growing coca. The Bolivian National Directorate for Agricultural Reconversion paid $2,000 per hectare to peasant farmers who voluntarily reduced their coca plantings. The directorate’s operating costs and the compensation paid to farmers came from U.S. cash transfers to Bolivia. U.S. officials in Bolivia estimated that the Bolivian government spent the equivalent of approximately $100 million. U.S. officials told us the program was poorly implemented and failed to produce net coca reductions. USAID officials told us that individuals were paid to not grow coca in particular areas, but they continued to cultivate coca in other areas, thus defeating the purpose of the program. In addition, two U.S. audits by the USAID inspector general found several material weaknesses in the program’s management, including inadequate verification procedures and ineligible beneficiaries. USAID’s current alternative development project in Bolivia focuses on the Bolivian government’s forced eradication policies and has had greater success than its predecessors. However, future government policy is uncertain and could pose a threat to the project’s progress. The Counternarcotics Consolidation of Alternative Development Efforts Project, USAID’s fourth and latest alternative development project in Bolivia, started in 1998 as a 5-year effort, but USAID is planning to extend the project until 2005. Project funding, currently planned at $112 million, is expected to increase. 
USAID designed the project to support the Bolivian government’s goal of forcibly eradicating all illegal coca in Bolivia by the end of 2002. Primarily, the project provides assistance to communities that have signed and abided by voluntary eradication agreements with the Bolivian government. In 1998 and 1999, the Bolivian government undertook an aggressive coca eradication campaign in the Chapare, which facilitated progress in alternative development. The Bolivian government’s eradication program reduced coca cultivation by 33 percent nationwide and reduced coca cultivation in the Chapare by more than 90 percent by the end of 2000. The Bolivian government’s forced eradication campaign has encouraged many former coca growers to seek alternative economic opportunities through the current USAID project. According to State reports, as of September 30, 2000, the volume of licit alternative development production leaving the Chapare totaled $67.3 million, a 15 percent increase above calendar year 1999’s total of $58.2 million. The number of domestic agribusinesses purchasing Chapare crops or supplying agricultural inputs increased from 46 to 67. During 2000, the project gave 6,500 families technical and marketing assistance, up from 2,554 families in 1997. According to the March 2001 quarterly report on project performance, the aggregate market value of coca leaf production in the Chapare was approximately $20 million, compared to the value of alternative crops, which contributed approximately $85 million to the Bolivian economy. While achievements in the Chapare under USAID’s current project have been considerable, U.S. and Bolivian officials have expressed concern that progress in alternative development may be threatened if the Bolivian government does not support continued eradication of illicit coca. According to State officials, the Bolivian government’s governing coalition is now politically weak, and the future of the government’s eradication policy is uncertain.
Bolivia’s vice president told us it would be politically impossible for the administration to repeat the 1998–1999 forced eradication campaign and that the government could be that aggressive only at the beginning of its administration. Bolivia’s vice minister for alternative development told us that the weakening of the governing coalition and the upcoming national elections have politicized eradication. Individual members have moved to negotiate the government’s eradication policy with coca producers, which he said has caused serious damage to ongoing eradication efforts. According to the vice minister, the alternative development program will suffer if coca eradication is seen as negotiable and avoidable. USAID officials have identified lessons from the agency’s alternative development program in Bolivia, many of which are relevant to USAID’s alternative development program in Colombia. For example, USAID learned that program success depended on government control of, and security in, the project area; commitment of the Bolivian government to eradicate illicit coca; and coordination with interdiction and eradication efforts. In addition, it learned that other factors, including a market-oriented strategy, beneficiary attitudes, coordinated public relations campaigns, and U.S. support for Bolivian government agencies, have contributed to program progress in Bolivia. USAID’s alternative development projects in Bolivia were limited by the lack of government control of the project site and insecurity from continued social strife. Early project documents describe the Chapare as a high-risk atmosphere, noting that during much of the first project (the Chapare Regional Development Program, 1983–1992), substantial areas of the Chapare were not accessible to the Bolivian government or project personnel for security reasons.
The Bolivian government lost control of the project area between 1983 and 1986, and as a result, USAID redirected its alternative development efforts away from the Chapare for several years. A 1990 project evaluation reported that cocaine traffickers effectively ran the Chapare as a free-trade, free-fire zone for several years. After the Bolivian government regained control over the Chapare, USAID resumed activities there. However, in recent years, coca union members and other groups protesting government policies have blockaded roads through the project site, preventing alternative crops from reaching markets and jeopardizing much-needed private sector investment. According to USAID, these disturbances in the project area continue to adversely affect its alternative development strategy. The coca producers’ political party is well organized, and coca union members have threatened alternative development project staff and participants. During blockades of the main Chapare access road in 2000, coca union members threatened to burn the alternative crops of association members. USAID’s current Counternarcotics Consolidation of Alternative Development Efforts Project remains vulnerable to such social strife. According to an October 2000 cable from the USAID mission in Bolivia, social strife resulted in losses to Chapare producers of about $3 million. Technical assistance was suspended and licit crop planting was delayed. Market linkages with Argentina and Chile, which had been difficult to establish, were damaged by Chapare producers’ inability to comply with delivery contracts. Financial institutions considering investing or providing services in the Chapare backed out. The mission reported that the conditions required to achieve alternative development interim objectives had been seriously affected by the road blockages and civil unrest. 
USAID officials told us that without security and stability, it is unlikely that the Chapare can achieve any degree of self-sustainability. Little progress was made in Bolivia until the host government undertook the aggressive eradication that has facilitated USAID’s current alternative development project. According to State, by the end of 2000, only 600 hectares of land under illicit coca cultivation remained in the Chapare, rendering the area a commercially insignificant source of illicit coca. As a result of the 1998–1999 eradication campaigns, more former coca growers are turning to alternative development activities to earn their living, and the value of licit crops under cultivation has increased significantly. The number of currently anticipated beneficiaries, about 7,300, is more than double original estimates. According to State, USAID, and other U.S. government officials in Bolivia, alternative development, narcotics and precursor chemicals interdiction, and illicit crop eradication are interdependent and mutually reinforcing components of a successful counternarcotics strategy. Interdiction and eradication are shorter term in nature; alternative development efforts take longer to implement and show results, but they are important to sustaining gains made by interdiction and eradication. USAID’s alternative development projects in Bolivia have been hindered by poor coordination between the U.S.-supported eradication effort and USAID’s alternative development efforts. For example, a former USAID official told us that State and Drug Enforcement Administration officials often did not share information about the U.S.-supported counternarcotics operations with USAID. A 1991 USAID study on coca production in the Chapare concluded that there was too little coordination at both the policy and operations level among agencies charged with interdiction and development. 
The director of Bolivia’s Alternative Development Regional Program, the USAID counterpart for alternative development, told us that until the end of 1996, USAID and Bolivian alternative development agencies worked almost entirely apart from U.S. and Bolivian counternarcotics enforcement agencies. More recently, the rapid pace of the Bolivian government’s eradication campaign has created gaps between eradication and alternative development assistance that can leave peasant farmers without livelihoods. Bolivia’s plan has been to remove the country from the coca-cocaine business by 2002. According to a U.S. embassy official in Bolivia, the schedule for the eradication process was compressed because the current government wanted to complete the effort before the 2002 presidential election. As a result, coordination between eradication and alternative development became very difficult. According to an embassy official, the Bolivian government was eradicating 1,000 hectares a month at the peak of the 1998–1999 operation and there was no way alternative development could quickly replace the eradicated coca with crops. Accelerated eradication stressed the current project by dramatically multiplying the anticipated beneficiaries—from 3,500 to 7,300 people. State reported that the aggressive eradication program outpaced the alternative development program by a wide margin and that the Bolivian government would accelerate the alternative development project in the Chapare in an effort to close the gap. A combination of other factors has contributed to the recent successes in Bolivia. These factors are less universal than those cited above, but nevertheless USAID officials in Bolivia noted them as lessons learned for future alternative development projects. According to USAID officials in Bolivia, community self-policing, or “peer pressure,” has not been a reliable mechanism for enforcing voluntary eradication agreements.
After the Bolivian government’s cash compensation program was phased out, the government began providing infrastructure and other in-kind compensation to communities that abandoned coca. However, USAID officials question assumptions that legal producer associations can prevent all of their members from producing coca. USAID officials disagree with the basic peer-pressure premise of the program because there are no long-standing, close-knit communities in the Chapare but rather loosely associated settlements. In the summer of 1999, according to USAID officials, 65 percent of the communities participating in the government’s voluntary eradication program were found to have some members who violated their coca eradication agreements, thus disqualifying the community from receiving government assistance. Suspension of assistance to an entire community or farmer group because a few members broke their compensation agreements has been counterproductive, according to USAID, because it hinders implementation of legal crop production and marketing activities and weakens alternative producer associations. USAID officials in Bolivia found that alternative development projects needed to incorporate market-oriented strategies and overcome numerous business-related challenges to provide economic benefits for participants. For example, although the success of the Chapare Regional Development Project—USAID’s first alternative development project in Bolivia—depended on the economic viability of alternative crops adopted by farmers, a 1990 evaluation found that no studies of the markets for the proposed crop substitutes had been conducted. The evaluation also found that the Bolivian private sector was weak and cautious and did not fulfill the role envisioned by USAID in the project design. Furthermore, constraints on credit access were severe.
The evaluation found that although coca production in the Chapare produced a huge inflow of cash, small farmers did not translate coca income into savings or productive assets. The subsequent alternative development project in Cochabamba was more market-focused, but it also faced serious business challenges. For example, the most serious challenge in establishing export markets for project-supported crops was the inadequate volume and quality of the produce and difficulty shipping it quickly and delivering it in good condition on a consistent basis. The Consolidation of Alternative Development Efforts Project, which started in 1998 and is still under way, is almost entirely focused on leveraging the market-oriented activities of predecessor projects and improving farmer productivity, stimulating private sector investment, and facilitating market access. It also faces numerous business challenges, such as extremely poor road connections to domestic markets and markets in neighboring countries, a lack of refrigerated cargo trucks, and poor access to credit for Chapare farmers and entrepreneurs. U.S. officials emphasized the importance of an effective public relations campaign for counternarcotics programs and alternative development in particular. In Bolivia, the U.S. embassy helped build a public consensus that production of coca and cocaine was a matter of Bolivian national interest, that the cocaine consumed by Bolivians came from the Chapare, and that narcotics trafficking was retarding the country’s economic development. U.S. officials told us that public support was a necessary precondition for the Bolivian government’s campaign of accelerated, forced eradication. U.S. embassy officials recently concluded, however, that U.S. support of the Bolivian government’s public relations effort has overemphasized the cities and neglected the actual project area. To counter pro-coca and anti-alternative development propaganda in the Chapare, the U.S.
embassy public affairs section has begun an outreach effort to radio stations there. The United States has provided alternative development assistance to Peru for nearly two decades, but little progress was made until the Peruvian government controlled the project areas and demonstrated a strong commitment to a broader set of counternarcotics measures. The U.S. Agency for International Development (USAID) has funded two alternative development projects in Peru since 1981. The first alternative development project—the Upper Huallaga Area Development (UHAD) project—took place between 1981 and 1994 at a cost of about $31 million. This project was designed to increase and diversify agricultural production in the coca-growing Upper Huallaga River Valley through agricultural assistance for alternative legal crops and improvements in roads and health and community services (see fig. 3). However, severe security constraints and a lack of marketing assistance limited its successes, and coca cultivation increased during the project’s lifetime. In contrast, several factors—particularly a strong Peruvian government commitment to counternarcotics and improvements in security and civil governance—have contributed to better results for the Alternative Development Program (ADP), USAID’s second and current alternative development project in Peru. This project, which began in 1995 and is currently estimated to cost about $195 million, has contributed to a 70 percent decline in hectares under coca cultivation. However, political uncertainty in Peru, as well as other issues, may affect future program accomplishments and sustainability. USAID officials have identified various lessons learned from the two alternative development projects in Peru, many of which are relevant to USAID’s alternative development program in Colombia.
For example, the success of alternative development in Peru depended on security in program areas, the political commitment of the Peruvian government, and coordination with eradication and interdiction efforts. USAID implemented the UHAD project between August 1981 and June 1994 at a cost of approximately $31.2 million. The U.S. and Peruvian governments developed the UHAD project as part of a joint counternarcotics strategy that called for coordination among interdiction, eradication, and alternative development efforts. The UHAD project was intended to support the government of Peru’s alternative development objectives in the Huallaga Valley by strengthening local government and community participation in the alternative development process, improving the physical and social infrastructure, and promoting agricultural activities that would replace illicit crops. USAID originally limited project operations to the Upper Huallaga area, a high-jungle valley along the Huallaga River in the north-central part of Peru, but later expanded the program to the Central Huallaga Valley as well. The Special Project for the Alta (or Upper) Huallaga (PEAH), an entity of the Peruvian government, implemented the UHAD project with USAID support. During much of the UHAD project, armed subversive organizations—the Shining Path and the Tupac Amaru Revolutionary Movement—terrorized Peruvians and attacked national and local government and civilian and military targets, particularly in rural areas. Narcotics traffickers also contributed to the violence. By 1986, PEAH had become the sole Peruvian government entity remaining in the Upper Huallaga Valley because of the deteriorating security situation. As a result, USAID severely reduced planned activities in the project’s agricultural production component, including agricultural research, training, credit extension, and land titling. 
The project’s focus on crop substitution and a lack of technical and marketing assistance for the alternative crops further limited the success of its agricultural component. USAID made some progress in the UHAD project’s infrastructure component, with PEAH upgrading 765 kilometers of highways and 582 kilometers of access roads that helped reduce travel and transportation costs and connected farmers to buyers of the area’s agricultural products. However, terrorist activities prevented USAID from completing a major highway that was intended to connect project sites with Lima area markets as well as other infrastructure projects. Furthermore, the terrorists generally controlled the farmers’ access roads. The community development component was, at times, the only functioning element of the UHAD project. Activities fostering local participation in the design and execution of small social infrastructure projects proved successful by exposing communities to democratic principles and requiring them to contribute financially to projects from which they benefited. However, this component required the tacit approval of the terrorists. PEAH was unable to develop good working relationships with the public agencies, local governments, and community-based organizations involved, and security problems resulted in the abrogation of many agreements between PEAH and these entities. In the end, evaluations of the UHAD project cited the lack of coordination between the Peruvian government’s alternative development and eradication activities, as well as limited markets for the alternative crops that the project promoted, as factors that limited the project’s success. During the early years of the project (1981 to 1986), hectares under coca cultivation in the Upper Huallaga Valley increased fivefold from 12,000 hectares to 60,000 hectares. By 1990, these areas increased further to an estimated 70,000 to 90,000 hectares. 
Using lessons learned from the UHAD project, USAID initiated the current ADP in 1995. ADP is more comprehensive than the UHAD project in terms of geographic coverage and program components. ADP seeks to improve employment and income opportunities from legal economic activities, access to basic social services, public participation in decision making, and public awareness of the problems from drug use and production in five coca-growing river valleys in Peru. The return of government control, security, and civil governance in program areas, as well as the Peruvian government’s strong commitment to interdiction and eradication, have proved crucial in creating an environment conducive to alternative development. As of January 2001, USAID had spent $84.5 million for ADP; USAID estimates that ADP funding through 2003 will reach $194.5 million. ADP has made considerable progress in meeting its objectives and has contributed to significant drops in coca cultivation in Peru. The project’s strategy is based on the hypothesis that the majority of residents in coca cultivation zones will voluntarily abandon coca if they are offered alternative licit sources of income, along with improved living conditions for their communities, and if narcotics trafficking is disrupted and laws are enforced. ADP emphasizes licit economic activities, local government strengthening, and economic and social infrastructure. The component involving licit economic activities offers assistance in the production, processing, and marketing of alternative licit crops; credit programs; and land titling programs. USAID has focused these activities on the rehabilitation of coffee and cacao cultivation because of their established markets and farmer familiarity with these crops. In its local government component, the project promotes efforts to strengthen local governments, increase public participation in decision making, raise social awareness of drug production and use, and develop communities. 
Finally, ADP includes activities to improve the economic infrastructure—for example, the rehabilitation and maintenance of roads and bridges and the provision of social services in program areas. Improved roads and bridges are intended to create a viable transportation network for licit economic activities, while social infrastructure components involve local communities in the selection, design, financing, construction, and maintenance of small infrastructure projects such as schools, potable water systems, health posts, and minihydroelectric systems. With the return of government control, security, and civil governance in program areas in the early 1990s, as well as the Peruvian government’s strong commitment to interdiction and eradication, ADP has been able to accomplish considerably more of its objectives than the earlier UHAD project. In conjunction with eradication and interdiction efforts, ADP contributed to a 70 percent net decrease in hectares under coca cultivation in Peru from 1995 (115,300 hectares) through 2000 (34,200 hectares). According to USAID, those areas receiving greater project investment witnessed greater voluntary abandonment of coca cultivation, as well as fewer plantings of new coca crops. During 1995-2000, ADP provided production and marketing support to more than 15,000 farmers growing nearly 32,000 hectares of licit crops, particularly coffee and cacao, according to USAID officials. During that period, more than 236 metric tons of licit crops, with a gross value exceeding $46 million, were produced in program areas. The project established a $10 million rural credit system, provided training in governance skills, and strengthened two municipal associations. Through its economic and social infrastructure component, ADP has rehabilitated 1,000 kilometers of roads and 46 bridges, stone-paved 21 kilometers of roads, supported 136 engineering studies, piloted 1 regional maintenance program, and provided 3 pools of heavy equipment. 
In addition, the project has supported about 1,000 small social infrastructure projects involving schools, potable water systems, health posts, minihydroelectric systems, and other community improvements. As a result, the percentage of households with access to basic services in program areas increased from 16 percent to 51 percent. Finally, according to USAID, the percentage of the population recognizing drug production and consumption as damaging to society reached 94 percent. USAID officials identified various lessons learned from the UHAD project and ADP, many of which may apply to USAID’s alternative development program in Colombia. For example, the success of its alternative development program in Peru depended on government control over and security on the project sites, the political commitment of the Peruvian government, and coordination with interdiction and eradication efforts. Other factors that affected alternative development in Peru included a system for verifying compliance with eradication agreements, a market-oriented program design, national consensus on the harm caused by drug production and consumption, and a viable road network. Lack of government control and security severely limited program implementation and accomplishments in the UHAD project by causing program implementers—agricultural advisers, researchers, and financial institutions—to withdraw and residents to flee from project areas. Terrorists murdered several land surveyors, mayors, and residents, thereby halting many of the project’s activities. At one point, PEAH was the only Peruvian government institution in the Upper Huallaga Valley, after other government and private sector entities left due to the deteriorating security situation. In designing ADP, USAID officials acknowledged that ensuring security by reducing the presence of subversive and narcotics trafficker elements was a critical precondition for alternative development in program areas. 
Insecure areas were excluded from the program. The Peruvian government’s success in combating terrorist groups and narcotics traffickers in the mid-1990s created a more secure and amenable environment for alternative development. The return of civil governance in program areas allowed USAID-supported activities to resume. As the UHAD project was ending in the early 1990s, prospects for the success of alternative development in Peru were considered bleak, despite years of U.S. assistance. Coca cultivation had increased significantly during the 1980s. However, this changed when the Peruvian government committed to a strong counternarcotics agenda. In particular, the Peruvian Air Force conducted an aggressive interdiction campaign in which it shot down airplanes presumed to be involved in narcotics trafficking. This campaign disrupted the coca market, thereby encouraging coca growers to turn to alternative development programs. By targeting narcotics traffickers, rather than coca growers, the Peruvian government also limited resentment from farmers over the counternarcotics campaign, according to USAID and Peruvian officials. Recent political turmoil has created uncertainty about the future direction of the Peruvian government’s counternarcotics policies and may affect future program accomplishments and sustainability. Peru’s transitional government (November 2000 to July 2001) invited leaders of coca-growing syndicates to participate in formal roundtable policy discussions, raising concern among USAID officials and some ADP implementers that this would impart legitimacy to the syndicates and raise their political profile. In addition, Peru’s current administration, which came to office on July 28, 2001, is still developing its national counternarcotics policy. State and USAID officials in Peru emphasized that an effective counternarcotics strategy requires sustained interdiction, eradication, and alternative development. 
Interdiction and eradication disrupt the coca market, thereby creating market uncertainty and lowering prices for coca while encouraging coca farmers to consider alternative development programs. The three efforts are also complementary, but alternative development programs require longer timetables to achieve results than interdiction or eradication efforts. The cultivation and commercialization of alternative crops, development of community organizations, and improvement of social and economic infrastructures can take years to accomplish, but they have longer-lasting impacts on reducing coca cultivation. Department of State and USAID officials in Peru emphasized that coordination between eradication and alternative development is particularly important to ensure that eradication efforts do not interfere with alternative development activities and that families dependent on coca for their livelihood receive short-term emergency assistance after an eradication campaign. According to USAID officials, the Peruvian government has conducted some coca eradication campaigns in the past without coordinating these actions with USAID, thereby jeopardizing ADP activities. Such forced eradication campaigns can cause problems for ADP by creating resentment among community residents. In the earlier UHAD project, resentment against eradication efforts worsened security concerns by alienating farmers, which encouraged them to seek “protection” from terrorist groups. As in Bolivia, a combination of other factors has affected progress in Peru. The factors cited by USAID officials in Peru are similar, but not identical, to those cited by officials in Bolivia and are useful as lessons learned for future alternative development projects. 
According to USAID and embassy officials in Peru, although the United States has monitored overall trends in coca reduction in Peru, there is currently no way to verify whether specific Peruvian communities participating in voluntary eradication agreements are actually complying with the agreements. In September 2001, State’s inspector general found that monitoring efforts were not specific enough to establish an adequate link between investments in alternative development and coca reductions. Embassy officials, with input from USAID, are developing a monitoring system that addresses this concern. One component of the system the embassy is considering would involve a requirement for the Peruvian government to provide proof of compliance with eradication agreements before it could draw future alternative development funds. The system would likely employ the Peruvian Interior Ministry in plotting the relevant areas of farmland and monitoring the corresponding eradication efforts there. The United States would then verify the Peruvian government’s monitoring efforts from the air. Under the UHAD project, USAID emphasized agricultural production of certain crops. However, USAID did not conduct analyses or develop program strategies that fully considered the marketability of these particular crops. Without markets for the alternative crops they grew under the UHAD project, farmers derived little economic benefit from their efforts and investments. Based on this experience, USAID included a stronger market focus in the follow-on project. ADP originally focused on promoting the rehabilitation of key crops—coffee and cacao—that had proven markets and that farmers traditionally cultivated, but then abandoned, in program areas. However, historically low market prices for these commodities have limited the economic benefits to farm families. 
ADP is now promoting economic diversification—the cultivation of multiple crops and raising of small farm animals—to stabilize the financial income and nutritional needs of farm families, while still promoting the cultivation of traditional crops (for example, coffee and cacao) whose prices are subject to market fluctuations. USAID also is emphasizing the need to develop niche markets for alternative development products and to involve the private sector under ADP. For example, USAID has successfully marketed coffee and cacao grown under ADP to Seattle’s Best Coffee and M&M Mars Company. U.S. and Peruvian officials acknowledged that, in the past, Peruvians considered coca cultivation, drug production, and narcotics trafficking to be U.S. rather than Peruvian problems. Consequently, the Peruvian public demonstrated relatively limited support for U.S.-supported counternarcotics efforts, including alternative development. However, the Peruvian public attitude toward drug production and trafficking changed as a result of the terrorism, violence, and social disruption caused by subversive groups—who were supported by narcotics traffickers—during the 1980s and early 1990s. With public support, the Peruvian government mounted aggressive counternarcotics and counterterrorist campaigns, while minimizing public opposition and resentment against these efforts by targeting narcotics traffickers rather than the coca farmers. Public support at a community level has also helped. According to USAID officials, the involvement of beneficiaries, local community groups, and municipalities in its alternative development programs was necessary to promote sustainability. Communities have a greater incentive to embrace and sustain alternative development activities if they are involved in the design, implementation, and funding of projects that raise the quality of life in their communities. 
Both the UHAD project and ADP included social infrastructure activities in which communities benefited from and contributed to alternative development-supported schools, water systems, and health posts. ADP, in particular, has promoted the development and strengthening of regional and local community groups such as municipal associations, producer associations, and credit groups to encourage local communities to take ownership of their projects and expose them to the democratic process. According to USAID, strengthening local organizations is particularly important in Peru because of the national government’s highly centralized decision making and resource allocation processes. Under ADP, USAID requires local communities to prioritize their social service needs and contribute both financial and labor resources to the projects they choose. USAID also helped coffee and cacao farmers develop producer associations to assist them in marketing their crops. U.S. and Peruvian officials acknowledged that a viable rural road network is a precondition that encourages farmers to consider alternative economic activity and reduce their illicit crops voluntarily. Good roads allow farmers to obtain higher prices for their alternative crops by linking them to higher-paying nonlocal markets and by reducing transportation costs. In contrast, farmers can market coca leaves without roads by carrying coca leaves or coca paste out of their valleys or by having narcotics traffickers pick up the products from farms by airplane. Under the UHAD project, USAID had supported the completion of roads that would have linked Upper Huallaga Valley farmers to lucrative markets in Lima. However, a lack of security prevented their completion. Under ADP, USAID is supporting the rehabilitation and upgrading of important secondary rural roads and bridges in program areas. In some cases, USAID is supporting cobblestone paving of dirt roads, which also generates local employment in program areas. 
USAID is also supporting the formation of community-based road maintenance microenterprises. In addition to the individual named above, Dave Artadi, Mike Courts, Christian Hougen, Jason Venner, and Janey Cohen made key contributions to this report.
Since the early 1970s, the U.S. Agency for International Development (USAID) has helped Bolivian and Peruvian growers of illicit crops find legal ways to earn a living. The experiences in Bolivia and Peru indicate that effective alternative development demands a strong host government commitment to a comprehensive array of counternarcotics measures and years of sustained U.S. assistance. Chief among the specific lessons for Colombia are that progress requires host government control of drug-growing areas and a political will to interdict drug trafficking and forcibly eradicate illicit crops, as well as a carefully coordinated approach to these efforts. USAID began targeting Colombia's poppy-growing areas in 2000 and expanded its program to include coca-growing areas in 2001, but most activities will not begin in earnest until 2002. The experiences in Bolivia and Peru suggest that alternative development in Colombia will not be successful unless the Colombian government controls coca-growing areas, has the capacity to carry out sustained interdiction operations, and can effectively coordinate eradication and alternative development activities.
The Social Security program is the foundation of the nation’s retirement income system. Since 1940, Social Security has been providing benefits to the nation’s eligible retired workers and their dependents. In addition to retired worker benefits, Social Security also provides protection for covered workers with severe disabilities and their dependents. Also, spouses and children of deceased workers may receive Social Security survivor benefits. The program is financed largely on a pay-as-you-go basis, with payroll taxes from today’s workers paying the benefits of today’s beneficiaries. Demographic trends indicate that the Social Security program will begin to experience a long-term financing problem after about 2013, when benefit payments will start exceeding cash revenues. The aging baby boom generation will be followed by a relatively smaller work force that will have to support a relatively larger group of retirees. This trend, combined with the increasing longevity of the elderly, will significantly drive up the costs of maintaining the program. Without action to raise program revenues or cut program spending, the Social Security Trust Funds will be exhausted by 2032. Proposals being considered for resolving the future solvency problem range from making adjustments to the tax and benefit structure of the current program to introducing features such as individual accounts that could substantially alter the existing program structure. Despite these differences, policymakers and Social Security experts agree that taking action soon is desirable to alleviate impacts on workers and beneficiaries. About 44 million people receive Social Security benefits today, and about 147 million covered workers pay Social Security payroll taxes. More than 40 percent of the cash income of those aged 65 and older comes from Social Security benefits, and over 60 percent of this population receives at least half their income from Social Security benefits. 
For 15 percent of this population, Social Security benefits are the only source of cash income. The Social Security program is one reason that poverty rates among the nation’s elderly have fallen dramatically—an estimated 39 percentage points since 1935. Social Security revenues come from three main sources: (1) payroll taxes of 12.4 percent on covered earnings (up to $68,400 in 1998) split equally between employees and their employers and paid in full by the self-employed, (2) income taxes on up to one-half an individual’s or couple’s Social Security benefits when total income exceeds certain thresholds, and (3) interest earnings on U.S. Treasury securities held by the Trust Funds. Program revenues in 1997 totaled $457.7 billion, of which almost 90 percent came from payroll taxes, about 1.7 percent from the income taxation of Social Security benefits, and 10 percent from interest on the Trust Funds’ assets. The share coming from the income taxation of benefits is expected to grow because the income thresholds at which benefits become taxable are not indexed. The portion coming from interest on the Trust Funds will increase until about 2020 and then fall dramatically as the Trust Funds redeem securities to help pay benefits. Social Security’s benefit structure has evolved and expanded considerably over time. Under the original 1935 Social Security Act, only retired workers meeting specified conditions were eligible for monthly benefits. Benefits under the original act had a strong “individual equity” component—that is, individual benefits were positively related to lifetime earnings. Benefits also contained a “social adequacy” component—that is, they were proportionately larger, but absolutely smaller, for those with relatively low lifetime earnings. Currently, benefits are calculated using the 35 years of highest earnings, not total lifetime earnings, and benefits are provided to workers’ spouses, children, and survivors, who may not have worked for pay. 
These changes improved the social adequacy component of the benefit structure. The appropriate balance between individual equity and social adequacy is a fundamental issue surrounding Social Security’s benefit structure and reflects the extent to which the program redistributes income among workers and beneficiaries. Social Security was originally designed to provide benefits only to retired workers. Major expansions were made to the program in 1939, when the Congress provided “auxiliary” benefits for workers’ eligible wives, children, and survivors. In 1956, it provided benefits for disabled workers and their eligible dependents. Other amendments to the act have extended benefits to husbands, widowers, divorced spouses, and mothers and fathers (spouses under age 65 with benefit-eligible children in their care). Some beneficiaries are eligible to receive retired worker benefits on the basis of their own work record and are also eligible to receive a higher benefit on the basis of their current or former spouse’s work record. Essentially, these beneficiaries, who are called “dually entitled,” receive their own retired worker benefit and the difference between that and the higher auxiliary benefit. Table 1.1 shows the current benefit categories and the number of beneficiaries in each category. Calculating Social Security benefits is a three-step process. First, a worker’s covered earnings over his or her 35 years of highest earnings are identified. Social Security uses average indexed monthly earnings (AIME) as its measure of these “lifetime” covered earnings. Second, a progressive benefit formula is applied to these lifetime covered earnings to determine the benefit that will be payable to the worker at the normal retirement age (NRA), currently age 65. This NRA benefit, or primary insurance amount (PIA), is the basic amount used to determine the actual benefit for those receiving benefits on the basis of a worker’s earnings record. 
Finally, the benefit is adjusted for the age at which the beneficiary first receives the benefit. Auxiliary benefits are based on the worker’s PIA. The benefits for dually entitled people are based on their own PIAs. If the spouse or widow(er)’s benefit is higher, the dually entitled person’s benefit is supplemented to raise it to the amount of the spouse or widow(er)’s benefit. Currently, automatic benefit indexing provisions generally increase the worker’s PIA by an annual COLA. The COLA is equal to the rise in the consumer price index over a 1-year period established by the Congress. Indexing allows Social Security benefits to maintain the same purchasing power over the beneficiary’s retirement. Retirement income from most other sources is not fully indexed and thus tends to decline in real terms over time. Social Security is financed largely on a pay-as-you-go basis. Under this type of financing structure, the payroll tax revenues collected from today’s workers are used to pay the benefits of today’s beneficiaries. Under a strict pay-as-you-go financing system, any excess of revenues over expenditures is credited to the program’s trust funds, which function as a contingency reserve. Social Security’s Trust Funds reserve allows the government to manage the inevitable differences over time between revenues and expenditures. One reason the pay-as-you-go approach was initially used is that it required relatively small contributions at a time when the program was young and benefit payments were small. However, this structure required increasing contribution levels as the program matured and more beneficiaries with higher average benefits were added to the beneficiary rolls. In addition, the pay-as-you-go structure leaves the program and the federal government susceptible to financing problems when costs increase more than expected or revenues fail to meet expected levels, such as might occur with changing short-term economic conditions. 
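The benefit calculation described above—a progressive formula applied to lifetime covered earnings (AIME) to produce the PIA, plus the top-up rule for dually entitled beneficiaries—can be sketched as follows. The 90/32/15 percent marginal replacement rates reflect the progressive structure of the formula; the dollar bend points shown are illustrative 1998-era values assumed for this sketch, not figures drawn from this report.

```python
def pia_from_aime(aime, bend1=477.0, bend2=2875.0):
    """Apply the progressive benefit formula to average indexed monthly
    earnings (AIME) to get the primary insurance amount (PIA):
    90 percent of AIME up to the first bend point, 32 percent between
    the bend points, and 15 percent above the second. The bend-point
    dollar amounts are illustrative assumptions, not from this report."""
    pia = 0.90 * min(aime, bend1)
    pia += 0.32 * max(0.0, min(aime, bend2) - bend1)
    pia += 0.15 * max(0.0, aime - bend2)
    return round(pia, 2)

def dually_entitled_benefit(own_benefit, auxiliary_benefit):
    """A dually entitled beneficiary receives his or her own retired-worker
    benefit plus the excess, if any, of the higher auxiliary benefit."""
    return own_benefit + max(0.0, auxiliary_benefit - own_benefit)
```

Because the marginal replacement rate falls from 90 percent to 15 percent as AIME rises, a lower-earning worker’s PIA replaces a larger share of lifetime earnings than a higher earner’s, which is the social adequacy tilt discussed above; the dual-entitlement rule simply raises the worker’s own benefit to the level of the higher auxiliary benefit.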
Every year, Social Security’s Board of Trustees estimates the financial status of the program for the next 75 years using three sets of economic and demographic assumptions about the future. According to the intermediate set of these assumptions, the nation’s Social Security program will face solvency problems in the years ahead unless corrective actions are taken. The Social Security program is not in long-term actuarial balance. That is, Social Security revenues are not expected to be sufficient to pay all benefit obligations from 1998 to 2072. Without changing the current program, excess cash revenues from payroll and income taxes are expected to begin to decline substantially around 2008. By 2013—15 years from now—these cash revenues will be insufficient to pay all program costs. After 2013, Social Security will have to start redeeming some of its assets to obtain the cash needed to pay benefits. The Trust Funds are expected to be exhausted in 2032. The anticipated revenue shortfall over the next 75 years is estimated at $3 trillion, or an average annual shortfall of $40 billion (in 1997 dollars). This $3 trillion shortfall is based on the assumption that the Social Security program will continue under its current structure. That is, new workers will enter the system, pay payroll taxes (which will be matched by their employers), accrue benefit credits while working, and receive benefits when they retire. Even if revenue or expenditure adjustments necessary to reach 75-year balance were achieved, the financing problem still might not be permanently resolved. For the foreseeable future, each new 75-year projection period will have a higher long-term financing shortfall than the last. For example, suppose the payroll tax was raised sufficiently to reach balance, and the current actuarial assumptions were realized for the period 1998 through 2072. Under this scenario, the Trust Funds would have only about 1 year’s worth of benefits remaining in 2072. 
If the same actuarial assumptions continued to be used in each of the years between 1998 and 2072, the Trust Funds would still be expected to be exhausted shortly after 2072, but, beginning in 1999, the 75-year projections would show a long-term revenue shortfall for the program that would grow over time. The program has another, higher revenue shortfall estimate—about $9 trillion, as of October 1, 1997. This is the amount of the program’s unfunded benefit obligations—the accrued future benefit obligations that cannot be paid with assets currently in hand. A large unfunded liability in a government program financed primarily on a pay-as-you-go basis is generally not considered a problem because of the government’s authority to tax current workers to pay current benefits. Thus, current unfunded liabilities are passed on to future generations. However, if the current Social Security program were ended or changed to an advance funded system, all $9 trillion of accrued benefit obligations would have to be paid if the government honored these obligations in full. Social Security does not face an immediate financing crisis because its cash revenues are expected to exceed its expenditures until 2013. However, the substantial size of the anticipated 75-year shortfall ($3 trillion if the program remains a pay-as-you-go system and $9 trillion if it is terminated or becomes a system that is funded in advance) suggests the need for reform action in the near future. Social Security is currently building up some Trust Funds reserves, which can help offset some of the revenue shortfall after 2013. Interest earnings on and redemption of these reserves, along with payroll and income tax revenues, are expected to provide sufficient resources, under the Trustees’ 1998 intermediate assumptions, to pay program obligations until about 2032. 
Without action to improve the system’s financial outlook, the program is expected to have revenues sufficient to cover only about 75 percent of anticipated benefit obligations in 2032, and this will decline to about 68 percent by 2072. An important factor affecting Social Security’s pending financing problem is the rapidly approaching retirement of the baby boom generation. The oldest of this generation will reach early retirement age (62) in 2008, and the youngest will reach it in 2026. This large number of retirees would substantially increase program costs and strain the ability of the program to pay benefits even if it were the only factor affecting future costs. (See fig. 1.1.) Exacerbating the problem of the retirement of the baby boom generation is the relatively smaller generation that follows it. The post-baby-boom generation, which resulted from the rapid decline in fertility rates from the mid-1960s to the mid-1980s (see fig. 1.2), will provide relatively fewer workers to support a larger number of retirees. The number of workers whose payroll taxes will support those on Social Security will fall from about 3.4 per beneficiary today to an anticipated 2.0 per beneficiary in 2030. Another factor that will raise program costs is the increase in life expectancies. Life expectancy for 65-year-old men increased from 11.9 years at the program’s inception to 15.3 years in 1995 and for 65-year-old women, from 13.4 years to 19.0 years. Life expectancies are expected to continue to increase to 18.7 years for men and 22.0 years for women in 2070. This increase will further strain the program’s financing, requiring revenue increases or benefit cuts to keep the program solvent. 
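The worker-to-beneficiary figures above translate directly into per-worker cost under pay-as-you-go financing, since each beneficiary's benefit must be divided among the workers supporting the program. A sketch (the average benefit amount is hypothetical):

```python
# Illustrative only: under pay-as-you-go financing, the per-worker cost of one
# average benefit scales inversely with the worker-to-beneficiary ratio.
def cost_per_worker(avg_benefit, workers_per_beneficiary):
    """Per-worker cost of funding one average benefit, pay-as-you-go."""
    return avg_benefit / workers_per_beneficiary

benefit = 10_000  # hypothetical average annual benefit, for illustration
today = cost_per_worker(benefit, 3.4)    # ratio cited for today
in_2030 = cost_per_worker(benefit, 2.0)  # anticipated ratio in 2030
print(f"Per-worker cost rises by {in_2030 / today - 1:.0%}")  # 70%
```

A 3.4-to-2.0 decline in the ratio thus raises the burden on each worker by 70 percent even with no change in benefit levels.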
Other factors, primarily economic and behavioral aspects of Social Security’s actuarial assumptions, can also affect its costs and revenues. Factors that increase program costs include the following: automatic COLAs, which maintain the real purchasing power of benefits but increase both nominal benefit levels and program costs geometrically over time; the relaxed earnings test, which allows benefit-eligible workers to receive Social Security benefits even though they have considerable earnings; and rising real wages, which increase real benefits over time. Factors that constrain program revenues include the following: an earlier average retirement age, which reduces the period during which workers pay payroll taxes; lower than expected rates of real economic growth, such as occur with recessions, which constrain the growth of covered wages and make paying taxes to support Social Security beneficiaries more onerous than if the economy had grown at a faster rate; and the growing share of total employee compensation that is not subject to payroll taxes. The crucial role Social Security plays in providing income support to the nation’s elderly and disabled populations makes the program an ongoing policy focus of the Congress and numerous nonfederal groups and organizations. In the past when financing problems have been encountered, the Congress has acted to alter the revenue and benefit provisions of the program to maintain its solvency. While the program has been modified on an ongoing basis, major legislative reforms, such as those enacted in 1977 and 1983, have been made less frequently. Because the Advisory Council could not reach consensus on how to fully restore solvency over the 75-year period, it brought forward three packages of proposals, including two that combine elements of individual accounts with other program changes, such as adjustments to the benefit formula. 
One of the three Advisory Council proposals—the “maintain benefits” (MB) proposal—involves mainly traditional reforms that would operate within the existing structure of the program. A second proposal would significantly change the system by creating individual retirement accounts—“personal security accounts” (PSA)—that would be privately managed and invested as directed by the individual worker. The third proposal—“individual accounts” (IA)—is essentially a hybrid of the other two proposals and includes both the creation of private individual accounts administered by the federal government and traditional-style reforms to the current program. The MB proposal would maintain most of the existing benefit structure of the program. However, since it used only traditional reforms, this proposal did not fully close the financing gap and restore actuarial balance. To achieve long-term actuarial balance, the MB group considered an option that involved investing about 40 percent of the Trust Funds’ assets in private securities, such as through stock and bond mutual funds. This approach, in essence, would have expanded the extent to which the program was advance funded. In the end, the MB group simply recommended that this Trust Funds investment option be studied further. The other two Advisory Council proposals include systems of individual accounts. The PSA proposal would divert a portion of the existing payroll tax into accounts that would be managed privately, while the remainder of the payroll tax would go to finance a public benefit that would be smaller than current benefits are for most beneficiaries. Under this plan, 5 percentage points of the employee’s share of the current OASDI tax rate would be diverted to an individual account. The accounts would be individually owned and privately managed, and individuals would choose from a variety of investments in private financial instruments. 
The accounts would be tax-deferred, and individuals could begin drawing from them at age 62. Any funds remaining in an account upon the death of the owner would become a part of the estate. The individual accounts would represent a second tier of benefits, with a modified version of the existing Social Security program benefits maintained as the smaller first tier. The IA option would essentially maintain the structure of the existing system, with adjustment, as a large first tier and add an individual account component as a supplemental second tier. Under this proposal, workers would be required to contribute an additional 1.6 percentage points of taxable payroll to fund the individual accounts. The accounts would be invested in private securities, and workers could choose among such investments as stock and bond mutual funds and government securities. However, these accounts would be administered largely through the existing Social Security program. The account accumulation would also be required to be annuitized through Social Security, a feature not included under the PSA plan. While these three Advisory Council options tend to dominate the current debate, numerous other proposals and options have also been advanced by various organizations, academics, and members of the Congress. For example, in the 104th Congress, proposals were advanced by Senators Kerrey and Simpson (S. 824, S. 825, and S. 2176) and Representative Nick Smith (H.R. 3758) and, in the 105th Congress, by Senator Judd Gregg (S. 321), Senator Daniel Patrick Moynihan (S. 1792), Representative Mark Sanford (H.R. 2768 and H.R. 2782), Representative John Porter (H.R. 2929), and Representative Nick Smith (H.R. 3082). Numerous other proposals have been offered recently by organizations such as the National Taxpayers’ Union Foundation and the Committee on Economic Development, as well as by various economists and analysts (see bibliography). 
Although the Board of Trustees has indicated the program is expected to have sufficient assets and revenues (including interest on the Trust Funds) to pay all benefit obligations for the next 3 decades with no changes to the program, most analysts believe early action to reduce the actuarial imbalance is important for a number of reasons. First, the longer action to address the program’s financing problem is delayed, the larger the per-year cost of the solution because the shortfall in revenues will still have to be addressed, but over a shorter period of time. Second, some of the possible solutions to the solvency problem—such as raising the program’s NRA, reducing benefits for future beneficiaries, or increasing the program’s advance funding—will take time to implement or phase in, once enacted. Third, if certain changes, especially those that reduce benefits, are made, workers will need time to adjust their saving and retirement goals to help mitigate the personal impacts of these changes. Thus, the sooner the changes are made, the less disruptive they are likely to be. The Chairman and Ranking Minority Member of the Senate Finance Committee asked us to discuss (1) the various perspectives that underlie the current solvency debate, (2) the reform options within the current structure, and (3) the issues that might arise if Social Security were restructured to include individual accounts. We also discuss the likely effects on national saving of reform proposals that call for more advance funding of Social Security benefits. Because of the wide-ranging nature of the numerous proposals being advanced, our report focuses on the common, or generic, elements that underlie various proposals to reform Social Security financing rather than a complete evaluation of specific proposals. In conducting this study, we reviewed literature on Social Security’s long-term financing problem and related issues as well as a number of proposals that would address this problem. 
We held discussions with SSA officials and with other subject matter experts from government, the policy community, and academia about these issues. We also drew on our own previous work. We obtained comments on a draft of this report from SSA and subject matter experts and made revisions as appropriate. We conducted our work between October 1996 and February 1998, in accordance with generally accepted government auditing standards. The need to ensure Social Security’s long-term solvency has sparked a debate that has roots in the program’s creation. Both at the program’s inception and today, the discussion has centered around different frameworks for providing social insurance. The many and varied proposals for addressing Social Security’s future solvency problem—including those put forth by the 1994-96 Advisory Council—reflect these fundamentally different perspectives on the appropriate structure for Social Security. As a result, these proposals range from traditional reforms of the current program to significant restructuring. Increased advance funding forms a core element of many solvency proposals. The Social Security program emerged in the 1930s as the nation sought to address hardships created by difficult economic conditions. Some historians of Social Security point out that prior to the Great Depression there was considerable resistance to involving the federal government in providing economic security and creating a federal social insurance program. Despite this view, there was also a developing realization that individual and voluntary actions were not adequate to address poverty among the elderly, and a number of state programs to assist the elderly were instituted. With the coming of the Great Depression and as various social movements gained attention, President Franklin D. Roosevelt appointed the Committee on Economic Security to devise what came to be the Social Security program. 
Throughout the legislative deliberations leading to passage of the Social Security Act in 1935, the theme of attaining a consensus on the balance between government and individual responsibility was prevalent. Over the years, the debate about the role of government has largely centered around three models: the social insurance model, the tax-transfer model, and the annuity-welfare model. Given the structure of the program as it emerged in the 1930s, the social insurance model (and, to a lesser degree, the tax-transfer model) has provided the most frequently used framework for analyzing the program. Some analysts, however, view the annuity-welfare model as a more appropriate approach for reform. Workers face a variety of risks arising from the loss of earnings that can result from retirement, disability, or death. Consistent with the social insurance model, Social Security represents a way for workers to pool these risks; it offers a package of benefits that can be obtained for a given price in the form of taxes. Since the risk primarily involves the loss of earnings, the taxes to finance such a program are earnings-related, as are the benefits received. In general, because such a package of benefits may not easily be obtained in private markets, the government is involved in providing the benefits. In addition to this market failure rationale for involving the government in administering such a pooling of risks, related rationales include reducing uncertainty about individuals’ future retirement income; alleviating insurance market failures, such as adverse selection; addressing social concerns about income redistribution; reducing the social burden imposed by nonsavers and the short-sighted; and institutionalizing the compact between generations (filialism). In constructing the program along the lines of the social insurance model, two important—and apparently conflicting—objectives were addressed: individual equity and social adequacy. 
Linking benefits directly to the tax price paid, or to contributions, invokes the standard of a market return, or an “actuarially fair return,” and demonstrates the individual equity principle. But pooling risks against earnings loss also involves the concept of need or a desired minimum level of benefits. Thus, the program is designed to also embody the principle of social adequacy, which involves redistribution among participants within the program. Balancing these seemingly conflicting objectives through the political process has resulted in the design of the current Social Security program. Some analysts advocate an alternative approach for restructuring Social Security: the annuity-welfare framework. The emergence of this model is linked with the debate that took place in the 1930s and with various economic critiques that have emerged since the 1960s. The fundamental basis of this model is the view that the different components of Social Security—individual equity and social adequacy—should be addressed separately; that is, the part of Social Security that pays benefits related to contributions by workers should be separated from the part of Social Security that relates to adequacy, or maintaining a minimum level of income to alleviate poverty. This view generally leads to a rather different approach to providing retirement income. Several key points about the annuity-welfare model and its relation to Social Security are worthy of note. First, while the individual is required to participate in Social Security, the annuity-welfare model emphasizes maximizing voluntary arrangements whenever possible. Nevertheless, the annuity-welfare model generally recognizes that because some individuals may choose to “free ride” on society by not saving adequately, and others may experience conditions during their lifetime that leave them without adequate resources, a role for government involvement may be justified. 
Second, Social Security is not advance funded in the manner of private pensions and does not grant contractual rights to individuals as does, for example, a pension trust arrangement. Rather, the pay-as-you-go financing structure means that current workers pay for the benefits of current retirees and that benefits are promised largely on the basis of the ability of the government to pay them in the future. Third, the connection under Social Security between benefits and contributions is loose, mainly because of the redistributive nature of the system. As a result, some individuals will receive less than a market return for their contributions, which has raised concerns among proponents of the annuity-welfare model about the value provided by the program. There is an important fourth issue. Some see the existing Social Security structure as leading to further difficulties because decisions about the program and its impact on individuals are made through the political process. This is known as “political risk.” According to this view, the design of Social Security creates the potential for program expansion because there will always be political incentives to promise higher benefits, which will be paid for disproportionately by certain groups, such as high earners or future generations. In addition, higher benefits that may need to be paid for by future workers can be promised in the near term, even though the ability of the government to raise funds in the future to make good on these promises may be dependent on the political situation at the time. Proponents of the annuity-welfare model view obtaining adequate retirement income as a matter of individual responsibility and believe that this private decision should be separate from the social decision about providing an adequate or minimum level of retirement income for those who otherwise would fall into poverty in old age. 
Thus, under this model, the individual may have greater control, through the political process, of the level of minimum or basic income to be provided by society because he or she is not required to participate in a larger program of social insurance that is subject to legislative and political actions. The emergence of privatization and individual account plans as an element of the current Social Security financing debate can in large part be tied to the annuity-welfare model. Two key features of this framework are its emphasis on advance funding and on a more direct linkage between the contributions made to the system and the benefits received from it. While proponents of both the social insurance and annuity-welfare approaches agree that those who contribute more to the system should receive more from it, the existence of income redistribution in the current Social Security program weakens this linkage. Individual account proposals could strengthen the program’s equity goal by establishing a system in which the returns on investments would accrue to individuals themselves. In general, the frameworks discussed here reflect differences in philosophies about the appropriate balance between individual and government responsibility. While both frameworks include a role for government in providing retirement income and some mandatory contribution toward it, the degree of support to be provided through government is a major source of contention. Concerning the issue of linking benefits to contributions, supporters of the current Social Security program structure argue that redistribution is a desirable goal and a major reason for a social insurance program. They object to the separation of the individual equity and social adequacy elements, as this holds the potential, in their view, for undermining the consensus for redistribution and support of the less fortunate elderly. 
Further, they assert, the commitment of government under a social insurance system precludes the need for contractual arrangements and, because risks are borne collectively, reduces many of the risks that would otherwise be faced individually. Supporters of the current system also argue that a primarily pay-as-you-go system is an appropriate way to finance transfers intergenerationally. Thus, these advocates propose solutions to the financing problem that essentially maintain this structure and preserve government’s primary role. Others offer proposals that would fundamentally restructure the Social Security program to reduce the role of government and increase individuals’ returns. They particularly focus on increasing individual choice and responsibility and emphasize private market returns on contributions, such as could occur with individual account proposals. Consistent with this focus, they emphasize that it is important that government address the unfunded liabilities of Social Security, and they recommend moving toward a greater reliance on advance funding and away from the primarily pay-as-you-go approach now in use. Advance funding involves saving real assets to finance benefits promised today but paid in the future. Applying such a financing approach to the Social Security system, which is currently financed primarily on a pay-as-you-go basis, would require a period during which contributors paid twice—once for current beneficiaries and again to “advance fund” some part of their own retirement benefits. Despite this potential drawback, most proposals to reform Social Security’s financing build in some degree of advance funding, arguing that the long-term economic benefits could offset short-term costs. The ability to finance future benefit promises, regardless of the financing method chosen, depends fundamentally on the capacity to generate a given amount of resources that will be sufficient to meet future obligations. 
This can be done through a social insurance program wherein the government makes a political commitment—which may or may not include issuing debt—or, alternatively, through advance funding. As an element of most private pension plans, advance funding involves a contractual obligation under which real assets sufficient to meet the future payments are placed in a legal trust arrangement. In contrast, pay-as-you-go requires a political commitment to levy taxes in the future. Proposals for advance funding Social Security usually involve investing some portion of current Social Security contributions in private sector securities (stocks and corporate bonds) owned by the individual contributors. It would also be possible for the government to hold government securities or private securities, and this approach has been proposed as well. In both approaches, increasing Social Security’s advance funding has the potential to capture returns from investment of assets; these returns could help mitigate the benefit reductions or tax increases that would otherwise be necessary to restore solvency to the system. Supporters of advance funding point out that it offers a way to increase national saving, investment, and economic growth. They also assert that increased economic growth could raise both wages and the national standard of living, which would reduce the burden of setting aside a given level of income for retirement. Thus, they advocate reducing current consumption in order to increase future consumption. Others suggest that the claims of those favoring advance funding may not be realized. The linkage between national saving and economic growth is not certain. Because future market returns, inflation, and life expectancies are uncertain, there is no guarantee that a given level of contributions paid into an advance funded plan would necessarily be sufficient to provide an expected, or even an adequate, benefit that would last throughout an individual’s retirement. 
Also, an increase in personal or government saving from advance funding Social Security would not necessarily translate into an increase in national saving—for example, if the government used some current Social Security revenue to fund additional personal saving and then borrowed to continue paying current benefits. To achieve full advance funding, a transition period might have to occur during which workers would have to fund both their own future Social Security benefits and the benefits for those who had already earned unfunded credits under the current program. Funding the program’s currently unfunded promises through taxation could place a large burden on the first group of workers who financed their own benefits. Debt financing could reduce the burden on this group and place some of the burden on later generations that paid off the debt. The transition costs could be substantially reduced if some of the unfunded future benefit obligations were eliminated by reducing the benefits of current and future beneficiaries. Once the transition period had passed and advance funding was fully implemented, future workers would no longer need to finance the Social Security benefits of those who were currently working. Theoretically, enough money would have been set aside by workers and employers (in either individual accounts or a collective account) to secure the benefits of each worker throughout retirement—and, depending on the proposal’s design, perhaps those of his or her dependents and survivors as well. In addition, as the burden of supporting older generations decreased and investment returns funded an increasing portion of the growth in individual accounts, reducing individual account contribution rates to a level below today’s OASDI payroll tax rate would be possible. The Advisory Council has proposed three packages of options. These packages capture most of the essential features that are found in other reform proposals. 
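The advance-funding mechanics described above reduce to simple compound-interest arithmetic: contributions set aside each year earn investment returns, so earnings finance a growing share of the eventual benefit. All inputs below (contribution level, real return, horizon) are hypothetical:

```python
# A minimal sketch of advance funding: annual contributions compound at an
# assumed real return, so investment earnings cover a growing share of the
# final accumulation. All figures are hypothetical, for illustration only.
def accumulate(annual_contribution, real_return, years):
    """Balance after contributing at year-end for the given number of years."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + real_return) + annual_contribution
    return balance

contributions_paid = 2_000 * 40            # 40 years of $2,000 per year
balance = accumulate(2_000, 0.04, 40)      # assumed 4 percent real return
earnings_share = 1 - contributions_paid / balance
print(f"Balance: ${balance:,.0f}; {earnings_share:.0%} from investment earnings")
```

Under these assumptions, investment earnings end up providing more than half of the accumulation, which is why advance-funding proposals argue that returns can offset some of the tax increases or benefit cuts otherwise needed.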
While the packages include adjustments of the current structure (traditional reforms), such as increasing the retirement age, changing the benefit formula, and lowering the postretirement COLA, each also contains nontraditional reforms involving increased advance funding. Although all three of the Advisory Council proposals would increase the system’s advance funding, only the PSA and the IA options call for individual account plans. The MB proposal instead would increase the system’s advance funding within the current structure, and the government would invest at least some portion of the additional assets in the stock market. Thus, the Advisory Council has indicated that, to restore solvency, the element of advance funding in private investment markets should be increased, whether the Social Security program is strengthened within its current structure or fundamentally altered. The Advisory Council’s two individual account proposals represent what is generally referred to as the “privatization element” in the current debate. Precisely defining privatization in relation to the Social Security debate is difficult, but privatization is usually associated with two key elements: advance funding of retirement income through investment in private financial assets and greater individual control of decisions about investing those assets. The PSA and IA proposals would change the current benefit structure of Social Security. Individuals would receive part of their future benefit from a modified Social Security program and part from the accumulations from the individual account. These individual accounts would be, essentially, advance funded retirement income arrangements, as are private pensions, and would be similar to defined contribution pension plans, or 401(k) plans. 
These accounts would earn a return that depended solely on the investment performance of the assets held, and historical data suggest that the gross returns to these funded arrangements could be higher than the amounts beneficiaries could expect to receive under the current system. The opportunity for higher returns, however, would come with increased investment risk that would be borne by the individual owning the account. Resolving Social Security’s long-term financing problem within the program’s current structure would require increasing the program’s revenues, decreasing its expenditures, or both. By combining various options, it would be possible to restore Social Security’s actuarial balance for the next 75 years without changing the program’s benefit or financing structure. A summary table on the estimated effects of various options appears as appendix I. The options for increasing revenues include expanding coverage to additional workers, raising the payroll tax rate, expanding taxable payroll through increasing the maximum taxable earnings level or including nonwage compensation as covered earnings, increasing the income taxation of Social Security benefits, using general revenues, and changing investment policy to earn a higher rate of return on the Trust Funds’ assets. The options for reducing expenditures include eliminating or reducing some existing benefits; reducing initial benefits through changing the current benefit formula or increasing the NRA, the early retirement age (ERA), or both; and controlling the growth of benefits after entitlement through improving COLA calculations, limiting COLA increases, limiting the recomputation of benefits, restrengthening the earnings test, disallowing most “new dependent” benefits, or reducing benefits because of other income. A number of these options have been used in the past to ensure the solvency of Social Security. 
Increasing the element of advance funding within the current program structure is also a means of addressing the solvency problem. Increasing the Social Security Trust Funds’ assets would require determining how the government might best reserve those funds for future benefits. Revenues can be increased by expanding coverage, raising additional revenues through the existing payroll tax structure, and raising revenue from other sources. One way to increase revenues is to expand the number of jobs covered by Social Security. This option was first used in 1950. The original Social Security Act covered about 60 percent of the U.S. workforce. Today, about 96 percent of the workforce is covered. This option increases revenues relatively quickly and improves solvency for some time, since most of the benefits for the newly covered workers are future obligations. Most beneficiaries have received more in lifetime benefits than they have paid in payroll taxes. This would suggest that increasing coverage would have a long-term negative impact on the program’s solvency. However, the Advisory Council estimated that covering most of the remaining noncovered jobs would actually have a positive effect on program solvency because many of the newly covered workers would already be eligible for Social Security benefits because of earnings in other covered employment. A majority of the members of the 1994-96 Advisory Council recommended that all newly hired state and local government workers, who would not otherwise be covered by Social Security, be covered. They estimated that this change would represent a net improvement in actuarial balance equivalent to 0.22 percent of taxable payroll over the next 75 years, or about 10 percent of the currently estimated long-term revenue shortfall. 
Revenues could also be raised by increasing the OASDI payroll tax rate paid by workers and their employers (currently 6.2 percent of covered earnings for each) and by the self-employed (currently 12.4 percent). Until 1978, this action was taken quite regularly, usually by announcing scheduled increases some years in advance to give workers and employers time to adjust. The 1977 amendments to the Social Security Act were the last to raise the OASDI rate for workers and employers (to 6.2 percent, effective in 1990). The 1983 amendments raised the payroll tax rate for the self-employed to 12.4 percent, effective in 1990. No future increases are scheduled even though the retirement of the baby boom generation is imminent. Raising the payroll tax rate by about 1.1 percentage points for both employees and employers could eliminate the program’s currently projected long-term revenue shortfall. One advantage raising the payroll tax has over several other revenue-enhancing options, from both programmatic and federal budget perspectives, is that it would not result in higher future benefits because benefits are based on covered earnings, not total contributions. Raising revenues by expanding coverage or expanding the definition of taxable earnings, on the other hand, would result in future benefit increases for the affected workers, thereby reducing the net long-term gains to the program and to the federal budget. Disadvantages of raising the payroll tax include lower disposable income for workers and higher labor costs for employers. Moreover, a higher payroll tax would also lower the value of the program to workers because future benefits for them and their dependents and survivors would not increase. Because employers’ additional costs would be tax-deductible, their business income taxes would fall, but by less than the payroll tax increase. The end result would be that employers’ net incomes would fall somewhat, and federal income tax revenues would decline. 
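The payroll-tax option above is straightforward arithmetic: adding about 1.1 percentage points to each side's current 6.2 percent rate (the worker's earnings figure below is hypothetical):

```python
# Effect of the roughly 1.1-percentage-point increase discussed above on the
# current 6.2 percent employee and employer OASDI rates.
current_each = 6.2  # percent of covered earnings, employee and employer each
increase = 1.1      # percentage points, added to each side

new_each = round(current_each + increase, 1)  # 7.3 percent each side
new_combined = 2 * new_each                   # 14.6 percent, vs. 12.4 today

# Hypothetical worker earning $30,000: added employee-side tax per year.
extra_employee_tax = round(30_000 * increase / 100)  # $330
print(new_each, new_combined, extra_employee_tax)
```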
In addition, the Congress might be reluctant to further increase the payroll tax rate because (1) it and other tax rates are already considered too high by many, (2) many workers already face higher payroll taxes than income taxes, and (3) the payroll tax is regressive. The Advisory Council concluded that there is little political support for bringing the program back into financial balance through payroll tax rate increases alone. However, all three Advisory Council proposals contained payroll tax increases as a part of their recommended solution to the program’s solvency problem. One recommended an immediate and permanent payroll tax increase, one a permanent increase beginning in about 50 years, and one a temporary (70-year) increase. Moreover, the Medicare program faces a more immediate solvency problem than does Social Security, and increasing the payroll tax rate to improve the long-term financial solvency of one program limits the extent to which this option can be used to improve the long-term financial solvency of another. There are two ways to expand the taxable payroll base: raising the maximum level of earnings subject to the payroll tax and including some nonwage compensation in the definition of taxable payroll. Over the years, the maximum taxable earnings level has risen from $3,000, initially, to $68,400 in 1998. In 1995, covered earnings accounted for about 88 percent of all earnings for employees and about 72 percent of reported self-employment net earnings. Overall, about 87 percent of all earnings were covered by Social Security. The maximum taxable level is automatically adjusted to the growth in national wages, and this generally increases program revenues over time. While increasing the taxable earnings level would generate additional program revenues immediately, it would also increase future costs by raising benefits for those high earners who would pay the additional payroll taxes. 
However, because the additional covered earnings generally would increase the benefits of high earners only modestly (recall that the rate of earnings replacement for the highest increments of the AIME is only 15 percent), raising the maximum taxable earnings level could increase revenues in both the short and long run. Social Security actuaries estimated that raising the maximum taxable earnings level in 1997 and later so that 90 percent of all earnings were taxable (a 3-percentage point increase over current levels) would improve the program’s long-range actuarial balance by 0.48 percent of taxable payroll, or the equivalent of about 22 percent of the program’s estimated 75-year financing shortfall. Over the past few decades, the proportion of total compensation paid in the form of wages and salaries has declined, and nonwage compensation (payments for pension contributions and health insurance, for example), which is not subject to the payroll tax, has risen to about one-third of payroll. This increase in the benefits portion of total compensation has reduced the relative amount of total compensation subject to the payroll tax. Social Security revenues could be increased if some or all of these nonwage compensation costs were included in the definition of taxable payroll. Estimates made for the Advisory Council suggest that including employer-provided group health and life insurance or pension and profit-sharing contributions in OASDI taxable earnings would improve the program’s long-term actuarial balance by 0.80 and 0.37 percent of taxable payroll, respectively. Combined, these two options represent about one-half of the anticipated financing shortfall. This option could present some difficulties in implementation, however. Employee benefits generally are greater for highly paid workers whose wage compensation may already exceed the maximum taxable earnings limit. 
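The "percent of shortfall" figures quoted for these options can be cross-checked against one another. A hedged sketch, assuming a long-term shortfall of about 2.2 percent of taxable payroll (inferred from the chapter's own figures):

```python
# Each option's estimated improvement in long-range actuarial balance,
# in percent of taxable payroll, as quoted in the text.
shortfall = 2.2  # assumed 75-year shortfall, percent of taxable payroll

options = {
    "cover newly hired state/local workers": 0.22,  # "about 10 percent"
    "tax 90 percent of all earnings":        0.48,  # "about 22 percent"
    "tax group health/life insurance":       0.80,
    "tax pension/profit-sharing amounts":    0.37,
}

for name, gain in options.items():
    print(f"{name}: {gain / shortfall:.0%} of the shortfall")

# The two nonwage-compensation options combined: "about one-half".
print((0.80 + 0.37) / shortfall)
```

The computed shares (10 percent, 22 percent, and roughly 53 percent for the combined nonwage options) match the fractions stated in the text.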
Thus, subjecting their nonwage compensation to the payroll tax would not raise additional revenues. Also, it could be difficult to separate nonwage benefit costs on an individual basis. If such an individual allocation could be made, the increase in taxable payroll would increase the future benefits of many workers. An alternative would be for only employers to pay the additional tax on nonwage compensation. Subjecting all employer-sponsored private pension and profit-sharing contributions to a 3-percent payroll tax and crediting these contributions as earnings to individual workers would improve OASDI’s long-term actuarial balance by an estimated 0.15 percent of taxable payroll. Up to one-half of Social Security benefits have been subject to individual income taxes since 1984. These revenues are returned to the Social Security Trust Funds. Taxing Social Security benefits can be considered either a form of means testing benefits—because one’s total Social Security benefit is effectively reduced as income rises—or a way to partially fund the program out of general revenues. Increasing revenues by taxing Social Security benefits could be accomplished by several means, including lowering or eliminating the income thresholds at which benefits become taxable, taxing all benefits above the amount of the employee’s contributions, redistributing to Social Security the portion of benefit taxation currently going to Medicare, and treating all Social Security benefits as normal taxable income subject to the current income tax rules. Eliminating the thresholds but otherwise keeping the benefit taxation provisions as they are is estimated to improve the program’s long-term actuarial balance by 0.21 percent of taxable payroll. Lowering or eliminating the thresholds would require increased income tax payments from some lower-income beneficiaries; higher-income beneficiaries would not contribute more unless the proportion of benefits subject to this tax was also increased. 
Taxing all Social Security benefits that exceeded the worker’s own contributions would save another 0.15 percent of taxable payroll. Shifting the HI portion of benefit taxation to OASDI would save 0.36 percent, but at the expense of worsening Medicare’s solvency problem. Finally, making all Social Security benefits subject to the income tax while keeping the current thresholds in place would increase income taxes for both those higher-income beneficiaries currently paying the tax on Social Security benefits and those whose total incomes are close to, but below, the current thresholds. The program’s revenues could also be increased by partially funding the system with money from other government revenue sources. General revenue funding of the program has been used in the past, most notably during the program’s 1982-83 financing crisis. General revenue financing of a portion of Social Security expenses could be accomplished by dedicating a portion of existing general revenues to the Social Security program; creating a new tax, such as a national consumption tax, with proceeds dedicated to Social Security; and reducing expenditures on other federal programs and using the cost savings to help fund the program. Currently, the Trust Funds are invested in Treasury securities that earn a relatively low rate of return. Investing a portion of Social Security Trust Funds in the stock market could increase the return to the fund, albeit with a risk of capital loss. While stocks and other investments do not outperform Treasury securities every year, they have, over the long term, performed much better. Higher investment earnings could extend the life of the Trust Funds without other program changes. As we reported previously, investing the projected Trust Funds’ surpluses, absent other changes to the Social Security program, could extend the life of the Trust Funds by almost 11 years, assuming stock returns remained at the historical average. 
If this were implemented in isolation, the Trust Funds would inevitably have to liquidate the stock portfolio to pay promised benefits and would be vulnerable to losses in the event of a general stock market downturn. While stock investments alone would not completely address the program’s long-term solvency, they could lessen the size of other program changes needed to bring the program to solvency. This option is addressed in greater detail in the advance funding discussion later in this chapter. Until the 1970s, most attempts to address financing problems focused on increasing program revenues. But expenditures can be controlled, or reduced, in numerous ways, including eliminating or reducing some existing benefits, reducing initial benefit levels, and slowing the increase in benefits once they have been initiated. Eliminating benefits has been used only sparingly in the past, most notably in the early 1980s when the following benefits were abolished: the minimum Social Security benefit for those attaining age 62 after 1982, child benefits for students aged 18 to 22, and benefits for (widowed) mothers and fathers whose youngest nondisabled child has attained age 16. Reducing benefits for selected beneficiaries has been used a little more often. In 1967, a limitation of $105 per month was placed on spousal benefits, but this limit was quickly removed in 1969. The process for determining Social Security benefits was modified in 1977 to offset unintended increases in initial benefit levels that resulted from a benefit calculation process first used in 1975. In 1980, the method of computing the applicable family maximum benefits on the basis of the earnings records of those who became disabled after June 1980 was changed in a way that effectively limited the total benefits the spouses and children of disabled workers could receive. 
Social Security benefits were also reduced in 1977 and 1983 for those who had pensions from noncovered government employment at the federal, state, or local level. In addition, the 1983 program amendments reduced benefits by delaying the COLA for 6 months and by raising the NRA for those born in 1938 or later. The spouses, children, and parents of retired and disabled workers, as well as survivor beneficiaries, receive Social Security benefits that are based at least in part on the covered earnings record of retired, disabled, or deceased workers. These benefits were added in 1939 to ensure that a worker's family had adequate benefits once the worker retired; died; or, after 1956, became disabled. These benefits currently account for more than 25 percent of all program expenditures. No absolute measure of need or adequacy has ever been applied to these benefits. For example, eligible spouses receive a benefit based on one-half the worker's PIA regardless of the amount of the worker's benefit. At the end of 1996, 73 percent of the spouses of retired workers had their benefits based on PIAs of $800 or more, while fewer than half of all retired workers had benefits based on PIAs this high; similarly, less than 40 percent of disabled beneficiaries, but more than 50 percent of their spouses, had benefits based on PIAs of $800 or more. The average PIAs on which children's benefits were based also exceeded those of retired or disabled workers. Limiting spousal benefits could be accomplished by, for example, capping them at one-half the average retired worker's PIA, or by phasing them out if the combined benefits of the worker and spouse exceeded a given threshold. The benefits of workers with low lifetime earnings and those of their spouses would continue to be paid as under current law, but the benefits for spouses of workers with higher than average PIAs would be reduced. 
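The cap option just described can be illustrated with a small sketch. The dollar amounts and the average-PIA figure below are hypothetical examples, not program data:

```python
# Hypothetical illustration of capping spousal benefits at one-half the
# average retired-worker PIA.
def spousal_benefit(worker_pia, avg_retired_pia, capped=False):
    """Spouse receives half the worker's PIA; under the cap option,
    no more than half the average retired worker's PIA."""
    benefit = 0.5 * worker_pia
    if capped:
        benefit = min(benefit, 0.5 * avg_retired_pia)
    return benefit

avg_pia = 750.0  # assumed average retired-worker PIA (illustrative only)

# A low-PIA worker's spouse is unaffected by the cap ...
print(spousal_benefit(600, avg_pia), spousal_benefit(600, avg_pia, capped=True))
# ... while a high-PIA worker's spouse sees the benefit reduced.
print(spousal_benefit(1200, avg_pia), spousal_benefit(1200, avg_pia, capped=True))
```

This shows the distributional effect the text describes: benefits tied to below-average PIAs are paid as under current law, while spousal benefits tied to above-average PIAs are scaled back.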
Limiting spousal benefits to one-half the average PIA of retired workers as of December of the prior year is estimated to improve the program’s long-term actuarial balance by 0.21 percent of covered payroll. At the end of 1996, benefits for most types of survivors were also based on average PIAs that were higher than the average PIAs of retired workers, although not as high as PIAs for spouses. The maximum monthly benefit for a worker retiring at age 65 in 1996 was $1,284 in December of that year. More than 1 million beneficiaries receiving only survivor benefits at that time had their benefits based on PIAs of $1,100 or more, and 38 percent of these had benefits in excess of $1,250 that month. At the same time, about 200,000 beneficiaries were entitled to combined retired worker and survivor benefits in excess of $1,200 (averaging about $1,400). Thus, hundreds of thousands of survivor beneficiaries received benefits in excess of what a 65-year-old worker retiring in that year could have received. If it were desirable to do so, this situation might be addressed by, for example, capping survivor benefits at some percentage above the poverty threshold, at the average retired worker benefit level, or at the maximum benefit available to a worker attaining age 65 in the year the survivor became widowed. Costs could also be reduced by modifying children’s benefits. For example, eliminating benefits for nondisabled children of retired workers is estimated to save 0.05 percent of taxable payroll. Also, the level of benefits for children of disabled and deceased workers could be made dependent on the earnings that continue to come into the household from the nondisabled or nondeceased parent and not just on the child’s own earnings. There is already a precedent for this type of reduction, in that the benefits of auxiliary beneficiaries can be reduced not only by their own earnings but also by those of the retired worker. 
This action would save about 0.04 percent of taxable payroll over the 75-year period. Capping or eliminating certain spousal, survivor, and dependent child benefits, or tying them to the amount of household income, could ensure that lower-earning families continue to receive adequate auxiliary benefits while higher-earning families do not receive benefits that are difficult to justify on adequacy grounds. The disability insurance (DI) program has been one of the fastest growing Social Security-administered programs over the past 10 years. Controlling the growth in the DI program would be an important way to control overall program expenditure growth. This could be done by tightening program eligibility requirements; making determinations of eligibility at various review levels more consistent; taking action to encourage DI beneficiaries to return to work; limiting how long DI beneficiaries can be on the rolls; reducing DI benefits by lowering initial levels of all benefits; and limiting the initial disabled worker benefit to the retired-worker benefit available at age 65, using the current law's increasing retirement ages and adjustment factors. This last means of reducing disabled-worker benefits is estimated to improve the program's long-term solvency by 0.40 percent of taxable payroll. Expenditures for retired-worker benefits will increase rapidly once the baby boom generation begins to retire. To help control these anticipated expenditure increases, initial benefits for all beneficiaries could be reduced through (1) changing the current benefit formula and (2) increasing the NRA or the ERA—or both. Reducing the growth in benefits once they are received is also an option. Benefits for those born in 1929 or later are based on the average of a worker's 35 years of highest indexed covered earnings. 
Earnings received before age 60 are wage-indexed to the year the worker turned age 60. Once the average indexed monthly earnings are determined, a formula converts them to the PIA. Benefits equal 90 percent of average earnings up to a threshold ($477 for 1998), plus 32 percent of average earnings above this first threshold until a second ($2,875) is reached, plus 15 percent of any average earnings above this second threshold. The PIA is then adjusted for the age the worker first receives benefits. The benefit is lowered if benefits are first taken before the NRA (currently age 65) and increased if benefits are first received after the month the worker attains the NRA but before age 70. Initial benefits could be reduced by changing the values of components of the benefit formula—for example, increasing the number of years of earnings included in the computation period from 35 to 38, as a majority on the Advisory Council advocated. The indexed earnings of the additional 3 years would, by definition, be no larger than the indexed earnings of the year of lowest earnings included under current rules. This change would result in a decrease in both average indexed earnings and benefit amounts for all new beneficiaries. The reductions from extending the computation period would be larger for those with limited or intermittent attachment to the labor force than for those with continuous attachment, because more years of $0 earnings would be included in the computation formula—for example, women would be more affected than men. According to the Advisory Council's report, increasing the computation period would reduce benefits by 3 percent, on average, and improve the program's long-term actuarial balance by 0.28 percent of taxable earnings. Those with 35 or fewer years of earnings, however, would experience about an 8-percent decrease in AIME, and many beneficiaries with fewer than 36 years of earnings already have relatively low AIMEs. 
This change would reduce the benefits for those with low lifetime covered earnings more than for those with high lifetime covered earnings. A $1 decrease in AIME could reduce the PIA of a low earner by 90 cents, while the PIA of the highest earners would be reduced by only 15 cents. Another way to reduce initial benefits would be to lower either the rates of earnings replacement or the bend points that convert average earnings to benefits. Reducing all replacement rates would reduce benefits for everyone, including those with the lowest AIMEs and benefits. Gradually reducing each of the three replacement rates by 0.5 percent between 2020 and 2029 and maintaining them at the new, lower levels thereafter is estimated to improve the program’s long-term actuarial balance by 0.29 percent of taxable payroll. Reducing the bend points would protect the benefits of those with the lowest benefits but reduce benefits for everyone with average earnings above the new (lower) first bend point. Indexing the bend points in the benefit formula by either the current consumer price index or the annual wage index minus 1 percentage point rather than by the average wage index would be expected to reduce the new benefit rate of growth. Either index adjustment would improve the program’s long-term actuarial balance by 1.54 percent of taxable payroll, about 70 percent of the long-term financial imbalance. Initial benefits could also be reduced by increasing the reduction factor for early retirement and reducing the incremental increase for first receiving benefits after the NRA. In addition, the benefit formula could be reduced by indexing benefits to a younger age than age 60 or by using an index that grows more slowly than national wages. These last changes would reduce Social Security’s measure of lifetime covered earnings which, in turn, would reduce calculated benefits. An increase in the NRA would be tantamount to a graduated benefit reduction for all affected beneficiaries. 
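The bend-point formula and the marginal effects just described can be sketched directly. This is a simplified illustration using the 1998 dollar thresholds quoted in the text; the actual computation involves additional indexing and rounding rules:

```python
# Simplified 1998 PIA formula: 90 percent of AIME up to $477, 32 percent
# from $477 to $2,875, and 15 percent above $2,875 (before any
# adjustment for the age at which benefits are first received).
def pia_1998(aime):
    first, second = 477.0, 2875.0
    pia = 0.90 * min(aime, first)
    if aime > first:
        pia += 0.32 * (min(aime, second) - first)
    if aime > second:
        pia += 0.15 * (aime - second)
    return round(pia, 2)

print(pia_1998(400))    # low earner: benefit is 90 percent of AIME
print(pia_1998(3000))   # high earner: spans all three brackets

# Marginal effect of a $1 drop in AIME, as in the text: about 90 cents
# for a low earner, only about 15 cents for the highest earners.
print(pia_1998(400) - pia_1998(399))
print(pia_1998(3000) - pia_1998(2999))
```

The progressive structure is what makes a uniform cut in AIME (such as a longer computation period) fall proportionally harder on low earners, as the paragraph above notes.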
Some policymakers are concerned that this additional reduction in benefits for those who retire early—especially for those who have health problems and for those who are widows—would reduce the adequacy of their benefits and result in an impoverished retirement. The NRA has already been increased once. The package of program changes used to resolve the program’s 1982-83 financing crisis included a provision to gradually increase the NRA from age 65 to age 67 beginning with those born in 1938 (and attaining age 62 in the year 2000). The NRA increase will be fully phased in for those born in 1960 or later. However, the ERA of 62 was not changed. Increasing the NRA further can be justified because life expectancies at age 65 are longer now than they were in 1940, the year benefits were first paid. The longevity trend is an important reason for the growth in Social Security costs. Increasing the NRA would be one way to control program costs because benefits available at all ages would be lowered, and this could provide an incentive for some workers to delay their initial receipt of retired worker benefits. How much to increase the NRA would depend on the goal of the increase. If the goal was to keep the program solvent, the increase in the NRA could be calculated once the other actions to maintain solvency had been decided on. However, the goal of increasing the NRA could also be either to keep life expectancy at the NRA constant (using life expectancy at age 65 in 1940 or some other year as a base) or to maintain a life expectancy at the NRA that is a constant proportion of one’s life expectancy as an adult (life span after age 20). For example, in 1940 at age 65 the average life expectancy was just under 13 years. To keep the same 13-year life expectancy at the NRA in 1995, the NRA would have had to be age 72. Alternatively, in 1940 the average person aged 65 would have expected to spend about 22 percent of his or her adult life older than the NRA. 
In 1995, spending 22 percent of one’s adult life above the NRA would require an NRA of age 70, using the Social Security Actuary’s projections of life expectancies. Given either of these two goals, the NRA would need to be increased as life expectancies continue to improve. More than 50 percent of newly retired workers elect to receive benefits at age 62. Increasing the ERA would preclude workers from claiming benefits between age 62 and the new ERA and could, therefore, increase the incentive to apply for DI benefits at those ages. Social Security would receive some short-term financial savings because these potential beneficiaries would have to delay the receipt of benefits. However, because benefits are adjusted on an actuarial basis, the initial benefits of affected workers would be larger than if the ERA had remained at age 62, and long-term program savings would be low. Raising the NRA, the ERA, or both could place a large burden on the DI program and result in lower net savings than might be expected. Raising the NRA would increase the reduction factor applicable to those retiring at the ERA, giving them lower benefits than they currently receive. Raising the NRA would not reduce the amount of the DI benefit, however, unless DI benefits were reduced independently. The benefit gap between DI benefits and the new, lower retirement benefits for everyone below the new NRA would rise, providing an incentive for some, who would not otherwise do so, to apply for DI benefits. DI caseloads and costs would grow if the number of applicants increased and, if some of these additional applicants were allowed on the DI rolls, DI benefit costs (and total OASDI costs) also would increase. In addition to reducing the level of initial Social Security benefits, controlling the growth of benefits after initial receipt is another way to reduce program expenditures. Various possible actions are discussed below. 
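The "22 percent of adult life" arithmetic above can be reconstructed in a few lines. Adult life is defined here, as in the text, as the years after age 20; the 1995 life-expectancy figure below is an assumed value for illustration, not an actuarial projection:

```python
# Share of adult life (years after age 20) spent at or above the NRA.
def share_of_adult_life_after_nra(nra, life_expectancy_at_nra):
    adult_years = (nra - 20) + life_expectancy_at_nra
    return life_expectancy_at_nra / adult_years

# 1940: life expectancy at age 65 was just under 13 years, so a
# 65-year-old could expect about 13 of 58 adult years above the NRA.
print(share_of_adult_life_after_nra(65, 13))   # about 0.22

# Keeping that 22-percent share as longevity improves requires a higher
# NRA; e.g., if life expectancy at age 70 were about 14 years in 1995
# (an assumed figure), the share would again be near 22 percent.
print(share_of_adult_life_after_nra(70, 14))
```

This makes explicit why, under the constant-proportion goal, the NRA would have to keep rising as life expectancies continue to improve.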
Since 1975, Social Security benefits have been automatically increased to keep pace with inflation using the consumer price index as the inflation index. This automatic increase allows benefits to maintain their purchasing power over time. However, COLAs are costly. Social Security currently pays about $370 billion a year in benefits. Each 1-percent increase in the COLA costs the program an additional $3.7 billion. Because COLA increases are cumulative, their impact on program expenditures grows rapidly. For example, those who first received benefits in the first half of 1975 currently receive monthly benefits that are 187 percent higher (in nominal terms) than their original monthly benefit; that is, for each $100 received in early 1975, $287 is received in 1998. Recently, a congressional commission reported that the consumer price index overstates the true rate of inflation on average by about 1.1 percentage points yearly, and that this may result in overcompensation of beneficiaries. Many economists agree that the consumer price index probably overstates the rate of inflation but differ on the degree. Even the Bureau of Labor Statistics, which calculates the increase in the index, consistently states that it is not a measure of inflation. Improving the calculation of the COLA, either by making the consumer price index a more accurate measure of inflation (which is technically difficult to do) or by adjusting it after the fact to better measure true changes in inflation, is a desirable option. Given the direction of the current bias in the index, such an adjustment would lower yearly COLAs and result in long-term improvements in the program’s solvency. Reducing COLAs could control the growth in Social Security benefit expenditures. Expenditure savings would be apparent immediately, and savings in 1 year would carry forward in later years in a cumulative manner. 
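Two of the dollar figures above can be verified with back-of-the-envelope arithmetic:

```python
# 1. Each 1-percent COLA applied to about $370 billion in annual
#    benefits costs the program about $3.7 billion.
annual_benefits = 370e9
print(0.01 * annual_benefits)  # 3.7 billion

# 2. A benefit 187 percent higher than in early 1975 means each $100
#    grew to $287 over the 23 years from 1975 to 1998 -- an average
#    compounded COLA of roughly 4.7 percent per year.
growth = 287 / 100
avg_cola = growth ** (1 / 23) - 1
print(f"{avg_cola:.1%}")
```

The compounding in the second calculation is the reason COLA costs grow so rapidly: each year's increase is applied to a base that already includes all previous increases.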
In addition, COLA reductions would affect current as well as future beneficiaries, spreading the burden of the program’s financial reform over a broader population. Not all other actions to resolve the program’s long-term solvency problem would affect current beneficiaries. COLA reductions could be achieved by several means, including lowering the COLA to less than the measured rate of inflation (for example, consumer price index minus 1 percentage point); capping the COLA (increasing benefits by the consumer price index increase or, for example, 2.5 percent, whichever is less); delaying the COLA; eliminating the COLA; changing the index used to measure the COLA; not providing a COLA until cumulative inflation since the previous COLA increase exceeds a specified threshold, such as 5 percent; and allowing a full COLA up to some specified threshold (for example, the average PIA amount) and then reducing or eliminating COLAs for benefits above that threshold. These alternative ways of reducing COLAs would have differing impacts on certain individuals and households. For example, changing the COLA by reducing the consumer price index by 1 percentage point forever would gradually reduce the purchasing power of benefits as beneficiaries age. A reduction in the COLA from, for example, 3.5 percent to 2.5 percent annually would reduce the purchasing power of benefits by about 9 percent after 10 years, 22 percent after 25 years, and 32 percent after 40 years. Alternatively, giving full COLAs for benefits below some threshold (the average PIA amount, for example) and giving reduced or no COLAs for benefits above that threshold would fully protect the purchasing power of benefits for those with low benefit levels while gradually reducing it for those with higher benefit levels. Reducing COLAs would have an important drawback, however. The purchasing power of Social Security benefits would gradually shrink over time. 
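The purchasing-power figures above follow from simple compounding. A sketch using the example rates from the text (a 2.5-percent COLA against 3.5-percent inflation):

```python
# Remaining purchasing power of a benefit whose COLA runs 1 percentage
# point below inflation, compounded over a number of years.
def purchasing_power_loss(cola, inflation, years):
    ratio = ((1 + cola) / (1 + inflation)) ** years
    return 1 - ratio

for years in (10, 25, 40):
    loss = purchasing_power_loss(0.025, 0.035, years)
    print(f"after {years} years: {loss:.0%} loss")
```

The computed losses of about 9 percent after 10 years, 22 percent after 25 years, and 32 percent after 40 years match the figures in the text, and they show why the burden of this option falls most heavily on the longest-lived beneficiaries.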
As they age, some beneficiaries with little or no additional retirement income could be pushed into poverty as a result of COLA cuts. This could be a particular problem for single (widowed, divorced, or never married) elderly women who already have one of the highest poverty rates of any population subgroup in the nation. In 1994, 22 percent of single women aged 65 or older lived in poverty, and another 12 percent had incomes between 100 percent and 125 percent of the poverty line. As more beneficiaries fell into poverty, more would become eligible for government-provided safety net programs, such as Supplemental Security Income (SSI). Increases in the costs for these safety net programs would partially offset the savings to Social Security from the COLA reductions. The benefits of those who continue to work after age 62 are recomputed to account for their new earnings, even if they receive benefits while working. If their current earnings are larger than the smallest earnings currently used in calculating their current benefit level, the new earnings will replace those smallest earnings, and their benefits and those of their dependents will increase for all future years. Another way to reduce future program costs would be to limit the recomputation of benefits, which could be done by allowing recomputation of the benefits of only those who did not receive any benefits during the year they worked; capping benefits at the maximum benefit payable to someone in that worker’s birth cohort who first drew benefits at age 65, adjusted for subsequent COLAs; or applying any benefit recalculation only to the worker’s own benefit and not to any dependent benefits based on his earnings record. However, those who currently work and receive Social Security benefits could argue that they are paying payroll taxes on their current earnings and that these earnings should be included in the benefit recalculation if it is to their advantage. 
The earnings test was originally designed to control program costs by ensuring that only those who lost their earnings because of retirement would receive benefits. However, the earnings test has been relaxed many times over the past 60 years. This relaxation of the earnings test has been very costly to the program. SSA estimates that, in 2000, it will pay about $80 billion to working beneficiaries and their dependents, about 20 percent of the program's estimated total benefit expenditure. This does not mean Social Security benefits would be reduced by $80 billion yearly if a draconian earnings test were reintroduced, however, because many of those who currently work and receive benefits would choose to forgo their earnings rather than their benefits. The earnings test could be strengthened by (1) reducing the threshold at which the test first applies, (2) increasing the amount Social Security benefits are reduced for each dollar of earnings above the threshold (or reducing benefits by a given percentage of earnings above the threshold), or (3) increasing the age at which the test no longer applies, perhaps in line with any increase in the NRA. Disallowing dependent benefits for those who were not dependents when the beneficiary became entitled to his or her current benefits is another means of controlling the growth in benefits after entitlement. Exceptions might be made for newly born children who were being carried by a pregnant beneficiary or spouse when the beneficiary became entitled to benefits and for dependents who are not yet eligible for auxiliary benefits because they do not yet meet all eligibility requirements, such as age requirements. Some have suggested reducing program costs by means testing Social Security benefits. To an extent, means testing is already being done via the income tax on benefits and the earnings test. Means testing via these options could be enhanced as discussed earlier in this chapter. 
Benefits for some could also be eliminated or reduced further by more traditional means testing, which would act essentially as a tax. Means testing works by determining whether a beneficiary has other income above a specified threshold and then either eliminating the benefit if the “income from other sources” threshold is exceeded (implying an infinite tax rate) or reducing the benefit according to some formula related to how much the other income exceeds the threshold (the formula determines the tax rate, which could be 100 percent or even higher). A means test need not be based on all the non-Social Security income of a beneficiary. Social Security benefits could also be reduced, regardless of the beneficiary’s gross income level, if the beneficiary had income from a specified source, such as savings income or a pension—an alternative already being used to reduce the Social Security benefits of many federal, state, and local government workers who receive pension benefits from employment not covered by Social Security. But a means-test tax could lead to economic inefficiencies by changing individuals’ behavior. For example, if having any other retirement income could cause a reduction in Social Security benefits, some workers might be reluctant to save for retirement, whether through employer pensions, individual savings, or any other means-tested vehicle. Such workers might prefer to spend their earnings before they retired rather than have their saved earnings reduce retirement income they otherwise would have received. Such a reallocation of consumption from the future to the present could reduce our already near-historically-low national saving rate. This type of behavior can be seen when people shift their income, assets, or both to family or other entities so they can qualify for government-provided Medicaid, SSI, or long-term care. Means testing benefits would eliminate or further reduce Social Security benefits for many higher-earning beneficiaries. 
But these individuals tend to pay the largest amount of payroll taxes and receive the smallest percentage return on those contributions. Moreover, means testing the benefits of these individuals could undermine their political support for the Social Security program, and their support is essential if Social Security is to maintain its financing and benefit structures. Although Social Security’s long-term financing problem could be addressed without significant change to the primarily pay-as-you-go approach currently in use, some have proposed that the solvency problem could be better addressed with greater reliance on advance funding. Two main mechanisms for advance funding exist within Social Security’s government-managed structure: advance funding through a buildup of Treasury securities and advance funding through government investments. Currently, the Treasury issues its securities to the Trust Funds in exchange for the program’s excess revenues. These securities are backed by the U.S. government and have virtually no risk of default. However, they also represent obligations the government issues to itself. From the Social Security Trust Funds’ perspective, these securities represent program assets—they signify a reserve budget authority that can be used to meet future benefit obligations. However, from the perspective of the rest of the government, these securities are not assets but claims against the Treasury. One method of advance funding Social Security would essentially retain the program’s current financing, Trust Funds, and benefit structures. Indeed, the current program is already building up a sizeable, but temporary, level of assets that could be used to pay some of the benefits the baby boom generation will need once it retires. The degree of buildup could be enhanced by increasing the program’s excess cash revenues through increasing revenues or decreasing expenditures. 
For example, program revenues could be raised by increasing the total payroll tax by 2.19 percentage points and “investing” all excess revenues in Treasury securities. This change would increase the Trust Funds’ buildup and extend the program’s solvency by more than 40 years. However, at the end of the 75-year period, the Trust Funds would be expected to contain only about 1 year’s worth of benefits. Figure 3.1 compares the estimated impact of a 2.19-percentage-point increase in the payroll tax rate on the Trust Funds with the expected impact under the current payroll tax rate. This change would result in higher excess program revenues in the near term and a maximum Trust Funds balance-to-expected-expenditure ratio that would almost double from about 3.2 under current law to 6.35. But this higher Trust Funds-to-expenditure ratio would present a formidable challenge to future Congresses when they needed to redeem these assets. Increasing the program’s excess revenues and, thus, the amount of Treasury securities held by the Trust Funds could exacerbate the concerns that are voiced today about whether the monies in the Trust Funds are really saved. The Treasury uses the cash received from issuing securities to the Trust Funds to finance other government activities, thereby reducing the Treasury’s need to borrow from the public. Some are concerned that this action both masks the size of the deficit in the non-Social Security component of the federal budget and allows the Congress to spend these Social Security revenues on other programs in the short term without addressing the long-term consequences of this action. Under these conditions, the improvement in Social Security financing would not contribute to increased national saving. It would only allow the Trust Funds to build up more claims against the Treasury without enhancing the nation’s future ability to meet these increased claims.
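The link between a higher payroll tax rate and a higher peak balance-to-expenditure ratio can be illustrated with a toy projection. Every parameter below (taxable payroll, cost rate, interest rate, growth, starting balance) is a hypothetical placeholder, not an actuarial estimate; only the direction of the effect is the point.

```python
# Toy projection of a trust fund's peak balance-to-expenditure ratio under
# two payroll tax rates. All parameters are hypothetical placeholders
# (figures notionally in billions), not actuarial estimates.

def peak_ratio(tax_rate, years, payroll=4_000.0, cost_rate=0.13,
               interest=0.03, growth=0.04, balance=700.0):
    """Roll the fund forward and return the highest ratio of year-end
    balance to that year's expenditures."""
    peak = 0.0
    for _ in range(years):
        expenditures = cost_rate * payroll
        balance = balance * (1 + interest) + tax_rate * payroll - expenditures
        peak = max(peak, balance / expenditures)
        payroll *= 1 + growth
    return peak

# Raising the tax rate by 2.19 percentage points raises the peak ratio,
# mirroring the direction of the change shown in figure 3.1.
current_law = peak_ratio(0.124, 40)
higher_tax = peak_ratio(0.124 + 0.0219, 40)
```

Because revenues are higher in every year while expenditures are unchanged, the higher-tax path dominates the current-law path year by year, which is why the peak ratio must rise.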
One way that a buildup of Social Security’s excess revenues could contribute to national saving would be to use these revenues to buy down the nonfederally held portion of the gross debt (the debt held by the public). This action would not only free up resources and allow them to be used more productively in the private sector of the economy but also reduce the size of future cash interest payments the government would otherwise have to pay. The resulting enhanced economic growth could increase the size of the future economy and make the government’s efforts to collect taxes or borrow to fund future Social Security benefits easier than if the economy had not grown. However, if, after a number of years, Social Security’s excess revenues were more than sufficient to pay off the nonfederally held portion of the national debt, then additional productive means of investing these excess revenues would have to be identified. Advance funding Social Security through increasing purchases of Treasury securities would allow the current, familiar benefit structure to be maintained. Benefits could still be determined using the progressive benefit formula, which provides relatively higher benefits to those with low average lifetime earnings than to those with high average lifetime earnings. The protections beneficiary families now experience through disability, dependent, and survivor benefits could also be retained. Thus, the adequacy focus of the current program could be maintained. However, if benefit cuts were a part of the reform package, the adequacy goal of the program could be weakened. Additional excess revenues created by financing reform could also be invested by the federal government in the private equities market. Such a move would have two distinct advantages over using these excess revenues to purchase Treasury securities. 
First, insofar as Social Security’s excess cash revenues were invested in the private equities market, they would not be available to the federal government for other expenditures. Second, these investments could improve the rate of return the Trust Funds earn because, over the long term, investments in equities have historically outperformed investments in Treasury securities. However, such investments, while offering the opportunity for greater returns, also carry higher risks. For example, equity investments could expose the federal government to the risk associated with asset loss should there be a general market downturn. Should the Trust Funds’ equities need to be quickly liquidated to pay benefits, there is no guarantee of the prices they would bring. In contrast, Trust Funds Treasury securities can be readily liquidated, should the need arise, with no uncertainty about their value. From a federal budget standpoint, investing Trust Funds in the private sector would increase the federal deficit (or reduce the surplus), because the purchase of equities would be counted as an outlay under current budget rules; therefore, the funds used to purchase these equities would no longer be available to the rest of the federal government. If the deficit in the non-Social Security portion of the federal budget was not otherwise eliminated, the government would need to borrow an additional sum, up to the amount of the program’s excess revenues, from the public to pay for all its then-current expenditures. However, the increase in the federal deficit that would result from borrowing additional monies from the public would not increase the federal government’s total debt; the Treasury securities would simply be held by the public rather than by the Trust Funds. Equity investing by itself would not change the impact of federal finances on national saving if the equity purchases were offset by an equivalent issue of Treasury securities to the public.
In the short term, such an asset shuffle could result in higher equity prices and higher interest rates. Even with higher equity prices, however, the returns to equities would generally be expected to remain above the rates of return from investing in Treasury securities. The increase in interest rates would raise interest income from new Treasury securities held in the Trust Funds, but it would also raise future interest expenditures for the non-Social Security component of the federal government. Equity investing would necessarily result in additional administrative costs for handling the investments: costs for hiring and training a staff to carry out the daily operations of the organization that oversees these investments, hiring a board and financial advisers to determine how to invest the Trust Funds, hiring fund managers to be responsible for actually investing the funds, and hiring and training staff to carry out certain oversight responsibilities. However, the increase in the government’s costs could be manageable because the majority of the operating and administrative needs of such a modified Social Security program are already in place. Other concerns about government investing in the equities market are that (1) the funds might not be invested with the goal of minimizing risks and maximizing returns; (2) the government might be tempted to steer these investments for politically motivated purposes, such as aiding financially troubled companies or industries or achieving socially desirable purposes; and (3) even if the government did not select an equity portfolio on the basis of political or nonfinancial objectives, the government might be able to affect corporate management decisions by exercising its stock voting rights. To minimize the first and second concerns and to control transactions costs, the government could direct its fund managers to select equities using a broad-based market equity index. 
However, the third concern would remain unless the government either assigned its stock voting rights to its fund managers or forbade itself from exercising these rights. In this latter case, the power of the voting rights held by the remaining large stockholder groups would be enhanced.

Most proposals to restore long-term solvency to Social Security include the creation of a system of individual accounts. Some proposals have the government managing the accounts, but others leave it largely to the individual to make the investment decisions. The key question raised by these proposals is how well individuals and households might do if part of their retirement income that now comes from Social Security depended on the performance of their individual accounts. Such a movement to individual accounts involves a trade-off between higher returns and higher risks. Historically, stocks and bonds have yielded higher returns than the implicit return that current workers can expect from Social Security. Nevertheless, consideration should be given to the added risks associated with individual accounts. The Congress would need to decide how the social adequacy goal would continue to be met under such a system and determine how the social insurance elements of the current program, such as disability and survivor benefits, would be provided. Implementing individual accounts raises other issues as well. Making the transition to advance funded individual accounts would require some to “pay twice”—once for current beneficiaries’ retirement benefits, and once for their own. In addition, major issues, such as whether beneficiaries would be required to annuitize their accounts and what changes would be necessary for administering the program, would need to be addressed. These issues would need attention regardless of whether the accounts were managed by individuals or by the government.
Individual account systems generally aim to add to the retirement income provided by Social Security. Proponents of individual accounts argue that the returns to payroll taxes have fallen and will continue to do so. Returns in the early years of the program were high because, for adequacy reasons, the benefits received far exceeded what could be justified given the contributions the earliest retirees made to Social Security while they worked. As the program matured and workers spent increasing time in the covered workforce, the high initial benefit subsidies declined as did the implicit rate of return on contributions. At the same time, because average real returns to stocks and bonds are higher than the return from Social Security, individuals have the potential to be better off if their contributions to Social Security are invested in individual accounts. There have been a number of studies aimed at demonstrating the advantages of individual account proposals. The Advisory Council presented, for various individual and household configurations, estimates of the returns on contributions for its three proposals. In general, the estimates suggest that the PSA plan, which most closely represents the annuity-welfare concept, might provide superior retired worker benefits for many individuals. (See app. II.) A primary concern in moving to individual account plans is the increased risk to the security of retirement income. Historically, Social Security has offered near certainty regarding benefit receipt. The uncertainty that can surround the amount and lifelong receipt of nonannuity, privately provided retirement income is, in fact, one of the major rationales for public provision of retirement income. Individual accounts introduce elements of market risk and other risks currently borne by the federal government. 
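One way to see the market risk just mentioned is the arithmetic of return sequences: two workers who make the same contributions and earn the same set of annual returns, but in a different order, end up with different accumulations. The figures below are hypothetical.

```python
# Sequence-of-returns arithmetic: identical contributions and identical
# annual returns, in reverse order, produce different accumulations.
# All figures are hypothetical.

def accumulate(contributions, returns):
    """Grow a stream of annual contributions through a sequence of returns
    (each contribution is made at the start of its year)."""
    balance = 0.0
    for contribution, r in zip(contributions, returns):
        balance = (balance + contribution) * (1 + r)
    return balance

contributions = [1_000.0] * 4
gains_late = [-0.10, 0.00, 0.10, 0.20]    # bad years first, good years last
gains_early = list(reversed(gains_late))  # same returns, opposite order

# The worker whose good years come late, when the balance is largest,
# retires with more, despite an identical contribution and return history.
```

This is the mechanism behind the “winners” and “losers” discussed below: outcomes depend not only on average returns but on when in a career those returns arrive.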
Markets are volatile, and, while they can generally be expected to provide better returns than bonds over the long term, they have had periods of substantial downturn that lasted for some years. Pensions hold a majority of their assets in stocks, and even individuals hold substantial amounts of their savings in accounts that are invested in stock market equities—such as IRAs, mutual funds, voluntary 401(k) plans, and so on. Thus, if a significant portion of Social Security income also depended on the market’s performance, a broad and long-lasting market downturn could have a negative impact on a large portion of retirement income. Even if the market experienced no dramatic or long-lasting downturns, the normal market cycles will create “winners” and “losers,” depending on when and how workers invest their “Social Security” assets in the market and when they liquidate their holdings. Individuals with similar work histories could receive substantially different benefits. As long as workers are aware of and accept this risk, there will probably not be calls to fix the “unfair benefit outcomes.” However, if such large differences in outcomes become commonplace, many participants could become dissatisfied with the program. If individual account proposals were implemented, the question of how to preserve the goal of income adequacy would need to be answered. Many proposals based on the annuity-welfare model seek to minimize the redistributive aspect of Social Security and focus on providing a basic income floor or minimum benefit. Thus, one issue involves determining the appropriate level of “social adequacy” for the social insurance system. Proposals for individual accounts focus primarily on the retirement benefits portion of the program, but the current Social Security system also includes ancillary benefits that may not be easily obtained or duplicated in the private market. 
It is important, then, to consider how creating individual accounts would affect these other elements of the benefit package—in particular, disability benefits and benefits for dependents (spouses, children, and survivors). Social Security also has important interactions with other retirement income sources: pensions, personal savings, and earnings play substantial roles in determining the level of income that individuals and households will have in retirement. The annuity-welfare concept of social insurance leads to questioning the appropriate role for the government in providing retirement income. The emphasis under this approach is on separating the annuity part of the program, in which benefits are directly linked to contributions, from the redistributive or welfare part of the program, in which the benefits of the less fortunate are raised to a “more adequate” level. The existing Social Security program embodies the idea that these decisions should be made jointly in the context of a universal program of retirement income (social) insurance. Ascertaining the real difference between these opposing conceptions of social insurance may be difficult, but a key part of the difference relates to the “process” for deciding the relative importance given to the components of redistribution and contributory insurance. While under each of these concepts the political process would sort out the relative importance of the components, the main thrust of the annuity-welfare view is to make the redistributions more explicit—that is, more visible to program participants, voters, and political decisionmakers—than is the case under the existing structure. While discussions of social adequacy often address the poverty issue, it does not necessarily follow that these discussions determine the level of support that should be provided. 
The provision for retirement income spans an individual’s entire lifetime, and it is particularly important to consider various incentive and efficiency effects of any social adequacy level that is provided. The obvious consideration is that if the safety net benefit level is set too high, then work and savings disincentives could arise, and some workers could be encouraged to “free ride.” But if the level is set too low, then some individuals could live out their retirement years in extreme poverty. Incentive effects are a major rationale for the contributory aspect of Social Security. An individual’s benefit must be “earned” by making contributions. In considering the social adequacy level in the context of the program structure, several ideas have been advanced. Some favor a better targeting of the redistributive component through means testing. One idea behind proposals for means testing is that the existing design of Social Security provides benefits to all income groups, and often the redistributive aspect is not well focused on the needy. Advocates for the existing structure of Social Security point out that the program is, in fact, designed to avoid the pitfalls of means testing, which create both stigma and work and savings disincentives for low earners. Some proponents of the annuity-welfare concept have raised the idea of a flat benefit, or “demogrant.” Since everyone would receive the demogrant, many of the work disincentive effects would be minimized, particularly if the demogrant was not set at too high a level. With the demogrant, the redistribution would be addressed in a way that was visible politically. It could even be financed with general revenues. This type of financing would be consistent with strengthening the linkage between contributions and benefits in the annuity part of the program. Also, the demogrant could avoid the stigma that means testing would introduce, since it would go to everyone. 
The current program and the demogrant approach are similar in their effects, with the major difference being how the decision about the social adequacy level is arrived at in the political process. The fundamental issue for social insurance, then, is what level of social support society wants to provide to its elderly. Even providing a level of support far below the poverty level is likely to carry substantial cost. Another important aspect is the notion of minimizing the stigma that is usually associated with the receipt of transfers (that is, “welfare”). Also, an important consideration that is often overlooked is the role of the SSI program. Depending on the design of reforms, the existing SSI program might be expanded to serve more people. Proposals could be devised to include a demogrant, which might absorb the role played by SSI.

Disability and dependents’ benefits are often not included in the discussion of individual accounts because it is, in principle, possible to separate them from retirement benefits. Retirement, disability, and auxiliary benefits, respectively, account for approximately 68.1 percent, 10.5 percent, and 21.4 percent of all benefits paid. Separating the “price components” of the various parts of Social Security would mean that disability and auxiliary benefits could be maintained in the presence of individual accounts for a part of the retirement benefits portion of Social Security. However, it would also imply that the administrative apparatus of Social Security, including the reporting of earnings by employers, would have to be retained. There is also the question of whether the disability and dependent portions of OASDI could be better provided through private markets. Disability insurance is provided by private insurers and through group insurance arrangements financed by employers.
However, a key feature of the benefits provided by Social Security is that they are universal—that is, they are available to everyone regardless of age or occupation. This would generally not be the case with individual disability insurance policies, and even the current employer-provided group arrangements might be subject to certain restrictions. A voluntary private disability insurance program, combined with insurers who might want to avoid the problem of adverse selection, suggests that comprehensive disability protection would be available to some only at a high price. At the same time, it is difficult to assess how private markets might perform in providing various insurance substitutes given that Social Security today plays such a major role in providing such benefits. If the private sector were to play a larger role in providing disability benefits, it might be necessary to enact laws that require private providers to offer certain benefits or features. An example of this is the recent preexisting condition legislation in the health care area. While disability benefits would largely be unaffected under the Advisory Council’s MB proposal, the IA and PSA proposals reduce these benefits. Under the IA proposal, the essential structure of DI would remain intact, but the benefits for DI beneficiaries would be reduced because individual investment benefits needed to offset the reduction in program benefits would not be available until age 62. DI benefits would also be heavily affected under the PSA proposal and could be reduced by as much as 30 percent from today’s DI benefit levels. These DI beneficiaries would not have access to their individual accounts until age 65, the proposed early retirement age under the PSA proposal. With respect to dependents’ benefits, individual account proposals imply reduced spousal benefits. With individual accounts, much of a person’s retirement benefit would depend on how well his or her own investments performed. 
Thus, unless spouses had their own individual accounts, they could be worse off than under current law. Those spouses who would not accumulate substantial assets in individual accounts might be eligible for a reduced spousal benefit or a demogrant. But it is also important to realize that the role of spousal benefits within the existing program structure may be declining in importance because of changes in women’s labor force participation. Survivor benefits would also be affected under proposals to create individual accounts. Currently, when a retired worker dies, the dependent spouse is eligible for a survivor benefit if it is higher than his or her own retired worker benefit. With individual accounts, the survivor could inherit the asset accumulation in the retired worker’s individual investment account. These assets could supplement any other Social Security benefits the survivor might receive. However, the deceased worker could also bequeath these assets to others. Even if these assets were left to the surviving spouse, the survivor could have a lower or higher benefit amount than under current law, depending on the survivor’s individual circumstances. The IA proposal would lower spousal benefits in order to increase survivor protection for two-earner couples. The post-WWII era has seen a general rise in living standards and a substantial evolution in the retirement income system. Social Security has provided the foundation for the retirement living standard of the population and has largely fulfilled its original intent in alleviating elderly poverty. But private pension coverage has also increased and now provides a substantial portion of retirement income for many of today’s elderly. Increases in home ownership and personal savings have meant greater wealth in retirement for many households. Incorporating individual account features in Social Security would have important implications for the entire framework that provides retirement income to the elderly. 
The debate over how to resolve Social Security’s financing requires recognition of the broader changes that may take place in response to any actions taken. Here we suggest but a few of the issues that might arise. The existing private pension system has traditionally provided a voluntary, private source of retirement income. Creating individual accounts is essentially aimed at further expanding the role of private institutions in providing retirement income. If this role were expanded, it is hard to imagine that the existing private pension system would not be affected. One obvious change would be in private pension plans’ “integration” with Social Security. Currently, some employers agree to provide a benefit that is adjusted by any amount received through Social Security. If Social Security benefits were reduced, then private employers with integrated plans might experience an increase in their pension costs that could prompt them to redesign their plans. It is also unclear how workers’ personal savings behavior might be influenced by a new system involving individual accounts. Economists have long debated various theories of savings behavior in the context of the effects of Social Security. This debate has largely focused on the theory that the promise of Social Security benefits would be viewed by individuals as a form of “social security wealth” that could result in lower saving. This fundamental debate about the behavioral effects of social insurance on personal saving is ongoing although, on balance, the prevalent view is that funded social insurance is more likely to be consistent with higher saving. Much recent research focuses on the effects of individual savings plans, such as IRAs and, particularly, 401(k) plans, and this research may yield useful insight into the possible effects of introducing individual accounts. Finally, individual accounts could affect workers’ decisions on when to retire. 
A number of factors affect an individual’s decision to retire. If the age at which a worker becomes eligible for full benefits is further increased, individuals might stay in the workforce longer. If individual accounts fulfill their promise of higher levels of retirement income, then workers may retire early despite the increase in the retirement age. Because by definition individual accounts are advance funded, a significant shift toward such a system would raise transition questions. The practical problem that would occur is that, because most of the benefit obligations of current retirees and workers are unfunded under pay-as-you-go, any diversion of current workers’ taxes to fund their own benefits would leave less with which to pay current and accrued retirement benefits. As a result, current workers would need to be asked to “pay twice”—once for the accrued benefits of current and future retirees and again for their own retirement benefits. In subsequent generations, workers would have to pay to fund only their own benefits. Although advance funding is generally associated with individual accounts, advance funding could be introduced without them. The system is already partially advance funded, and government’s investing a part of the Trust Funds in the stock market would represent an increase in advance funding. If the current program was terminated and a new fully advance funded one that included all current and future workers was started, the amount necessary to pay the accrued benefit obligations under the current Social Security system would be about $9 trillion. In principle, transition costs do not pose any greater cost than already exists under Social Security. Concerns about transition costs arise primarily because of the timing of paying for benefit entitlements. Under pay-as-you-go, the costs of paying accrued benefits occur in the future and represent an unfunded promise—a type of implicit debt. 
In moving to an advance funded system, the future benefits, which would need to be paid for eventually, would be recognized today. Paying these benefits could involve significant payroll tax increases or a sizable increase in government debt. Making a transition to an advance funded system would also present political difficulties. Reneging on benefit obligations or requiring current workers to “pay twice” could significantly disadvantage many individuals, and showing that a funded system was superior to pay-as-you-go would do little to ease the pain. It has been suggested that the current pay-as-you-go program has created a “lock-in” effect that was largely intended when the program was designed. That is, transition costs could prevent individuals from supporting a potentially superior alternative that offers higher benefits or returns until the benefits under the existing system become considerably worse than they would be under the alternative. Voters could continue to support the current program structure, which could increase the political costs of making a transition, and the existence of strong interest groups could add further to the political costs and difficulty of making such a transition. Nonetheless, the problem of “paying twice” could be mitigated in several ways. One way would be to reduce accrued benefits for the current retired generation. The IA plan attempts to address the long-term financing problem, for the most part, by reducing future benefits, including those already accrued, for current workers. Another way to mitigate the economic and political costs of transition for a particular generation of workers would be to push the costs of accrued benefits into the future. This would be similar to what a pay-as-you-go system does. Two ways to avoid putting responsibility for all of the transition costs on one generation of workers would be to levy special taxes on the entire population or to finance the transition through borrowing. 
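The borrowing option just mentioned rests on standard level-payment amortization arithmetic, which shows how a one-time transition cost can be spread over many years of smaller payments. The principal, interest rate, and horizon in this sketch are illustrative placeholders, not estimates of actual obligations.

```python
# Level-payment amortization: borrowing converts a one-time cost into a
# stream of smaller annual payments. Principal, rate, and horizon below
# are illustrative only.

def level_payment(principal, rate, years):
    """Annual payment that retires `principal` at interest `rate` over
    `years` end-of-year payments (the standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

# A single year at 5 percent costs principal plus one year's interest
# (about 1,050 on a 1,000 principal).
one_shot = level_payment(1_000, 0.05, 1)

# Spreading a $9 trillion obligation over 30 years at 3 percent yields an
# annual payment far smaller than retiring the obligation at once.
spread = level_payment(9e12, 0.03, 30)
```

The annual payment always exceeds interest-only service but falls well short of the full principal, which is the sense in which bond financing shifts part of the burden to future taxpayers.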
Under the first approach, the accrued obligations of the Social Security system that came due, and that were in excess of the financing available, could be financed by levying an array of taxes. Each type of tax that could be used would have different impacts on individuals and the economy. A payroll tax would be consistent with the current financing of the program, but because of its regressive impact on lower earners, it might not be seen as desirable. Income taxes could reduce the impact on lower-earning workers and families but could have undesirable effects, such as increasing taxes on savings. Levying taxes to pay the accrued benefits, moreover, could still leave a substantial burden on the current generation of workers. Another way of financing the accrued obligations in transition to an advance funded system would be to use government borrowing. The government would issue bonds to finance the payment of benefits, and the bonds would be paid off in the future, which would spread the cost to future generations. Because the interest and principal on the bonds would be paid with future taxes, the use of bond financing would be a more effective way of spreading the cost of the transition than taxation. There are a number of ways to implement bond financing. One of the earliest ideas was “recognition bonds,” which could be issued to individuals in recognition of the government’s intention to honor its benefit obligations. Individuals could redeem the bonds at retirement to provide a retirement benefit.

Social Security is structured in a way that, upon reaching eligibility, workers receive a monthly benefit—that is, an annuity—for the remainder of their lives. With individual account plans, the worker might be able to choose one of several options for receiving benefits. Depending on the plan design, an individual’s account accumulation could be converted to an annuity, taken as a lump sum, left in place, or used for any purpose desired.
While this approach offers greater freedom of choice, it also raises several concerns. One concern is whether the account accumulation would be intended to constitute a source of retirement income as opposed to simply a savings accumulation device that might not be fully used for retirement income. Another concern is the process of annuitization itself. Obtaining annuities individually or on a group basis in the private market could be more costly than having the government provide them. Related to this issue is the question of whether an individual could obtain an annuity that provided features similar to those currently provided by Social Security. One of the major goals of proponents of individual account plans is to ensure that individuals have as much freedom as possible in choosing how to allocate their own resources. Individual accounts can offer a large amount of freedom and choice and, in principle, there is no inherent reason why individuals should be required to receive their retirement income through an annuity. Thus, a fundamental issue of retirement income policy is how much the individual’s choice should be restricted in order to ensure that he or she does not become a burden on society in old age. The social insurance approach seeks to provide a “socially adequate” benefit, not a minimal benefit, that protects a substantial portion of the preretirement living standard. This perspective suggests that restricting an individual’s choice is justified to achieve a more socially desirable outcome. This perspective is embodied in proposals that restrict the individual to investing through the government and require annuitization of the account accumulations so that there is more certainty that an adequate retirement living standard will be achieved. An additional complication arises when individual borrowing provisions are considered as a feature of the individual account plans. 
This is an important issue with private pensions, particularly with 401(k) plans. Restricting borrowing from the accounts helps ensure that the funds constitute retirement income. Allowing borrowing provides more freedom of choice but does not ensure that the accounts will be used for retirement. In this case, the accounts of many might represent more of a tax-deferred saving vehicle than a retirement saving vehicle. A second major concern is whether those individuals who chose to convert their account accumulations to retirement income by purchasing an annuity would actually be able to do so. One of the major advantages of Social Security is the provision of a lifetime annuity. Private pension plans also provide an advantage because they are able to offer annuities through group arrangements. But those who have an individual account plan might have to obtain annuities in the individual annuity market. While individual annuities are available, they can be costly, especially relative to annuities provided through Social Security. This issue is compounded, since private annuities might not generally contain the same features as a Social Security annuity—features such as dependents’ benefits, inflation protection, and the use of unisex life tables (that is, the same assumed mortality rates for both men and women). It is difficult to predict, however, what would occur if an individual account system were put in place. Some retirees would prefer individual annuities over other payment options, and competition among financial institutions to provide such annuities would ensue. This could be a positive force in driving down the cost of annuities, but the possibility remains that firms would compete for what they perceived to be the best risks, which in this case could be those who are likely to have shorter lifetimes. 
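The cost difference between group-priced and individually purchased annuities can be made concrete with a simplified pricing sketch. Every figure in it is assumed for illustration: the flat 20-year survival curve, the 5 percent interest rate, and the 10 percent expense "load" standing in for the higher cost of annuities bought in the individual market.

```python
def annuity_factor(survival_probs, rate):
    """Actuarial present value of $1 paid at the end of each year,
    conditional on the annuitant surviving to that year."""
    factor, alive = 0.0, 1.0
    for t, p in enumerate(survival_probs, start=1):
        alive *= p                        # probability of surviving to year t
        factor += alive / (1 + rate) ** t
    return factor

def annual_payment(accumulation, survival_probs, rate, load=0.0):
    """Payment an insurer could offer; `load` models expenses and profit."""
    return accumulation * (1 - load) / annuity_factor(survival_probs, rate)

# Hypothetical inputs: a $100,000 accumulation, a flat 96 percent
# year-to-year survival probability over 20 years, 5 percent interest.
probs = [0.96] * 20
group_priced = annual_payment(100_000, probs, 0.05)             # no load
individually_priced = annual_payment(100_000, probs, 0.05, load=0.10)
```

Under these assumptions, the 10 percent load alone trims the annual payment from about $11,250 to about $10,124; features such as inflation indexing or dependents' benefits would reduce it further.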
Competition directed at avoiding adverse selection problems could result in market imperfections wherein certain individuals might not be able to obtain annuities at reasonable cost, and this might lead to calls for legislation or regulation restricting the ability of financial institutions to deny individuals an annuity contract. The government would potentially play some role in either (1) ensuring that insurance markets worked efficiently or (2) continuing to provide annuities when private markets failed to do so. Individual account plans would raise a number of implementation issues regarding the cost of managing accounts and investments and how to manage financial flows and protect investors. Such plans would require creating financial accounts for each worker. This would be a huge undertaking, although arguably it should be feasible since existing financial markets and SSA are already able to handle large numbers of individuals and transactions. Nevertheless, depending on the design of the program—whether the accounts were managed by individuals or were managed for them by the government—the scale of new resources required could be large and could imply a significant expansion of the administrative structure of either the current program or investment firms (and employers) in the private sector. There are significant differences in the relative costs of publicly managed social insurance systems and privately managed individual account arrangements. Social Security is a large, centrally managed public system. The costs of Social Security were high relative to benefits paid during the early years of the program. As benefit payments have grown, the administrative costs of OASI as a percentage of expenditures have fallen to the rather low 0.6 percent of benefit payments experienced today. Administrative costs for individual account plans would depend greatly on the specific design. 
There would be initial costs in setting up the necessary systems. But the experience with Social Security suggests that while moving to an individual account system might involve large start-up costs, the ongoing costs of the system might fall as a percentage of assets as the accounts grew over time. The ongoing costs of an individual account system would involve two major elements: the cost of managing and maintaining the accounts (that is, record keeping costs) and the costs associated with investing funds. Concerns have been raised about the amount of funds that would be held in the individual accounts and how transaction and administrative costs would affect them. It has been noted that the account balances for many individuals could be quite small and that there could be a large number of rather small transactions. These factors could make it costly for private institutions to maintain the accounts. If the administrative and transaction costs were charged to individual account holders, they could greatly reduce, or even eliminate, any gains small accounts might otherwise receive. This could be one area in which a government-managed individual account plan might have an advantage. The government would be able to collect deposits through the existing payroll tax collection system and perhaps reduce transaction costs. However, it is not certain that the size and number of individual accounts would be a significant problem for the private sector, which already manages 401(k) plans that are similar to the individual account plans proposed. Another issue concerns the specification of investment alternatives for the accounts. Under a privately managed system of individual accounts, individuals or employers might contract directly with financial institutions. This could mean a wide array of investment choices for individuals and, at the same time, a wide variation in potential financial outcomes. 
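The disproportionate effect of fixed administrative charges on small accounts can be illustrated with a simple simulation. All figures are assumptions chosen for illustration, not estimates of actual plan costs: contributions of $200 versus $2,000 per year, a 5 percent gross return, a $30 flat record-keeping fee, and a 0.5 percent asset-based fee.

```python
def accumulate(contribution, years, gross_return, flat_fee=0.0, asset_fee=0.0):
    """Balance after `years` of annual contributions, with an asset-based
    fee and a flat record-keeping charge deducted at the end of each year."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + contribution) * (1 + gross_return)
        balance -= balance * asset_fee + flat_fee
    return balance

def fee_share(contribution, years=40, gross=0.05, flat=30.0, asset=0.005):
    """Fraction of the fee-free accumulation consumed by fees."""
    gross_balance = accumulate(contribution, years, gross)
    net_balance = accumulate(contribution, years, gross, flat, asset)
    return 1 - net_balance / gross_balance

small = fee_share(200)    # a low earner's account
large = fee_share(2000)   # a higher earner's account
```

Under these assumptions, the same fee schedule consumes roughly a quarter of the small account's accumulation but only about 14 percent of the large one, because the flat charge weighs proportionally much more heavily on small balances.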
Some individuals might not be familiar with basic investing strategies, and they would have to sift through a potentially large amount of information that financial institutions sent them as they competed for clients. Expanding investment education for workers has been suggested as one way to address this concern. However, who would provide this education is an open question. As already noted, financial institutions would incur costs in managing the accounts and would charge fees as part of making transactions. Data suggest that administrative expenses are higher in mutual funds that are more actively managed, whereas funds that are more passively managed—such as index funds, which tend to make fewer transactions—have substantially lower costs. The extent to which an individual account system would result in large transaction fees, as has been the experience in the early phases of the Chilean privatized system, is unclear. In estimating outcomes for the Advisory Council proposals, assumptions were made about relative administrative costs. Both the IA and PSA proposals assume that at least a part of the activities of the current Social Security program would continue after the new advance funding mechanisms were in place. Thus, much of the cost of the current program would be retained. The proposal recommending the larger individual accounts was estimated to have new administrative costs considerably higher than those of the proposal with more limited individual accounts. Implementing private account systems would also raise questions about the management of financial flows and how the individual investor might be protected, both of which relate to the role of monitoring and regulation of private account systems. Individual accounts could be maintained under the auspices of the government, and the financial flows would not need to change significantly. 
Employers could deposit the required contributions directly with the Treasury, and SSA could make the appropriate distributions to the individual accounts. SSA and the Treasury would have to arrange procedures for allocating the funds to accounts. If the system was run much like the government’s Thrift Savings Plan (TSP) for federal workers, the government would contract with a financial institution to manage several funds. However, given the size of the contributions involved, it would probably not be wise or feasible to have only a few institutions manage these assets. Thus, it would probably be the case that the range of institutions and investments would have to be expanded. As the number of financial institutions participating expanded, so would the administrative complexity. Arguably, a system of direct deposits from employers to financial institutions might be feasible and efficient. However, it is unclear whether a private system might still be more costly than funneling the funds through SSA, in part because a centralized operation might more efficiently handle such functions. Individual account systems could require substantial monitoring, as would any system of financial transactions. For example, transferring funds between employers and financial firms creates opportunities for fraud. Private pension plans are covered under the Employee Retirement Income Security Act of 1974 (ERISA), which provides a broad framework of pension law that includes codification of fiduciary responsibilities for handling pension assets, disclosure to plan participants, and other provisions aimed at protecting workers’ benefit rights. It is not currently clear whether individual account plans would need ERISA-like provisions, although many of ERISA’s provisions might prove useful in protecting individual account assets. 
Certain monitoring and regulatory concerns—such as those regarding the provision of investment advice and financial education for investors—would need to be addressed. Under a government-managed individual account plan, handling the accounts through the Treasury and SSA might require few additions to the current regulatory apparatus. However, under individually managed individual account plans, new or expanded monitoring and regulatory functions might be necessary. These functions would affect the cost of implementing the new individual account systems. Evidence suggests that regulatory requirements have added significantly to the cost of private pensions. While administrative issues are not necessarily decisive criteria for determining whether a new Social Security system should be implemented, they do represent an important consideration as reforms are debated. Some evidence suggests advantages from a centralized approach based on the Social Security model, but other evidence suggests that relying on individuals and their brokers has advantages in terms of efficiency and service to the participant. This aspect of the reform debate requires careful scrutiny and additional attention. Decisions to increase the advance funding of the Social Security system, whether or not accompanied by individual accounts, could have significant consequences for the federal budget. Changes to the status of the federal budget, in turn, could have implications for the level of national saving and future economic growth. Advance funding could be done either through the public or private sector, although advocates of privately held individual accounts believe that funding through private institutions is more likely to lead to capital formation and enhanced economic growth. 
Regardless of whether the advance funding was done through the public or private sector, the cost of the transition to the new system would need to be addressed, and the way the transition was accomplished could determine the impact of the shift to advance funding on national saving. Social Security’s current financing structure and the Trust Funds have important interactions with the federal budget and government finance. The status of the federal budget, in turn, can affect national saving and future economic growth, which could determine the ability of future workers to provide for their own retirements and for beneficiaries. The Social Security Trust Funds were designed to maintain a short-term contingency reserve, not to provide advance funding for future obligations. Amendments to the Social Security law in 1977 and 1983 have allowed the Trust Funds to accumulate a reserve beyond what is considered necessary to meet contingencies. This reserve, however, is still well below what would be needed for full advance funding. The Trust Funds’ excess cash revenues are, by law, invested in U.S. Treasury securities. In effect, these revenues are loaned to the Treasury, reducing the Treasury’s need to borrow from other sources to finance non-Social Security federal spending. The Social Security cash surplus is expected to remain at about $50 billion annually for another decade, after which the surpluses will get smaller. Without changes to current policy, the program’s cash surpluses are expected to disappear in 2013. To cover the subsequent annual cash shortfall, the Trust Funds will begin drawing on the Treasury, first relying on its interest income and, eventually, on its assets. This will have a direct and increasingly negative impact on the federal budget. By around 2032, the Trust Funds will be effectively exhausted—at that time, without government action, program revenues will pay only about 75 percent of total benefits. 
While the Trust Funds’ Treasury securities are assets of the Social Security program, they are also liabilities for the rest of the federal government that, when redeemed, will have to be financed by raising taxes, borrowing from the public, or reducing other federal expenditures. Thus, not only will the government no longer have access to Social Security’s surplus, but the need to cover the system’s cash shortfall could force difficult budget and tax decisions in the non-Social Security portion of the budget. The realization that there will be relatively fewer workers in the future to produce the goods and services to support not only themselves, but also a larger number of retirees, has led many to focus on the potential contribution of Social Security financing reform to long-term economic growth. Future national income and output depend on, among other things, the level of capital stock available. Capital accumulation, in turn, depends on national saving that can be used for investment. National saving is composed of personal saving by individuals, business saving (undistributed profits), and government saving. When the government runs deficits, it subtracts from national saving. National saving rates in recent years have been at historically low levels. A purely pay-as-you-go system has little, if any, direct effect on saving. The current Social Security system is running cash surpluses that reduce the size of the unified budget deficit and, all else being equal, should increase national saving. However, to the degree that the existence of the Social Security surpluses undermines fiscal discipline elsewhere in the budget, the potential positive effect on national saving is mitigated. If the non-Social Security part of the budget were balanced, the buildup in the Trust Funds would mean positive government saving and could result in larger national saving. These resources would be available for investment and could, presumably, enhance economic growth. 
Moreover, a larger economy could lighten the future burden of maintaining Social Security. Higher rates of economic growth would mean higher real wages and living standards, and future workers, even if they had to pay higher payroll tax rates to maintain benefit levels, would be in a better position to do so. The economic importance of advance funding is that it could foster saving. These savings would then be available for capital formation. Retirement programs, such as pensions, are essentially savings for long-term capital formation. Pensions transfer the portion of current income that is not consumed today into income that will be consumed in retirement. In an advance funded pension arrangement, the savings put into pension funds provide capital for business investment, and the returns generated accrue both to the businesses that invested the funds and to the individuals saving for their retirement. Thus, the return available to pension savers is related to the real growth of the economy, and pension saving provides an important basis for capital formation and economic growth. One of the objections to pay-as-you-go financing is that it is mainly a tax-transfer mechanism that extracts resources from current workers and redistributes them to current retirees and has no direct impact on saving. In order for the government to save and contribute to capital formation, it must extract resources from the economy and either invest them productively on its own or use them in a way that frees up other (private) resources for investment. Should policymakers choose to increase advance funding through the current Social Security program structure, the Trust Funds could continue to invest rising surpluses in Treasury securities. 
Under such policies, the federal government could use this capital to retire outstanding debt held by the public, thus freeing up resources to be invested in private sector capital, or it could undertake “public investment,” such as building or maintaining infrastructures, which could provide economic benefits that improve efficiency in other areas of the economy. Alternatively, Social Security Trust Fund investment policies could be altered to permit investing surplus funds outside the federal government, such as in the stock market. The various uses of surplus Social Security funds could have different impacts on national saving. When the government runs a budget surplus, resources have been taken out of the economy. If these resources were used to retire outstanding public debt, instead of to fund other government programs, some of the resources of investors who had purchased the debt would be freed up. To the extent that these funds were reinvested, the government would have increased private investment, which, in turn, creates the potential for higher economic growth. The ability of the government to retire debt would depend on congressional spending decisions. It is important to note that the annual Social Security surpluses themselves represent only a small fraction of the future unfunded promises of Social Security. To advance fund all these promises would require running much larger annual Social Security surpluses. Increasing public capital investment would require budgetary actions. Actual investments that could contribute to economic growth would have to be identified, and funds for them would have to be allocated in the budget. The traditional concern with public capital investments is that political processes introduce considerations other than purely economic returns into the decision-making process. Some believe such considerations can be used to impart a “social return” to a particular allocation of resources, which many view as highly desirable. 
However, there is disagreement about this, and others hold that such decisions may be less subject to the discipline of market forces and, hence, undermine rather than enhance economic efficiency and capital formation. A third way for the government to engage in capital formation would be to invest the Trust Funds’ assets in private securities. This would create the potential for larger Trust Funds, which could then earn higher returns, further improving the program’s solvency. The contribution of such a proposal to capital formation would be contingent on a number of factors. If the non-Social Security portion of the unified budget was in deficit, such an action would be unlikely to change national saving. The purchase of stocks could result in an equivalent issue of government bonds to provide substitute financing for the payroll tax revenues that would have been used concurrently to finance government expenditures. The most likely way for this proposal to represent funding that would lead to higher saving and capital formation would be in the context of a budget surplus, particularly one that had arisen from a balanced non-Social Security budget. But even if there were a budget surplus, the proposal would generate only a small portion of the amount necessary to fully advance fund future Social Security benefits. Such a Trust Funds investment proposal would also raise questions about how a large, government-controlled fund would be managed and whether political considerations would be introduced into the management of the funds or the entities in which the funds were invested. Mechanisms could be designed to limit involvement of the political process in allocating investments; however, it would be likely that such involvement could not be completely precluded. Many analysts believe that advance funding of Social Security would increase the likelihood that the resources contributed to social insurance programs would result in increased capital formation. 
While conceptually this could occur regardless of whether the funding was done publicly or privately, in practice, there may be important differences between public and private saving and investment decision-making. There is a substantial body of thought that questions the ability of political institutions to ensure that the resources raised through Social Security, in fact, contribute to national saving and capital formation, and some suggest that funding through private institutions or individuals is more likely to lead to increased saving and capital formation than funding through public institutions. The argument for private funding is essentially based on the notion that private markets allocate capital efficiently. The main motivation underlying private market decisions regarding investment is the generation of profit, or return. The discipline that the profit motive places on markets is key to the efficient allocation of capital. It is generally agreed that well-functioning, efficient markets are fundamental for healthy economic growth. Proponents of the annuity-welfare model believe that funding through private institutions would enhance the likelihood that resources devoted to providing for retirement income would lead to increased saving. Private saving, especially for retirement, involves legal arrangements that explicitly recognize ownership of, or benefit rights to, contributed resources. In the private pension field, such arrangements are backed by legal fiduciary restrictions and guidelines such as those that attempt to preclude noninvestment uses of saved resources. Advance funding Social Security would require that sufficient resources be allocated to generate a future expected benefit. Thus, when sufficient resources were allocated in advance and invested efficiently and productively in real assets, the likelihood that these resources would represent saving, contribute to capital formation, and generate economic returns would be maximized. 
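The return difference at the heart of the advance-funding argument can be sketched numerically. Under a mature pay-as-you-go system, the implicit return on contributions approximates the growth of the economy's wage bill, while advance funded contributions earn the return on capital. The 1.5 and 5 percent real rates below are assumptions for illustration only.

```python
def value_at_retirement(rate, years):
    """Value at retirement of $1 contributed `years` before retirement."""
    return (1 + rate) ** years

# Assumed rates: implicit pay-as-you-go return ~ real wage growth plus
# labor force growth; funded return ~ real return on capital.
pay_as_you_go = value_at_retirement(0.015, 40)
advance_funded = value_at_retirement(0.05, 40)
```

Under these assumptions, a dollar contributed 40 years before retirement grows to about $1.81 under pay-as-you-go but about $7.04 when invested; the comparison ignores investment risk and the transition costs discussed earlier in this chapter.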
While advance funding of retirement benefits is viewed as having economic advantages over pay-as-you-go financing, the choice between public and private institutions hinges on judgments about both the role of government and the relative weight given to the adequacy and equity goals. While increased advance funding of Social Security by the government could potentially have a positive impact on national saving, the impact would depend in part on what happened in the non-Social Security part of the budget. Advance funding through individual accounts could also have a limited initial impact on national saving, depending on how the transition was financed. If the transition was financed through more borrowing from the public, then the impact on national saving would be reduced. The transition to full advance funding could mean that one generation of workers would face a potentially staggering payroll tax rate. While this might result in an eventual rise in saving, workers’ consumption would significantly drop in the near term, which could negatively affect the performance of the economy for a considerable period of time. In addition, accumulating a significantly larger stock of capital could have implications for financial markets, and the prices of securities, the return to capital, or both could be affected. Thus, the advantages of advance funding hinge on the likelihood that the higher saving would result in increased productive investment and future economic growth. If so, long-term increases in the standard of living might be deemed to be worth the disadvantage of reduced consumption in the near term. Incorporating a lesser degree of advance funding suggests a lower transition cost. As noted in chapter 4, if the current program were terminated and a new fully funded one were started, the amount necessary to pay the accrued benefits of current workers and current beneficiaries would be $9 trillion. 
These unfinanced costs of the current system are the transition costs of moving to a fully funded system. The question is when, not whether, such costs will be addressed. Alternatives to paying for the full transition now include benefit reductions, tax increases, or borrowing. Extending the period of time over which such costs must be met would spread the burden over several generations. Social Security has provided the basis on which most Americans have built their retirement incomes for nearly 60 years. The program has been highly effective at reducing the incidence of poverty among the elderly, and the disability and survivor benefits have been critical to the financial well-being of millions of others. While the economy’s recent performance has extended the projected life of the Social Security Trust Funds, there is general agreement that Social Security’s revenues eventually will be inadequate to pay all promised benefits. The nation is now engaged in a debate about how best to ensure the long-term solvency of the program. A number of proposals have been put forward and, while they share the goal of restoring solvency, they contain significant differences reflecting alternative perspectives as to the appropriate structure of Social Security in the 21st century. The approach chosen by decisionmakers will affect nearly every American’s retirement income and could be critically important to the economic welfare of many, especially those relying on survivor and disability benefits. Moreover, the way we choose to address the financing issue also could have important implications for the long-term performance of the national economy. Many elements of the debate that surrounded the creation of the program in the 1930s are resurfacing today. 
The proposals that are being advanced not only address the relatively narrow question of how to restore solvency but also go to the larger question of what role Social Security and the federal government should play in providing retirement income. The proposed reforms all include both individual equity and income adequacy goals, but the balance struck between them differs widely. Today’s social and economic environment is very different from what prevailed in the 1930s when Social Security was enacted. Social Security originally was designed to replace a portion of earnings lost because of retirement or unforeseen circumstances. However, Social Security is now viewed by many as the most significant source of retirement income, and for many it is their only source. Because Social Security provides a lifetime annuity that is indexed for inflation, it becomes an increasingly important source as retirees grow older and exhaust other income sources. Supporters of the existing program argue that Social Security’s financing problems could be addressed without changing the current structure of the program. A combination of revenue increases and benefit reductions, similar to those that have been used in the past to preserve solvency, equal to about 2.19 percent of taxable payroll would be sufficient to restore long-term actuarial balance over the next 75 years. In addition, some supporters of maintaining the existing structure propose to invest a portion of the Social Security Trust Funds in the stock market to improve the flow of revenues. Our analysis shows that there are a number of adjustments that, in combination, could restore long-term balance while leaving the structure basically intact. Those who seek fundamental changes to the system do not believe that a sustainable solution to the financing problems can be found within the current structure of the program. 
They argue that any restoration of actuarial balance within the current pay-as-you-go structure will be short-lived, as demographic trends continue to cause future revenues to fall short of future expenditures. Maintaining the current system, they assert, would thus require periodic increases in revenues, reductions in benefits, or both. Those supporting fundamental change generally call for replacing the primarily pay-as-you-go system with one that relies more heavily on advance funding and replacing, at least in part, the centralized Trust Funds with individual accounts that are owned and managed by the program participants. These accounts could be invested in securities that offered the potential for higher rates of return than the implicit rate of return earned on Social Security contributions. Those advocating fundamental changes rely on historical stock market performance to support their view that the increased risks associated with individual accounts are unlikely to outweigh the benefits. Moving even part of Social Security to individual accounts would raise many questions and challenges. While individual accounts offer the potential benefits of higher returns, they also expose individuals to risks now borne collectively through the government. The nature of these risks and their potential impacts on different groups of individuals, such as low earners, would need to be carefully considered. It would also be important to consider how important ancillary benefits, such as disability and dependents’ benefits, would be treated and how other sources of retirement income might be affected under a restructured Social Security program. Moreover, moving to an alternative program structure that included advance funded individual accounts would require a decision regarding how best to finance the transition costs. 
Funding this transition would require either supplementary taxes on current generations—asking them in effect to “pay twice”—or a substantial increase in government debt. Further, a host of program design, administrative, and oversight issues would need to be addressed. The costs of implementing the new program design and its administrative requirements could offset some of the advantages of higher investment returns associated with individual accounts. Another key element is the relative impact of different program financing structures on aggregate saving and the national economy. Saving is critical to the economy’s long-term growth, and a larger economy in the future would help ease the burden of meeting retirement costs while sustaining rising standards of living. Advocates of moving toward a system of individual accounts argue that such a system would increase the nation’s saving rate, although the substantial transition costs associated with these proposals offset the positive effects on saving in the short and medium term, pushing positive economic effects even further into the future. Raising saving is only one of several important goals addressed in Social Security financing reform proposals. But because saving is so important to societal goals, proposals that have the potential to encourage saving should be carefully considered. While the debate continues over whether the existing system should be maintained or whether fundamental restructuring is desirable, there is broad consensus that action is needed soon to dilute the impact of the changes and to give workers and their families time to adapt to them. Nonetheless, because such action will affect the nation and its economy for years to come, decisions should be made with full knowledge and debate of the trade-offs inherent in each proposed change.
Pursuant to a congressional request, GAO provided information on issues related to social security financing, focusing on: (1) the various perspectives that underlie the current solvency debate; (2) the reform options within the current program structure; (3) the issues that might arise if social security were restructured to include individual retirement accounts; and (4) the likely impacts on national saving of reform proposals that call for changes in how Social Security benefits are funded. GAO noted that: (1) many options exist for restoring long-term solvency within the current program structure; (2) these possibilities include raising the retirement age, altering the benefit formula, reducing the cost-of-living adjustment, investing Social Security Trust Fund surpluses in the stock market, and mandating participation of workers who are currently excluded; (3) some combinations of these changes could restore program solvency while retaining the program's social insurance features; (4) while these options generally require reducing benefits or raising revenues, their effects on workers and retirees might be mitigated if adjustments were made sooner, not later; (5) proposals for more fundamental program changes have the potential to increase returns overall but would entail increased risk; (6) moving even part of social security to individual accounts would require careful consideration of the issues raised by such a fundamental change; (7) the consequences for the insurance aspects of the current social security system would require close scrutiny if social security were wholly or partly privatized; (8) most of the reform proposals envision substituting advance funding for the largely pay-as-you-go system that exists today; (9) in principle, advance funding of social security benefits could lead to an increase in national saving; (10) increased saving could lead to higher rates of economic growth and better enable future generations to support themselves and 
future retirees; (11) moving to an advance funded system would entail substantial transition costs that could offset any potential savings for a number of years; (12) over the years, social security has evolved to be more than a retirement program; (13) social security not only provides the floor for an adequate retirement income, it also insures families in the event of the death or disability of the earner and helps provide retirement income security for low-income workers; (14) restoring the system to financial solvency will require fundamental choices about such issues as the strength of guarantees of retirement income to the nation's elderly, levels of insurance for working families, and the role of government in providing retirement income; and (15) because such decisions will affect the nation and its economy for years to come, they should be made with full knowledge and debate of the trade-offs inherent in each proposed change.
Each year over 100 utility-owned nuclear power plants and thousands of commercial enterprises, such as pharmaceutical manufacturers, hospitals, universities, and industrial firms, generate various types of radioactive contaminated waste. While waste in the form of used (spent) fuel from nuclear power plants is classified as “high-level” because of the amount of radioactivity in the fuel, almost all other commercial waste is designated as “low-level” because the levels of radioactivity in these wastes are relatively lower. Low-level radioactive waste items include such things as rags, paper, liquid, glass, protective clothing, as well as hardware, equipment, and resins exposed to radioactivity or contaminated with radioactive material at nuclear power plants. In 1993, operations at utilities’ nuclear power plants accounted for about 50 percent of the volume of commercially generated low-level radioactive waste, but this volume contained about 95 percent of the radioactivity in low-level waste. Examples of other commercial uses of radioactive materials that either directly or indirectly produce low-level radioactive waste include the following: Medical procedures involving radiation or radioactive material. More than 100 million of these procedures are performed each year. Testing and development of about 80 percent of new drugs. Sterilization of consumer products, such as cosmetics, hair products, and contact lens solutions using radioactive materials. Production of consumer products, such as smoke detectors, and industrial products, such as instruments to inspect for defects in highways, pipelines, and aircraft. The radioactivity in most commercially generated low-level waste decays to safe levels within 100 years, but some waste remains hazardous for longer than 500 years. Because these wastes are potentially harmful to workers, the general public, and the environment, they must be stored and disposed of safely. 
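The decay horizons cited above follow from the standard half-life relationship, in which the fraction of radioactivity remaining after t years is 0.5 raised to the power t divided by the half-life. The sketch below is illustrative only: the choice of cesium-137 (half-life roughly 30 years), a common constituent of reactor-generated low-level waste, and the target fractions are assumptions for illustration, not figures from this report.

```python
import math

def fraction_remaining(t_years, half_life_years):
    """Fraction of the original radioactivity left after t_years of decay."""
    return 0.5 ** (t_years / half_life_years)

def years_to_reach(target_fraction, half_life_years):
    """Years of decay needed for activity to fall to target_fraction of its original level."""
    return half_life_years * math.log(target_fraction) / math.log(0.5)

# Cesium-137, with a half-life of roughly 30.2 years (illustrative assumption):
print(round(fraction_remaining(100, 30.2), 3))  # about a tenth of the original activity after a century
print(round(years_to_reach(0.001, 30.2)))       # roughly three centuries to fall to 0.1 percent
```

Under these assumptions, a century of decay removes about 90 percent of the activity, consistent with the report's observation that most low-level waste decays to safe levels within 100 years while some remains hazardous far longer.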
Throughout the 1980s and the early 1990s, commercially generated low-level waste was routinely disposed of in three facilities at or near Beatty, Nevada; Barnwell, South Carolina; and Richland, Washington. However, Nevada closed its facility on January 1, 1993. The facility in Washington was closed to generators in all but 11 states on January 1, 1993, and on July 1, 1994, South Carolina closed its facility to waste generators in all but 8 southeastern states. The generation of significant amounts of nuclear wastes began during World War II and because nuclear operations then and for years afterward were controlled by the federal government, the government assumed responsibility for the disposal of these wastes. Eventually, however, the federal Atomic Energy Commission began permitting commercial entities to possess, own, and use radioactive materials and to dispose of low-level waste. With the increase in commercial uses of radioactive materials, the Congress, in 1959, authorized the Commission to transfer to states authority and responsibility for regulating most commercial users other than nuclear power plants. States that desired to assume such authority and responsibility could do so by establishing regulatory programs that were adequate to protect the public health and safety and compatible with the Commission’s regulatory program. Such states are referred to as agreement states. With increased commercial use of radioactive materials and an expanding regulatory role for states, private companies, rather than the federal government, began to provide disposal facilities for commercially generated low-level waste. By 1971 there were six privately operated disposal facilities located in Illinois, Kentucky, Nevada, New York, South Carolina, and the state of Washington. All of these disposal facilities except the facility in Illinois were regulated by agreement states. 
Only the facility in Washington was developed on federal land; specifically, on the Hanford Reservation, now managed by the Department of Energy (DOE). (Figs. 1.1 and 1.2 show the disposal facility in Barnwell, South Carolina.) By March 1979 the disposal facilities in Illinois, Kentucky, and New York had been closed for a variety of reasons, including leakage at the sites. Then, in July 1979, the governor of Nevada ordered the Beatty facility shut down after two incidents involving trucks carrying radioactive waste into the facility. Thereafter, the governors of Nevada, South Carolina, and Washington wrote to the Nuclear Regulatory Commission (NRC) for assurance that rules governing shipments would be enforced. The Beatty facility reopened in late July 1979. In October 1979, the governor of Washington ordered that state’s disposal facility to shut down after deficiencies were found in waste shipments bound for the facility. Among other things, a truckload of radioactive cobalt was leaking. Also in 1979, the governor of South Carolina said that the state’s disposal facility was receiving up to 90 percent of all commercially generated low-level waste and that decontamination of the disabled Three Mile Island nuclear power plant would generate waste amounting to almost 50 percent of the total volume the state had received in 1978. For this reason, the governor said that South Carolina would not accept waste from the disabled plant. Concerned about the potential loss of disposal capacity, several congressional committees held hearings in 1979. Initially, the committees considered legislation that would make the federal government responsible for the disposal of commercially generated low-level waste. The governors of the three states with operating disposal facilities, however, opposed this approach because they wanted states to have an opportunity to examine alternatives to federal disposal. 
By the end of the year, Washington and Nevada had reopened their disposal facilities, and the Congress had deferred consideration of legislation to the next year. Subsequently, a task force convened by the National Governors’ Association recommended that responsibility for the disposal of low-level waste be assumed by the states. Other state government organizations supported this approach. Late in 1980, the Congress established a new policy regarding the disposal of commercially generated low-level waste by enacting the Low-Level Radioactive Waste Policy Act of 1980 (P.L. 96-573). The act made each state responsible for making disposal capacity available and stated that low-level radioactive waste can be most safely and efficiently managed on a regional basis. To implement this policy, the Congress encouraged states to form compacts to meet their collective disposal needs and to minimize the number of new disposal sites. Congressional consent was required for a compact to become effective. As an inducement to states to form compacts and develop regional disposal facilities, the act stated that compacts could, beginning January 1, 1986, restrict the use of their disposal facilities to wastes generated within their respective regions. The Congress expected states to have new disposal facilities capable of handling their own low-level waste by that date. Although nearly 40 states had formed seven regional compacts by the end of 1983, it had become clear that no new disposal facilities would be ready for at least another 5 years. As a result, the Congress passed and, on January 15, 1986, the President signed into law, the Low-Level Radioactive Waste Policy Amendments Act of 1985 (P.L. 99-240). At the same time, the Congress granted consent to the seven regional compacts. The amendments represented a compromise for competing parties. 
On one side, waste generators in states that would be left without access to disposal facilities—generators that were relying on the existing disposal facilities in Nevada, South Carolina, and Washington—got a 7-year extension of the period during which they could ship waste to existing disposal facilities. On the other hand, these three states, which wanted to close their facilities to waste generators outside their respective compacts, received additional assurances that other states or compacts of states would develop their own disposal facilities. Among these additional assurances were six deadlines and milestones by which states should make decisions and commit to certain actions towards developing new disposal facilities. The amendments prescribed limited responsibilities for DOE and NRC. The amendments also established financial penalties, or surcharges, on the waste disposed of in existing facilities if certain milestones were not met. In addition to basic disposal charges, waste generators were to pay nonpenalty surcharges based on the volume of wastes disposed of at the three operating disposal facilities. The six deadlines and milestones are described in figure 1.3. New York and two of its counties challenged several provisions of the amendments, including the take-title provision contained in the last milestone. Nineteen other states supported this challenge. Under the take-title provision, states or compacts that failed to provide for the disposal of all waste generated within their borders by January 1, 1996, were required, upon request, to take title to and possession of the waste and become liable for damages suffered by the generators as a result of the state’s failure to do so. In 1992, the U.S. Supreme Court ruled in New York v. United States, 112 S.Ct. 2408 that this provision was unconstitutional. 
The court concluded that the Congress has power under the Constitution to preempt state regulation or to encourage states to provide suggested regulatory systems for disposal of the low-level waste generated within their borders, but the Constitution does not confer upon the Congress the ability to compel the states to do so in a particular way. The court held that the take-title provision was severable from the remainder of the act. Concerned about the environmental and economic effects of implementing the Low-Level Radioactive Waste Policy Act of 1980, as amended, Senators Christopher J. Dodd and Joseph I. Lieberman requested that we review the status of the low-level waste program, the economic and environmental effects of the planned disposal facilities, and alternatives to the approach specified in the act, as amended. To respond to the requesters, we interviewed state officials and members of the Low-Level Radioactive Waste Forum—an association of representatives of states and compacts established to help implement the act; waste generators and their associations, other professional associations, environmental groups, and members of academia; representatives from citizens’ advisory groups and citizens groups that have opposed efforts by Connecticut, Nebraska, and Massachusetts to select sites for new disposal facilities; New York and North Carolina county officials in communities close to where sites have been considered; and officials in DOE, NRC, and the Environmental Protection Agency (EPA) who are responsible for issues in the commercially generated low-level waste area. In addition, we obtained and analyzed available documentation on the subject area and attended various meetings sponsored by the Low-Level Radioactive Waste Forum, EPA, NRC, and the National Institutes of Standards and Technology. 
We also obtained and analyzed reports prepared by a presidential task force, DOE, NRC, states, environmental organizations, and waste generators and their associations. We reviewed law review articles and various articles and books from academic sources and professional associations. And, we hosted a meeting of representatives of low-level waste generator organizations from six states and compacts. We visited several facilities to obtain information about waste generation, storage, treatment, and disposal. We visited waste storage and processing facilities at the National Institutes of Health in Bethesda, Maryland; a research hospital in Pennsylvania; a research hospital, pharmaceutical manufacturer, and a nuclear power plant in Illinois; and a biotechnology research firm in California. We also visited the operating disposal facility at Barnwell, South Carolina, and a waste treatment facility in Tennessee. Finally, to assess pertinent economic issues, we examined reports prepared by DOE contractors, NRC, and members of academia on the economics of disposing of low-level waste. Although these reports did not address economic issues related to states’ specific plans for developing disposal facilities, they did provide general information on topics such as the economic effects of developing varying numbers and sizes of disposal facilities. We did not independently verify the cost data in these reports, and comparable economic studies were not available from states. To ensure that our report is accurate, complete, and objective, we provided copies of the draft report or portions of the draft report to knowledgeable federal officials, including the program manager for DOE’s National Low-Level Waste Management Office and NRC staff in the Office of State Programs, Division of Waste Management, Office of Nuclear Materials Safety and Safeguards, and Office of the General Counsel. 
These officials generally agreed with the facts as presented in our report, and NRC officials noted that our report accurately characterized the current situation in developing low-level waste disposal facilities. NRC and DOE officials also provided several technical and editorial comments which we incorporated as appropriate to clarify and update the report. Our work was performed from January 1993 through April 1995 in accordance with generally accepted government auditing standards. As of January 1995, 11 states had plans to develop disposal facilities for commercially generated low-level waste, and the state of Washington planned to continue operating its existing disposal facility. Altogether, these 12 facilities would serve waste generators in 47 states. Five other states had no plans to meet the needs of their waste generators. Only 4 of the 11 states have selected candidate sites for disposal facilities; and none of these proposed facilities is under construction. States’ estimated dates for opening the planned facilities range from 1997 to 2002, but these dates may be optimistic. The length of time states are taking to establish new disposal facilities is largely attributable to the controversial nature of nuclear waste disposal. Because existing facilities had closed to most states and new facilities will not be built for some time, waste generators in 33 states, which generate about 42 percent of the waste, have not had access to disposal facilities since July 1, 1994. These waste generators will have to store their own wastes until new disposal facilities are built. Forty-two states have established nine compacts. The Northwest and Rocky Mountain Compacts, comprising 11 states, intend to use Washington’s existing disposal facility. The Southeast Compact of eight states plans to develop a disposal facility in North Carolina and to close the Barnwell, South Carolina, disposal facility, which is currently available for only those eight states. 
And, six other compacts plan to develop seven new disposal facilities. (The two states that comprise the Northeast Compact—Connecticut and New Jersey—each plans to develop its own facility.) Three other states have formed a tenth compact, the Texas Compact, which has not yet been approved by the Congress. This proposed compact also plans to develop a disposal facility in Texas. Finally, two states, Massachusetts and New York, are not members of compacts, and they intend to develop their own disposal facilities. Thus, 11 new disposal facilities are planned, and 1 existing facility would remain open for a total of 12 disposal facilities. Only four compacts, however, have selected candidate sites for their respective facilities, and no new disposal facility is yet under construction. Figure 2.1 shows the volume of waste disposed of by waste generators in each compact and unaffiliated state from 1991 through 1993 and the membership of each compact. No state has developed a new facility for disposal of commercially generated low-level radioactive waste since the 1980 act was passed. Current estimated dates for opening the 11 planned facilities range from 1997 to 2002. These dates, however, may be optimistic because earlier dates have slipped over the years. Also, some states that once appeared to be making the most progress, such as Illinois, are now further behind other states because of setbacks in their efforts to select a site for a disposal facility. Figure 2.2 shows how state and compact estimates of completion dates changed between 1991 and 1995. Three compacts totaling 19 states continue to be served by the existing disposal facilities in South Carolina and Washington. (See fig. 2.2.) Since July 1, 1994, when the South Carolina facility closed to waste generators outside the Southeast Compact, generators in the remaining 33 states have not had access to disposal facilities. (See fig. 2.3.) 
In fact, the states and compacts with jurisdiction for the South Carolina and Washington facilities began denying waste generators in some states, such as Michigan, New Hampshire, Puerto Rico, and Rhode Island, access to the existing disposal facilities prior to 1994. The denials were made on the basis that those states had not demonstrated sufficient progress in either joining other compacts of states or developing their own disposal facilities. Waste generators that do not have access to disposal facilities accounted for about 42 percent of all commercially generated low-level waste in 1993, the last full year that waste generators in most states had access to a disposal facility. The waste generators will have to treat and/or store their low-level wastes until their respective states develop new disposal facilities or obtain access to other facilities. California, Nebraska, North Carolina, and Texas are the host states for new disposal facilities for three compacts and a proposed compact made up of a total of 20 states. Waste generators in these 20 states account for about 43 percent of all commercially generated low-level waste. Developers of potential disposal facilities in the four host states have submitted applications to state regulatory authorities to construct and operate their facilities. The developer for a potential facility in California submitted a license application in 1989, and the state has licensed the facility pending sale of the land to the state by the U.S. Department of the Interior. In 1990, the developer for the Nebraska facility submitted a license application and then revised the application in 1993. The developers in North Carolina and Texas submitted final license applications for state reviews in 1993. None of the host states for other compacts or Massachusetts and New York have identified candidate sites for disposal facilities. 
The limited progress states have made in developing new facilities for disposing of commercially generated low-level waste appears to be fundamentally due to the controversial nature of such facilities. Put another way, the length of time required to form compacts, select states to host new facilities, develop necessary legislation and regulations, and select candidate sites for facilities appears to reflect the widespread concern about such facilities among the affected public and various state and local government entities. Early in 1993, NRC’s staff reviewed the experiences of 13 states in addressing the needs of their waste generators for access to disposal facilities. NRC’s staff identified seven factors that, in its judgment, had affected the progress of these states, including criteria and procedures for selecting sites, funding and legislation, litigation, perceptions that federal and state regulations were inadequate, perceptions that long-term storage of waste is more desirable than disposal, and liability protection for citizens and property from potential releases of radioactivity from a disposal facility. The staff said that the seventh factor—public and political concern over the development of new disposal facilities—appeared to be one of the major factors linked to many of the other factors. Public concern and an absence of broad-based public and political acceptance have had a significant effect on the development of new disposal facilities. Public concern, according to the staff, has been demonstrated in a variety of ways, including a lack of volunteer sites for disposal facilities, delays in enacting necessary legislation, changes in states’ legislation affecting site-selection processes, strict site-selection regulations, and litigation. Moreover, according to the staff, public concern tends to increase and change as the site-selection process advances. 
The process of developing compacts and selecting a state within a compact to develop a disposal facility illustrates the difficulty at the political level of moving forward with a program for developing a disposal facility. In the early 1980s, 11 northeastern states were considering forming a regional compact. However, the compact never materialized because, according to observers, no state would agree to host a disposal facility for the large amount of waste that would be coming from the other states. Subsequently, the states splintered into smaller compacts, and several states decided to independently pursue their own waste disposal solutions, but none has selected a site for licensing. In an earlier report, we also pointed out that choosing sites for disposal facilities could be controversial and time-consuming. The process of selecting sites became longer than states had originally anticipated, in part, because of the extent of public involvement in these proceedings. The following discussion of the experiences of several states illustrates how the public and political concern over disposal facilities has affected the states’ abilities to develop new facilities. Because of questions about the process for selecting a new site for a disposal facility and concerns about the suitability of a proposed site, the governor of Illinois and the state’s legislature created an independent commission to examine the safety of the proposed site in 1989. In 1992, the commission found the site unacceptable, rejecting the conclusions of the state agency that had spent 8 years and about $85 million finding and studying the site. Since then, Illinois has abandoned the site and has embarked on a new approach which involves determining scientific requirements for the siting process followed by statewide screening to find a site. In 1991, citizen groups in Connecticut challenged the results of a statewide screening and selection process. 
Afterwards, the state enacted legislation that voided the site screening and selection results and directed the state’s siting authority to restart the site-selection process. The authority is now using a volunteer process to find a site that has been approved by the local electorate in a referendum. In 1988, during the screening process to find a suitable site in Nebraska, the developer received a formal expression of interest from several counties. The developer submitted a license application to the state agencies in July 1990, and the state declared the application complete and ready for technical review in December 1991. In January 1993, however, the state filed a lawsuit in the U.S. District Court for the District of Nebraska seeking a permanent injunction to prevent the licensing or construction of a facility in the state until community consent is demonstrated. In October 1993, the court granted summary judgment in favor of the defendants on procedural grounds. The court held that action on the community consent issue was barred by the statute of limitations provision in the compact. In June 1994, the U.S. Court of Appeals for the Eighth Circuit affirmed the lower court’s decision. The state’s petition to the Supreme Court to hear an appeal was denied in November 1994. Also in January 1993, Nebraska’s regulatory agency announced its intent to deny a license for the proposed disposal facility on the basis that the site contained wetlands. In October 1993, after the developer redesignated the boundaries of the site and eliminated the disputed wetlands area, the regulatory agency notified the developer that the agency would withdraw its intent to deny the license. The developer’s license application is currently under state review. In 1990, two candidate sites were selected in the host state of North Carolina. 
Subsequently, officials in the affected counties opposed the selection of the two candidate sites and filed two suits against the state’s siting authority. One suit claimed that an environmental impact statement was required before investigation of a site could begin. The other suit alleged that improper procedures were used in the site-selection process. In February 1993, the state court of appeals ruled in favor of the siting authority. The counties appealed to the state supreme court in March 1993 and, in November 1993, that court agreed to let stand the decision of the appeals court. Because of the pending lawsuits, the siting authority’s contractor, which was responsible for studying the sites, could do only preliminary, off-site testing. As a result, the siting authority did not select one of the two sites for use as a disposal facility until December 1993, or 3 years later than the siting authority had planned. Because the state called for further study of site features in 1994, the siting authority’s estimated date for licensing construction of the planned disposal facility has slipped from March 1995 until August 1997. In 1993, California officials had expected that their proposed disposal facility in the Mojave Desert for the Southwest Compact would be operating by 1994, but the controversy surrounding the siting effort has led to a later estimated opening. Besides lawsuits filed by opposition groups, a group of U.S. Geological Survey geologists, acting independently of their organization, prepared a report raising technical concerns about the site and the siting process. On the basis of the geologists’ report, a California Senator asked the President for a full hearing and an examination of alternatives for the site before the sale of federal land to the state. In 1994, the Secretary of the Interior asked the National Academy of Sciences to review the concerns of the Geological Survey geologists and to report back in May 1995. 
Depending upon the Academy’s findings, the Secretary may also want an adjudicatory hearing to examine opponents’ concerns. After the Academy has issued its report and, perhaps, an adjudicatory hearing has been held, the Secretary will determine whether the land will be transferred to the state. By October 1989, Michigan, the original host state for the Midwest Compact, had identified three candidate sites for a disposal facility but had then eliminated the three sites from further consideration because the sites did not meet its siting criteria. At a July 1991 meeting, Michigan presented several conditions for the compact to meet if it expected the state to continue its siting efforts. One condition, for example, was that the state would be released from its role as host state if, under Michigan law, the state could not find a suitable site for a disposal facility. The compact decided that Michigan had unreasonable criteria that essentially precluded the state from finding a suitable site. The compact then voted to expel Michigan for not acting in good faith to honor a binding contractual obligation to find a waste disposal site in Michigan. Ohio has assumed the host-state responsibility and has begun to develop a process for selecting a site for a disposal facility. In 1989, a New York state commission selected five potential sites for low-level waste disposal in Cortland and Allegany counties. The commission had intended to conduct initial on-site technical investigations of the five sites by late spring of 1990 and then select at least two of the sites for a more intensive, 1-year investigation. However, public protests—including civil disobedience during the commission’s attempts to gain access to the sites—and other objections from citizens and local governments caused the governor to request the commission to defer on-site work until a new approach could be developed. 
The commission suspended its field work in April 1990, and later in 1990, the state amended its waste disposal act. In the meantime, Cortland County, where two of the five proposed sites are located, had questioned the commission’s credibility, in part, because the county contended that the commission did not follow its site-selection plan in selecting a volunteer site. In February 1990, the state joined the two potential host counties in filing suit against the federal government questioning the constitutionality of the Low-Level Waste Policy Act, as amended. These lawsuits led to the Supreme Court’s decision that the act’s take-title provision was unconstitutional. The state is currently trying to determine the best method for disposing of waste before deciding on a location for a disposal facility.

There are no reliable estimates of the cost to dispose of the nation’s commercially generated low-level radioactive waste. In 1980, there were three operating disposal facilities serving almost four times the current volume of commercially generated low-level radioactive waste. Currently, 11 new facilities are planned in addition to the state of Washington’s existing facility. Most states have not estimated the total costs of their planned facilities or the unit disposal costs. Studies by DOE and others that examine economic aspects of low-level radioactive waste facilities have concluded that fewer, larger new facilities could accommodate current waste volumes at less cost than a larger number of small facilities. The studies, however, have limited usefulness in determining the optimal number of sites. For example, no studies had up-to-date cost data, and the models that were used had limited scope and were not capable of estimating costs for the potential range of required disposal facility sizes.
In addition, there are uncertainties related to the volume of commercially generated low-level waste that may be produced over the lifetime of the planned disposal facilities that were not accounted for in available studies of the economics of waste disposal. Two interrelated uncertainties are when utilities will retire their nuclear power plants and, once plants have been shut down, when they will be dismantled. Also, waste generators might, depending on the availability of disposal capacity and disposal fees, intensify past efforts to minimize the volume of waste that they must manage and eventually dispose of.

Most states and compacts have not estimated what the total costs will be for their proposed facilities. State officials said that they are reluctant to provide such estimates because the different methods of calculating cost estimates that the states would use would lead to inaccurate comparisons of facility costs. For example, in determining the life-cycle cost—the full cost of the facility, including siting, development, construction, operation, closure, and post-closure monitoring—the volume of waste and the type of facility would play an important part. The unit cost of disposal at a small facility with above-ground concrete vaults to hold the waste would be higher than at a large facility that relied on shallow burial in earthen trenches. Also, each state and compact has different institutional and regulatory requirements, including liability funds. In 1993, NRC surveyed states and compacts to obtain cost information. Of the 11 potential host states, 5 provided life-cycle cost estimates—California, Massachusetts, Pennsylvania, Texas, and Vermont. The estimates ranged from $260 million in Texas to $920 million in Pennsylvania. In an April 1993 letter to NRC, the Low-Level Radioactive Waste Forum questioned the timing, methodology, accuracy, and usefulness of NRC’s study.
The Forum was concerned that NRC’s presentation of the data could erroneously imply that data for the states were comparable and complete. Forum officials told us that the reasons for the wide variance in estimates may be based on factors such as the type of facility, accounting methods, definition of terms, and varying talents at estimating costs among the states.

We identified and reviewed seven conceptual studies that examined the costs of disposing of commercially generated low-level radioactive waste. (See app. I for information on these seven studies.) All of these studies concluded that fewer, larger facilities would be more economically efficient than several smaller ones. The optimal number of facilities was between two and five. This finding was consistent even though the studies were produced at different times, employed different methodologies and cost estimates, and varied in their estimates of the optimum number of facilities. Moreover, the studies were limited to assumptions that the volume of waste would continue at the same rate for the life of the facilities. The volume may increase or decrease, depending, for example, on how and when nuclear power plants are dismantled. We were not able to develop a comparative cost analysis to demonstrate the relative efficiency of a wide range of facility sizes because we found no model with up-to-date cost data that was capable of estimating costs for the entire range of facility sizes required.

Three of the studies prepared for DOE clearly demonstrate the economic benefits of consolidating small-volume facilities. The 1987 Conceptual Design Report, which examined large-scale facilities, found that increasing annual disposal capacity from 235,000 cubic feet to 350,000 cubic feet would reduce unit disposal costs by 25 to 50 percent. A 1991 report on small-volume facilities concluded that unit costs rise radically as facility size decreases.
The report estimated that costs were 143 percent higher for a disposal facility capable of annually accepting 10,000 cubic feet of waste than for a facility with three times this waste acceptance capacity. A 1993 study concluded that efforts to develop a cost-effective waste disposal facility should seek to match facility size as closely as possible to disposal waste demand and concentrate waste disposal activities at a small number of large sites. Disposal facilities that can handle high volumes of commercially generated low-level waste enjoy economies of scale because a significant portion of facility costs are fixed and do not vary with volume of disposal. These fixed costs can be spread over the high number of waste units received, thus lowering the per-unit cost of disposal. Because fixed costs are very significant in low-level waste disposal, a facility’s average costs decline markedly as facility size increases. (See table 3.1.) Also, a few large sites can reduce the fixed costs of identifying and licensing many small sites. Although there is agreement among the studies on the efficiency of fewer, larger facilities, only three of the seven studies we reviewed estimated the optimal number of sites. The estimates for the optimum number of sites ranged from two to five. In 1990, Bullard and Weger found that facilities designed to handle annual volumes between 200,000 and 500,000 cubic feet were most economically efficient. Using DOE’s projection of 933,000 cubic feet on average annually for the period 2000 to 2030, the number of economically efficient sites would be from two to five. In 1992, Coates, Heid, and Munger estimated the maximum number of economically viable sites to be five, while stating that a more realistic estimate would be two or three facilities. 
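The scale economics described above can be sketched numerically: when a large share of a facility’s cost is fixed, the average cost per cubic foot falls steeply as annual volume rises, and dividing DOE’s projected waste stream by the efficient facility sizes from Bullard and Weger yields the two-to-five-site range the studies report. The dollar figures below are assumed purely for illustration; only the volume figures come from the studies cited in the text.

```python
# Illustrative sketch of disposal economies of scale. Fixed and variable
# costs are assumed numbers, NOT data from the DOE studies cited above.
def unit_cost(annual_volume_cf, fixed_cost=10_000_000, variable_cost_per_cf=50):
    """Average disposal cost per cubic foot when most costs are fixed."""
    return fixed_cost / annual_volume_cf + variable_cost_per_cf

for volume in (10_000, 30_000, 235_000, 350_000):
    print(f"{volume:>7,} cf/yr -> ${unit_cost(volume):,.2f} per cubic foot")

# Bullard and Weger's efficient range (200,000-500,000 cf/yr per facility),
# applied to DOE's projected 933,000 cf/yr average for 2000-2030:
projected = 933_000
low, high = projected / 500_000, projected / 200_000
print(f"Implied number of efficient sites: {low:.1f} to {high:.1f}")
```

Because the fixed cost is spread over every cubic foot received, the sketch reproduces the qualitative finding of the studies: unit costs rise radically as facility size decreases, and only a handful of sites can operate in the efficient size range.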
We were not able to develop a comparative cost analysis to demonstrate the relative efficiency of a wide range of facility sizes because all available cost studies either contained outdated data, which do not reflect current marketplace conditions, or were limited in scope to a narrow range of disposal facility sizes. However, all of the available cost information we collected points to a rapidly rising trend in major cost categories, most notably in pre-operating and siting costs. Cost information from North Carolina and Nebraska confirms that states are facing escalating costs. In 1989, North Carolina projected that pre-licensing costs would be $17.7 million; by 1991 the estimate had tripled to $51.1 million. Nebraska’s total cost estimates rose 231 percent from 1987 to 1992, from $36.9 million to $122.3 million. Also, NRC reported in its study that California’s cost estimate had increased by a factor of six. Although we did not attempt to verify the specific cost data reported, we believe that the sources that were used are the best available on the economic trends faced by states and compacts under the program.

There are uncertainties related to the volume of commercially generated low-level waste that may be produced over the lifetime of the planned disposal facilities that were not accounted for in available studies of the economics of waste disposal. Two uncertainties are when utilities will retire their nuclear power plants and when they will decontaminate and dismantle retired plants. In addition, waste generators might, depending on factors such as the availability of disposal capacity and fees charged for disposal services, continue past efforts to minimize the volume of waste that they must manage and eventually dispose of.

Today, there are more than 100 civilian nuclear power plants in operation in about 30 states.
In 1993, the operating plants collectively produced about 50 percent of the volume (and 95 percent of the radioactivity) of commercially generated low-level waste. Typically, NRC licenses these nuclear power plants to operate for 40 years, but many utilities are interested in extending the authorized operating lives of their plants by up to 20 years. Although NRC’s regulations permit such life extensions, no civilian nuclear power plant has yet operated for 40 years. Sixteen plants have been permanently shut down before operating that long. In the next 20 years, about 50 nuclear power plants will have to be retired unless their licenses are extended. No utility has yet submitted an application to extend its operating license, and since 1979 utilities have retired seven nuclear plants earlier than had originally been anticipated. For example, owners of the Yankee Rowe and the Monticello plants originally planned to submit applications to NRC for license extensions as part of a cooperative program between DOE and the nuclear power industry. However, in 1992, the utility that owns the Monticello plant indefinitely deferred its application for a number of reasons, such as increases in the estimated costs of upgrading to new equipment standards and DOE’s inability to accept spent fuel from the plant for storage or disposal. Then, in 1992, the owner of the Yankee Rowe plant decided to retire that plant for economic reasons. Thus, future decisions on when to retire civilian nuclear power plants from service, including the possibility of extending the operating lives of these plants, will affect the volumes of commercially generated low-level waste that must be disposed of over the next several decades.
The state of Pennsylvania, for example, has estimated that extending, by 20 years, the operating lives of the 12 nuclear power plants located in states that make up the Appalachian Compact could produce an additional 3.3 million cubic feet of low-level waste through the first quarter of the next century. In addition to the low-level waste that civilian nuclear power plants produce during their operating periods, many components of the plants become contaminated with radioactivity as a result of years of plant operations. For this reason, plants that have been retired from service must be decommissioned. Decommissioning refers to safely removing a nuclear plant from service, reducing residual radioactivity to a level that permits release of the plant property for unrestricted use, and terminating the utility’s license for the plant. NRC requires a utility to submit a plan for decommissioning a nuclear power plant within 2 years of the time that the utility retires the plant from service. Although specific decommissioning plans may vary from plant to plant, NRC generally requires that a utility complete decommissioning within 60 years of the plant’s retirement. To meet the 60-year requirement, utilities may either dismantle and/or decontaminate portions of a plant that contain radioactive contaminants shortly after retirement or allow the radioactive contaminants to decay over a period of years prior to decontamination and/or dismantlement. Thus, decisions on when to decontaminate and dismantle retired plants will affect the waste volume just as decisions on when to retire nuclear power plants will affect the volume. DOE estimates that decommissioning and decontaminating the nuclear power plants that utilities will shut down over the next 30 years will generate about 55 million cubic feet of low-level waste. DOE assumed a 2-year planning period after a plant has been permanently shut down followed by a 4-year decontamination period. 
Either more or less waste than estimated, however, could be generated and disposed of at new disposal facilities, depending on the timing of decommissioning and decontamination of these plants. If utilities decontaminate and dismantle more nuclear plants over the next 30 years than projected, they could generate even more low-level waste. Even if nuclear power plants are not decommissioned and decontaminated immediately, keeping them operating may itself generate a sizable amount of waste. If all nuclear plants in the Appalachian Compact received 20-year license renewals, for example, Pennsylvania officials estimated that 3.3 million cubic feet of waste would be generated over that extended period.

The future trend of waste volume depends on several uncertainties. Among other things, the trend may depend on the economics of storage and disposal and on waste minimization techniques. In the initial years (1963 to 1971) of commercially generated low-level waste disposal, the volume of waste and the number of sites increased. As the number of disposal facilities declined, the volume of disposed waste continued to increase until the Low-Level Waste Policy Act of 1980 was enacted. (See fig. 3.1.) Since 1980, the volume has decreased. This reversal has been attributed, in large part, to the 1980 act, as amended; to decisions by states with existing disposal facilities to charge higher disposal fees; and to limits on the volume of waste that could be disposed of in their facilities. Industry representatives and state and federal officials we talked with differed on whether further significant reductions will occur in the volume of commercially generated low-level radioactive waste that must be disposed of. Some of the officials said that uncertainties in the costs of storage and disposal could eventually lead to reduced volume through new or additional treatment that would not necessarily reduce radioactivity.
Others said that the uncertainties could lead to reduced usage of radioactive materials, particularly among smaller generators. For larger generators, such as utilities, storage and disposal costs are not expected to be as important. According to the Office of Technology Assessment, even with higher anticipated disposal costs, low-level waste costs would average about 1 percent of a utility’s operational costs.

No studies have been conducted on the combined environmental effects of the number of planned disposal facilities. Also, because no new disposal facilities have been built, little is known about the specific environmental effects at most of the planned facilities. With waste generators in most states now storing their own wastes and no new disposal facilities available, the environmental risks of long-term storage may increase as the amount of waste grows and approaches generators’ current storage capacities.

Currently, no studies have been conducted of the overall environmental effects of the 1 existing and 11 planned disposal facilities for commercially generated low-level waste. Furthermore, there are opposing views on whether having more disposal facilities than in the past will increase the environmental risks. Because of past problems at disposal facilities, representatives of national groups opposed to nuclear activities and some local opponents of states’ efforts to find sites for disposal facilities question whether the waste can be safely disposed of. Several former disposal facilities experienced environmental problems, such as radionuclides leaking into groundwater. However, several state officials and generators said that new disposal facilities would not encounter such problems because the land disposal regulations developed by NRC in 1982 include, among other things, stricter requirements for investigating sites and building and operating facilities.
In addition, NRC officials pointed out that each new disposal facility would have to comply with these regulations, including limits on the dose of radiation that a member of the public could receive each year from operation of the facility. (App. II provides a brief description of NRC’s standard for allowable radioactive risk to the public and EPA’s current concern about NRC’s standard.) NRC officials also said that the environmental impact statement that NRC prepared for the purpose of developing its disposal regulations assessed, in general terms, potential environmental effects, such as air quality, energy use, and social impacts.

Because of regulatory requirements for a buffer zone of land surrounding a disposal facility for commercially generated low-level waste, developing the 11 planned facilities may require more land dedicated to disposal than would fewer, larger facilities. The acreage dedicated to such facilities, including buffer zones, will require monitoring and limited land-use applications for at least a century. Furthermore, unless the currently planned facilities can extend their operating lives, there may be a need to establish more sites in 20 to 30 years. For example, the Southwest Compact Agreement states that, if California decides to close its facility after an operating life of 30 years, another state in the compact will become the host of another disposal facility for another 30 years.

According to some state officials and waste generators, however, having several disposal sites could have positive effects on public health and safety by reducing distances from generators to processors and to disposal facilities and, therefore, reducing the chances of transportation accidents. Estimating potential transportation benefits may be difficult because of the many factors, such as road conditions, weather, driver error, and type of vehicle, that contribute to accidents.
In addition, many generators use various waste brokers and processors in different parts of the nation for temporary storage, packaging, and treatment of waste before sending it to disposal facilities, which could also affect transportation distances. Proponents of new disposal facilities also point out that the transportation of waste has never created a grave environmental or safety risk. According to DOE, 53 transportation accidents involving low-level waste were reported in the 20-year period from 1971 to 1991. Four involved the release of radioactive waste, but no radiologically related death or injury occurred. (See fig. 4.1 for an example of how some types of low-level waste are transported.)

Very little information exists on the potential environmental effects at most of the 11 planned disposal sites. California has licensed a facility, but environmental concerns remain unresolved. Nebraska, North Carolina, and Texas are currently reviewing license applications, including environmental impact statements. If the states find significant environmental concerns based on their reviews, the sites can be rejected.

After 7 years of investigating the suitability of a site in Ward Valley, in the Mojave Desert, California found that the site and proposed facility met its regulatory requirements. The state’s findings, however, have been challenged on the basis that the developer’s investigation of the site was not thorough and independent. Opponents of the site point out that three geologists with the U.S. Geological Survey have challenged the assumptions and theoretical models used to analyze the safety of the proposed facility. For example, the geologists believe the potential exists for the contamination of groundwater underlying the Ward Valley site and subsequent transmittal of radioactive materials to the Colorado River—a major source of water for Southern California, Arizona, and part of Mexico.
A scientific consultant for the Metropolitan Water District of Southern California said that the long-term potential for contamination of the river is uncertain. Because the Ward Valley site is on federal land, the Secretary of the Interior has decided to postpone further action on transferring the land to the state until the National Academy of Sciences examines these issues and, if necessary, the issues have been examined in an adjudicatory hearing.

North Carolina has not completed its examination of the environmental suitability of a proposed site for a disposal facility. The North Carolina developer submitted a report indicating that both of the sites it had studied were suitable for disposal facilities. In October 1993, the developer submitted licensing documents for a site in Wake County, noting that the site meets all applicable laws, regulations, and requirements. According to the developer, even using very conservative estimates of the release of radioactive particles to the environment, the public and the environment are protected and estimated radiation doses are far below the regulatory limits. On December 8, 1993, North Carolina approved the 746-acre Wake County site for further consideration, and the state regulatory authority is reviewing the license application.

Waste generators have stored waste temporarily to permit the waste to decay or to consolidate waste for shipment for processing or disposal. With the recent closing of the Barnwell facility to waste generators outside the Southeast Compact, however, waste generators in the 33 states that are not members of the Northwest, Rocky Mountain, and Southeast Compacts have no disposal facilities to accept their wastes until their respective compacts or states have developed new disposal facilities.
These generators, who accounted for about 42 percent of all commercially generated low-level waste in 1993, will have to arrange storage for their waste until their respective compacts or states develop new disposal facilities or obtain access to other facilities. In the meantime, waste storage is increasing in numerous locations around the nation, including in heavily populated areas and in industrial parks. For example, in 1993, after Washington, D.C., lost access to a disposal facility, the radiation safety officer at a university’s medical research center in the District said that he converted a portion of the institution’s parking area to a storage area. Some biotechnology firms in an industrial park in San Diego, California, store their waste drums and liquid waste containers in cargo containers, as approved by the California Department of Health. Figures 4.2 and 4.3 show two other examples of storage areas. Figure 4.4 shows the number of on-site storage areas in Ohio.

The prospect of long-term storage of increasing quantities of commercially generated low-level waste has raised several environmental and health concerns, particularly for small waste generators. Generally, large generators, such as utilities that operate nuclear power plants, have adequate storage space and technical expertise. Although some alternatives to supplement long-term storage, such as legal disposal into sewage systems or incineration, may be available to some waste generators, little is known about the extent to which these alternatives might relieve the storage burden on generators. In this regard, limited information is currently available throughout the nation on the quantities of waste now in storage, waste generators’ storage capabilities, and the extent to which generators are using alternative waste management techniques, and neither NRC nor DOE currently has plans to collect such information.
NRC has several primary concerns about the potential effects on public health and the environment from waste generators significantly increasing their storage of commercially generated low-level waste. One concern is the potential for releases of radioactive materials in the event of an accident caused by an event such as a fire, hurricane, or tornado. According to an NRC official, no serious accidents related to storage have occurred in the past. Although NRC has not conducted any analyses of the potential consequences of such an event, it believes the risk of potential releases as a result of an event or accident at one of numerous storage sites around the country is higher than the risk of a release from a limited number of disposal sites.

Another NRC concern relates to potential degradation of the packages that contain stored waste. Depending on the waste storage environment, waste packages could degrade in several ways—through temperature fluctuations, corrosion, generation of gases and corrosive substances, and radiation-induced embrittlement of certain containers. (Fig. 4.5 shows an example of corrosion of low-level waste drums at a DOE facility.) Therefore, waste generators need to maintain sufficient integrity of their stored waste packages to prevent dispersal of the waste during storage, transport, and handling. According to NRC, if left undetected, degradation of packages could lead to spills or releases during handling for disposal, which would create the potential for increased worker exposures during handling, repackaging, and cleanup. NRC officials said that they did not have any examples of such degradation because extended on-site storage is a relatively new phenomenon.

Another of NRC’s concerns is the possibility of increased radiation exposure to workers from storage-related activities. For example, conducting routine radiation surveys and inspecting waste in storage could add to workers’ occupational doses of radiation.
Nuclear utilities in Michigan, for example, indicated that technicians may experience greater exposure levels because of the need to store larger quantities of waste. In addition to NRC’s concerns, some generators and state officials said that there could be a greater risk of illegal dumping as the amounts of waste in storage increase and storage capacity becomes saturated. For example, the officials said, NRC’s regulations permit, under certain conditions, users of radioactive materials to dispose of wastes in sewage systems. In the absence of access to disposal facilities, these generators and officials said, waste generators might dispose of waste in sewage systems in excess of the limits that NRC permits. In 1980, we reported that the abrupt closure of disposal facilities in 1979 might have led to some illegal dumping.

Another concern among some generators is a possible reduction in nuclear health care and medical research because of a lack of access to disposal sites and storage capabilities. For example, Organizations United for Responsible Low-Level Radioactive Waste Solutions said that hospitals and clinics could be forced to stop nuclear medicine procedures to diagnose heart disease, detect cancer, or cure thyroid disease. In some cases, the organization said, physicians will choose other, less desirable alternatives, such as ultrasound, rather than referring a patient to another hospital for a nuclear medicine procedure. The organization also said that medical research on cancer, AIDS, Parkinson’s disease, diabetes, and other illnesses could suffer. In 1993, the organization’s chairman expressed concern that small hospitals where research is conducted could give up their nuclear departments and some therapy and research suppliers could go out of business.

NRC’s regulations permit alternatives to alleviate storage or disposal for some commercially generated low-level waste.
Small amounts of certain radioactive materials that are readily soluble or dispersible in water, for example, can legally be disposed of in sewer systems. Some generators that had not been using this alternative are now beginning to use it. The radiation safety officer at a hospital in California told us that the hospital began the legal disposal of radionuclides in the sewage system for the first time in 1993. The radiation safety officer for a hospital in Washington, D.C., told us that, when the District lost access to a disposal facility in 1993, he encouraged a variety of efforts for researchers at his institution, including legal sewage disposal. Furthermore, researchers at a medical college in New York have designed a method to dissolve radioactive animal carcasses used in medical research. According to the researchers, using this chemical process results in a solution that can then be disposed of into the sewer within permissible levels of radiation. Others, however, have concerns about the increased disposal in sewage systems. Medical experts at some hospitals told us that they did not believe that disposing of radioactive wastes in sewage systems within legal limits is the best method of disposal. They said that this disposal method provides additional exposure to the public and, although the amounts disposed of are within permissible levels, the resulting exposure to the public is not as low as is reasonably achievable by disposing of wastes in a land disposal facility. Furthermore, we recently reported that nine sewage treatment plants were contaminated by radioactive materials appearing in the sewage sludge, ash, and related by-products that are sometimes used for agricultural and residential purposes, such as lawn and garden fertilizer. Officials at the affected plants said that they had been unaware of the problem and had not tested for it. 
The full extent of the radioactive contamination at sewage treatment plants across the country is unknown, in part, because NRC has inspected only 15 of the approximately 1,110 NRC licensees that may discharge radioactive material to treatment plants to determine if a concentration problem exists. Furthermore, NRC did not have information on approximately 2,000 other licensees that discharge radioactive materials into sewers because inspections of these licensees are the responsibility of agreement states.

Another alternative, treatment by on-site incineration, might be attractive to waste generators for some waste if local opposition were not an issue. Local communities, however, may not always accept incineration facilities, and some generators may be concerned about taking possession of ash that contains radioactive elements from other waste generators. Some generators have used on-site incinerators to reduce their waste, particularly biodegradable waste, such as radioactive animal carcasses. However, because of local opposition, it may be difficult to build new incinerators or continue using existing ones. In 1984, for example, an engineering firm canceled its plan to build a low-level waste incinerator in Pennsylvania because of public opposition, and in 1994 the National Institutes of Health closed an incinerator in Bethesda, Maryland, because of public concern about emissions. Officials of the institutes said that the facility, which was used to burn medical waste, including some radioactive waste, met all permitting requirements, but they considered it more important to address the public’s concerns. The acting radiation safety officer for the institutes said that they are considering using a waste processor’s incinerator in Tennessee. Meanwhile, radioactive animal carcasses are stored on-site in freezers. Although the Tennessee incinerator has been used by many waste generators, few have used it for low-level biomedical waste.
Some generators of biomedical waste have said they are concerned that their ash would be inadvertently commingled with that of other generators, so that the ash returned to them after a burn would contain radionuclides not allowed under their licenses. Another alternative, decay in storage, is available to medical licensees and others under certain conditions, including that candidate radioactive wastes must have radioactive half-lives of less than 65 days and that the waste generators must store the waste for a period of time equal to 10 times the material's half-life. Several generators told us that this is a common practice; therefore, the extent to which its use could increase is undetermined. Several waste processors said that new technologies for treating waste may be developed if waste processors can find not only technological solutions but also economic incentives to do so. With disposal unavailable to most waste generators, storage of wastes at generators' facilities is now increasing. However, no information is being collected on on-site storage of low-level waste on a nationwide basis. Although some individual state surveys have been conducted on the storage capacity of the generators, the data are inconsistent and therefore difficult to compare. We identified and reviewed surveys by five states on the storage of low-level radioactive waste. Because most of these surveys were completed between 1992 and 1993, the information in them is somewhat dated. Overall, storage capacity varies significantly. NRC and state officials, as well as generators, agree that nuclear utilities would generally have the most capability to store their waste and small medical research facilities in urban areas would have the least capability. Moreover, neither NRC nor DOE—the two federal agencies that could provide a national perspective on low-level waste issues—currently has plans to collect such data. DOE officials said that the agency lacks the necessary authority.
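The decay-in-storage conditions described above amount to a simple calculation: after a material is held for 10 half-lives, less than 0.1 percent of its original radioactivity remains. The following is a minimal sketch of that arithmetic; the phosphorus-32 half-life used in the example (roughly 14.3 days) is an illustrative value and is not drawn from this report.

```python
# Sketch of the decay-in-storage rule described above: wastes with
# half-lives under 65 days may be held for 10 half-lives, after which
# only about 0.1 percent of the initial activity remains.
MAX_HALF_LIFE_DAYS = 65   # eligibility threshold for decay in storage
STORAGE_MULTIPLE = 10     # hold the waste for 10 half-lives

def decay_in_storage(half_life_days):
    """Return (eligible, required storage days, residual activity fraction)."""
    eligible = half_life_days < MAX_HALF_LIFE_DAYS
    storage_days = STORAGE_MULTIPLE * half_life_days
    residual_fraction = 0.5 ** STORAGE_MULTIPLE  # activity left after 10 half-lives
    return eligible, storage_days, residual_fraction

# Example: phosphorus-32, half-life roughly 14.3 days (illustrative value)
eligible, days, fraction = decay_in_storage(14.3)
print(eligible, round(days), round(fraction, 5))  # True 143 0.00098
```

The residual fraction does not depend on the particular radionuclide: holding any material for 10 half-lives leaves 2^-10, or about one-thousandth, of its initial activity.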
According to NRC officials, that agency has considered both the cost to collect this data and the potential usefulness of the data in its regulatory programs. NRC has concluded, the officials added, that collecting the data would be costly and that the data would be of marginal value. Without some nationwide perspective on the status and trends of on-site storage capacity, it may be difficult to determine whether federal agencies such as NRC and DOE, which have some responsibilities in this area, need to improve measures to protect the public health and environment. For reasons such as limited progress in developing new disposal facilities and related economic and environmental concerns, questions have been raised about the relative effectiveness of the current approach and alternative approaches to managing commercially generated low-level waste. Alternatives include, among other things, providing federal incentives or penalties to states, or making the federal government or the private sector responsible for commercial low-level waste disposal. Alternative approaches, however, should be viewed with caution. Supporters of the current program believe that exploring other approaches could undermine support for the state-compact approach and the progress that many compacts and states have made. Furthermore, other approaches appear to face difficulties similar to those the states have encountered, such as obtaining political and public acceptance of disposal facilities. States were instrumental in shifting responsibility for disposal of commercially generated low-level waste from private industry to compacts of states, because the states wanted control over the selection of sites for disposal facilities. After 14 years of experience with the state-compact approach, states support continuing the program and believe that they can be successful.
Other affected parties, including some waste generators and developers/operators of disposal facilities, agree and offer reasons to continue with this approach. As discussed earlier, by 1978, three of the six disposal facilities operated by private companies had been shut down. That year, President Carter established an interagency group to review the entire U.S. nuclear waste management program. In its March 1979 report, the group recommended, among other things, that either individual states or the federal government identify sites for disposing of commercially generated low-level waste within the framework of a national plan developed by, and agreeable to, federal and state governments. According to the group, states commenting on a draft of its report generally supported development of a national plan; however, some states took a strong position against the federal government selecting sites for disposal facilities within their jurisdictions. Also, other states said that states should retain the right, within the concept of a national plan, to veto the selection of sites for disposal facilities within their jurisdictions. At about the time of the report, selecting sites for new disposal facilities began to be seen as a state, rather than federal, responsibility. The governors of the three states with operating disposal facilities—Nevada, South Carolina, and Washington—testified to this effect before congressional committees. Also, officials in several states said that the political climate in their states might prevent them from acting to solve the problem of disposing of low-level waste; therefore, they said, developing new disposal facilities might only be possible if responsibility for selecting new sites is clearly fixed in law. Other states, however, wanted a federal solution because, in their view, public opinion would probably impede states’ unilateral efforts to establish regional disposal facilities. 
“Since low-level waste is generated in every state, it is unfair to expect three states to shoulder the sole responsibility for the safe disposal of the entire nation’s waste. Unlike high level waste, the problem is not so technologically complex that it requires the leadership of the federal government to manage it effectively. Because the states are primarily charged with protecting their citizens’ health, safety, and environment, it is appropriate that they assume this responsibility. In addition, the public is more likely to accept siting and other waste management decisions made by state government than by a more remote, less accessible federal agency.” In addition, task forces formed by the National Conference of State Legislatures and the Conservation Foundation agreed with the National Governors’ Association position on state control over siting disposal facilities for commercially generated low-level waste. Finally, the State Planning Council on Radioactive Waste Management, formed by the President to review nuclear waste issues, recommended that every state should be responsible for commercially generated low-level waste and that states should be authorized to enter into interstate compacts. This broad support and the unanimous endorsement of the National Governors’ Association contributed significantly to enactment of the Low-Level Radioactive Waste Policy Act of 1980. Despite what may be viewed as the slow pace of implementation of the 1980 act, as amended, state support for the disposal approach set out in that legislation appears to continue. For example, in October 1993, the Director, Natural Resources Group, National Governors’ Association, said that the majority of states prefer to keep the current approach, because most states will not have to develop new disposal facilities. According to the director, alternatives to the current approach would have to come from the states themselves. 
The director added that staffs of the National Governors’ Association and governors have raised the issue a few times in recent years; however, they concluded that it would be unwise to reopen the act because they are unsure of what would result. Currently, the National Conference of State Legislatures has a policy statement supporting the act, as amended. Supporters of the state-compact approach maintain that most states have pursued, either in compacts or on their own, development of disposal facilities. The supporters also believe the act is designed to establish equity among states in handling the burden of waste disposal, and they do not see other alternatives that would accomplish this goal. Moreover, the supporters question whether the investments of states, developers, and waste generators would be lost—more than $320 million in the last 14 years—and point out that it would cost more time and effort to begin an alternative disposal approach. Furthermore, merely considering an alternative would, according to the supporters, give reluctant states an opportunity for further delay. Those who support the current approach to disposal of low-level waste also said that more time is needed to show whether the approach can be successful. They point out that the strongest remaining incentive for states to develop disposal facilities—loss of access to existing facilities by waste generators—became effective only recently, after existing disposal facilities closed to waste outside their regions. In addition, states have flexibility for further consolidation of state-compacts, such as the Northwest Compact’s arrangement to accept waste generated within the Rocky Mountain Compact and the recent formation of the proposed Texas-Maine-Vermont Compact. 
Representatives of some waste generators, states and state-compact organizations, environmental groups, and state and federal regulatory officials have expressed various degrees of dissatisfaction with progress on the development of new facilities for disposing of commercially generated low-level waste. Some of these officials suggested alternative approaches to managing or disposing of wastes; however, none had provided extensive analysis to show that the alternative could be more successful than the current approach. On the basis of our discussions with these parties and our collection and analysis of data related to management and disposal of low-level waste, we identified and analyzed the following general alternative approaches to management and disposal of commercially generated low-level waste:
- Modifying the state-compact approach by adding penalties and/or incentives to encourage timely development of new disposal facilities.
- Transferring responsibility for disposing of all or certain categories of low-level waste from states to the federal government.
- Returning the responsibility for disposing of low-level waste to private industry.
- Adopting alternatives to land disposal in the United States, such as storing waste; substituting shorter-lived radioactive materials or nonradioactive materials for radioactive materials, or banning the use of radioactive materials; and exporting low-level waste to other countries for disposal or disposing of waste in the oceans.
Although some of these alternatives have precedents, each appears to have drawbacks that could limit its effectiveness. Still other representatives and officials advocate studying the management of commercially generated low-level waste and other types of nuclear waste on a comprehensive basis as a first step to determining if changes are needed in existing waste management legislation.
In a bipartisan effort in 1994, 12 Senators, 27 Representatives, and numerous environmental groups separately asked the President for an independent, comprehensive review of the nation's nuclear waste programs, including commercially generated low-level waste. In their letters to the President, the proponents of an independent review asserted that nuclear waste has historically been categorized not by its hazardous nature or length of life, but by other, nonscientific delineations, such as the sources of the waste. The proponents believe that the country's nuclear waste programs deal with waste issues in a piecemeal fashion, and an integrated program would presumably be safer and more cost-effective. Such a review, they suggest, should examine technical, managerial, and policy issues that make the nuclear waste problem so complex. One alternative approach to achieving the objectives in the low-level waste act is for the federal government to provide states with incentives, such as federal funding, to encourage progress, or to penalize states' lack of progress by withholding federal funds. Those proponents who suggested federal funds to assist the states, however, did not provide specifics on how such funds would improve states' programs to develop disposal facilities or how the funds would be made available. Other proponents have suggested financial penalties, such as withholding funds from states that do not make measurable progress in developing new disposal facilities. The Low-Level Radioactive Waste Policy Act, as amended, tried this approach to a limited degree. The act required states to meet a series of milestones that would lead to the development of new disposal facilities by January 1, 1993. If a state did not meet a milestone, penalties included payment by waste generators within the state of a non-refundable surcharge and/or loss of rebates to states from an escrow account, managed by DOE and accrued from waste generators.
From 1986 to 1992, DOE collected about $37 million in the escrow account. In 1993, DOE disbursed $26 million to states, including the final payment of $11 million to all but the 5 states without plans for future access to disposal facilities. The remaining $11 million will be returned to waste generators because the states and compacts did not provide any new disposal facilities by the January 1, 1993, deadline. Assuming that states are in strict control of their siting efforts, financial incentives or penalties might have some impact. However, the process of selecting a site and developing a disposal facility is complex, controversial and, therefore, may be beyond a state's ability to strictly control in all cases. Since the surcharges have ended, the possibility has also been suggested that the federal government could withhold other federal funds, such as transportation funds, if states do not meet predetermined deadlines. Those proponents who have suggested this approach, however, have not addressed questions about equity and the effects that such an approach would have on programs for which the funds are typically provided. Some supporters have suggested making the federal government (probably DOE) responsible for disposing of commercially generated low-level waste. First, there are precedents for the approach—DOE has been given responsibility for disposing of spent fuel from civilian nuclear power plants and the most radioactive class of commercially generated low-level waste. Second, this approach could permit nationwide selection of sites for new disposal facilities having superior geologic and technical qualifications rather than relying on qualified, but not necessarily outstanding, sites within many states. Third, federal sites might create less public opposition if all the waste is concentrated at remote locations.
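The escrow amounts reported above reconcile exactly; a minimal sketch of that bookkeeping, using the rounded figures as reported (in millions of dollars):

```python
# Reconciliation of DOE's low-level waste escrow account, using the
# rounded amounts reported above (millions of dollars).
collected_1986_to_1992 = 37     # surcharges collected by DOE
disbursed_to_states_1993 = 26   # includes the $11 million final payment
returned_to_generators = collected_1986_to_1992 - disbursed_to_states_1993
print(returned_to_generators)   # 11, returned because no new facilities opened
```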
Last, the waste might be disposed of at one or more federal reservations that are already too badly contaminated to restore to unrestricted use. For example, some supporters suggested establishing regional collection and processing centers for low-level waste with disposal of the waste on federal lands that are dedicated to perpetual care, such as portions of DOE’s Nevada Test Site, because of radioactive contamination. This alternative, according to its advocates, would spare uncontaminated public lands. At first glance, federal responsibility for disposing of commercially generated low-level waste may appear attractive because of the existing precedents and the potential for disposing of this waste at already contaminated federal facilities. Indeed, the Nuclear Waste Policy Act of 1982 assigned DOE responsibility for developing one or more geologic repositories for permanent disposal of spent fuel from civilian nuclear power plants and other highly radioactive waste. Moreover, amendments to that act in 1987 directed DOE to investigate one site—Yucca Mountain, Nevada—as a candidate site for a repository. If, after investigating that site, DOE determines that the site is suitable for a repository, it must recommend approval of the site to the President. Thus, in the Nuclear Waste Policy Act, as amended, the Congress directed that the site at Yucca Mountain be investigated for possible use as a site for a repository and established procedures for making a political decision on selecting the site following a technical determination on the suitability of the site. However, establishing a similar method for federal disposal of commercially generated low-level waste may be more difficult for several reasons. First, as recognized by the task force of the National Governors’ Association, disposal of commercially generated low-level waste is not so technologically complex that it requires federal management. 
Second, states with substantial federal lands have opposed efforts to place waste disposal facilities within their borders. In 1991, 21 western governors said that the west had assumed a large part of the national waste management burden. The governors pointed out that a western state is the host to DOE’s Waste Isolation Pilot Plant, which is a proposed repository for disposal of DOE’s transuranic waste, and the candidate repository site at Yucca Mountain. Also, at the time of the governors’ statement, two of the three existing facilities for disposing of commercially generated low-level waste were located in the west. According to the governors, the west has been asked to shoulder a large part of the national waste burden, because of the region’s geology, rainfall, and settlement patterns, while its environment and natural resources have been the lifeblood of the region. The governors said that the west should not sacrifice its environment to subsidize inadequate waste management practices in other parts of the country. Third, it is unclear that the federal government could be more successful than states in obtaining public acceptance of new waste disposal sites. When states sought responsibility for developing facilities for disposing of low-level waste, they argued that they could meet the needs and concerns of their citizens better than the federal government. States said that federal control over the selection of sites for disposal facilities would be more difficult because of longstanding public distrust of federal nuclear waste activities. More recently, the Task Force on Radioactive Waste Management established by the Secretary of Energy Advisory Board concluded that, despite some progress, there continues to be widespread lack of trust in DOE’s radioactive waste management activities. On a pragmatic level, the task force said that public trust and confidence is generally essential for agencies to effectively carry out their missions. 
The 1985 amendments to the low-level waste act made DOE responsible for disposing of the most hazardous class of commercially generated low-level waste. Thus, a modified method of placing disposal responsibility in the federal government is to make DOE responsible for disposing of still other, relatively hazardous, classes of low-level waste. The argument for this more modest federal assumption of disposal responsibility is that states might then find it easier to develop facilities for disposing of low-level waste that is relatively less hazardous. For example, the Illinois commission's decision in 1993 to reject a site for a disposal facility was, in part, based on the commission's uncertainty over whether the proposed engineered facility would contain the long-lived waste for the period of time, up to 500 years, that it would take for those radioactive materials to decay. On the other hand, federal assumption of responsibility for disposing of more commercially generated low-level waste would require the federal government to find a disposal solution for this waste and would not relieve the states of the need to develop facilities for disposing of the relatively large-volume classes of low-level waste with less concentrated long-lived materials. For several years, the possibility that DOE would treat and dispose of mixed waste—low-level radioactive waste mixed with hazardous materials—has been under consideration. In November 1990, the Low-Level Radioactive Waste Forum requested that DOE explore this possibility. Although DOE has not made a decision on this request, in October 1994, DOE's Assistant Secretary for Environmental Management said that the agency, in consultation with states, would consider incorporating disposal of commercially generated mixed waste into plans that DOE is preparing for managing mixed wastes located at its nuclear facilities.
Another alternative is increasing the private sector’s responsibility for developing and operating disposal facilities similar to the role the private sector had previously. Before the Low-Level Waste Policy Act of 1980 was enacted, the private sector had developed, owned, and operated disposal facilities regulated by the states or NRC. However, environmental problems occurred at some facilities, and states in which some of these facilities were located opposed the use of these facilities by waste generators nationwide. For these and other reasons, states concluded that they could best control their own destinies by forming compacts and assuming responsibility for developing disposal facilities. Private-sector responsibility for developing, owning, and operating disposal facilities for commercially generated low-level waste would be consistent with the role of the private sector in disposing of other waste materials, such as solid and hazardous wastes. Moreover, states’ previous environmental concerns may have been addressed, to some extent, by NRC’s issuance in 1982 of regulations governing development of disposal facilities. Finally, there is a recent precedent for private sector development of a low-level waste disposal facility. In 1988, the state of Utah, which belongs to the Northwest Compact, authorized a private company to develop and operate a disposal facility for certain kinds of high-volume, low-radioactivity low-level waste. The facility has since received licenses and permits required for the disposal of these wastes and operates under a resolution passed by the Northwest Compact. According to NRC officials, this facility does not accept routine operating waste from utilities. Moreover, the officials said that the bulk wastes that the facility does accept will not be accepted at most of the disposal facilities that states are developing. 
For at least two reasons, however, having the private sector develop and operate disposal facilities does not appear to be a favorable alternative. First, that approach would end states' ability, provided by the compact approach of the 1980 act, as amended, to restrict access to disposal facilities located within their borders to waste generators within the compact in which the state is a member. Second, finding a site for and developing a disposal facility appears to be at least as difficult as it was before the act was passed. Several other alternatives—from temporary storage to a ban on the commercial uses of radioactive materials—have also been offered. Critics of states' selections of candidate sites for disposal facilities, for example, have suggested that utilities store the low-level waste generated by operation of nuclear power plants at these plants and that all other low-level waste either be stored at nuclear power plants or some other central storage facility. Several states, including Connecticut, Illinois, Massachusetts, and New York, are considering such approaches, but none have adopted them. There are, however, several potential problems with the storage approach:
- If medical and academic waste generators must pay for a centralized facility solely for their waste, they may, when possible, opt for less costly treatment or storage alternatives.
- Finding a central storage site could be as difficult as finding and developing a disposal site and facility if local residents do not perceive that a storage facility poses less risk to them than a disposal facility. Earlier experience with the concept of a central facility for storing spent fuel illustrates this potential problem. In that case, some state, local, and environmental groups opposed DOE's plans to construct a storage facility because of concerns that it could become a permanent storage facility for the spent fuel.
- A state with a centralized storage facility might not be able to prevent waste generators in other states from shipping their wastes to the central storage facility.
- Because some of the stored low-level waste would probably be hazardous for more than 100 years, disposal, rather than temporary storage, would eventually be required.
Because of the long half-lives of some radioactive materials that become low-level waste, some critics of current state efforts to develop disposal facilities have suggested that commercial firms substitute shorter-lived materials that can be stored until they decay to a harmless level and/or recycle the longer-lived materials. According to researchers in the medical and biotechnology community, however, the use of shorter-lived materials is not always an option in their research. Representatives of some environmental groups have also recommended a moratorium on the generation of low-level waste until they are assured that the waste will be permanently managed in an environmentally sound manner. Such an approach would require a serious examination of the tradeoffs between reduced risk from nuclear waste and reduced benefits from nuclear materials in society. For example, a moratorium might diminish the ability to conduct biomedical research. Adopting a moratorium would, in effect, require repealing the current policy—established in the Atomic Energy Act of 1954, as amended—of encouraging peaceful uses of atomic energy. Finally, two alternatives that would result in the disposal of low-level waste outside the United States have been suggested. One such approach is shipping waste to another country. NRC has developed a proposed rule on licensing imports and exports of low-level waste for disposal.
In commenting on the proposed rule, some state officials said that they are concerned that the rule might encourage waste exports at the expense of new domestic disposal facilities, and others did not see the need for the proposed rule in their states because waste could not move out of their compacts without their approval. Also, current international agreements discourage or prohibit this practice. All nations are required to do their best to ensure that nuclear waste is not exported unless the sending and receiving nations approve and the parties agree it is in their best interests. A return to the earlier practice of dumping low-level waste in the Atlantic and/or Pacific Oceans is also an alternative. However, the Congress, in 1982, essentially banned ocean disposal, except for research purposes, and, in November 1993, the United States was among the signatories to an international agreement banning ocean disposal of radioactive waste for at least 25 years.
Pursuant to a congressional request, GAO reviewed state efforts to dispose of the low-level radioactive waste that is generated commercially within their borders. GAO found that: (1) 11 states plan to develop commercially generated low-level waste disposal facilities and the state of Washington plans to continue operating its existing disposal facility; (2) 4 states plan to complete facilities between 1997 and 2002, but the remaining states have yet to develop plans for their disposal facilities; (3) the slow progress of development is due to the controversial nature of nuclear waste disposal; (4) a smaller number of larger new facilities could accommodate the current volume of waste at less cost than a greater number of smaller facilities, but the volume of low-level waste could increase in the near future; (5) although new facilities will be necessary to store the waste in 33 states, the environmental effects of having 11 new facilities are unclear; and (6) shifting disposal responsibility from the states to the federal government could present significant challenges, and could undermine state progress in implementing the existing state approach.
One of the most significant challenges facing the District is to maintain the financial viability of the city. Earlier this year, District officials sounded the alarm that the District faces an imbalance between its long-term expenditure needs for program services and capital investment, and its capacity to generate revenues over the long run. In contrast with a cyclical imbalance caused by temporary economic downturns, the District suggests its imbalance is more fundamental in nature. These officials assert that the District faces a fiscal structural imbalance as the result of several factors, many stemming from the federal government’s presence in the city, the absence of a state to provide funding for the state-like services provided by the District, and restrictions on the District’s tax base. District officials have stated that the factors contributing to a fiscal structural imbalance have existed for years but that their effects had been masked during recent years of national and regional economic growth and increased tax revenues. As shown in figure 1, the District has projected operating budget shortfalls ranging from $67 million to $139 million between anticipated revenues and estimated baseline expenditures for each year during fiscal years 2002 through 2006 if corrections are not made. These projections assume a continuation of current tax policies and service levels into the future, without implementing changes to address the projected fiscal shortfalls. The operating deficit projections in figure 1 include the operating budget only and exclude the capital expenditure budget. Therefore, certain probable expenditures are not included in the above budget estimates, such as public schools’ infrastructure needs, needed repair of public roads, and Washington Metropolitan Area Transit Authority (WMATA) capital needs. 
District officials have expressed concern that if the fiscal structural imbalance issue is not addressed, it will cripple the city’s efforts to maintain financial viability and require the city to make drastic cuts in its budgets and related services to avoid future deficits. In addition, a March 14, 2002, study commissioned by the Federal City Council (FCC) concluded that the District is on a path leading to budget deficits. The study estimated that without corrective action, the District could face budget deficits of at least $500 million by fiscal year 2005 due to a substantial decrease in revenue growth and unbudgeted spending increases in several key areas. The study cited spending for public schools (including spending for special education), Medicaid, and WMATA as the most significant drivers of the growth in projected expenditure levels. The District’s definition of fiscal structural imbalance is premised on an imbalance between projected expenditures necessary to maintain the current level of services and revenues that will be raised under current tax and other revenue policies. Under the District’s definition, a current services analysis assumes the current level of services and revenue structure as the baseline for concluding whether a fiscal structural imbalance exists. A current services imbalance can develop for a variety of reasons, including expenditures growing more rapidly than expected revenues due to increasing workloads such as number of program recipients, a rapid growth rate in health care costs, or a decline in tax revenues. The District also points to its uniqueness and the fiscal issues stemming from its being the nation’s capital and having the federal presence, as well as its responsibility for services ordinarily provided by state government. 
Some current services imbalances are cyclical, rather than structural, in that revenues become insufficient to support existing levels of services during periods of economic decline but then return to sufficiency when the economy rebounds. In its August 2001 study, the Center on Budget and Policy Priorities (CBPP) notes that it is extremely difficult to determine the degree to which a fiscal imbalance in any state is structural, rather than cyclical. The CBPP reported that states are currently facing their worst financial crisis in 20 years, and they are responding to their budget shortfalls in a variety of ways. Some are using short-term fixes, such as tapping into rainy day funds or imposing temporary tax increases or spending cuts; others are using long-term fixes, such as imposing permanent tax increases or spending cuts. The revenue shortfalls projected by District officials for fiscal years 2002 through 2006, if accurate, would represent recurring deficits in the District’s current services budget position if corrective action is not taken. These projected shortfalls are premised on the continuation of current budget policy over a long-term period spanning economic cycles. They do not contemplate changes in budget policy, nor do they compare the District’s current budget policy with other jurisdictions. However, District officials also suggest that their current environment constrains their ability to respond to the projected imbalance through spending cuts, tax increases, or borrowing. For example, District officials point to deferred infrastructure improvements in public schools, roads, and utilities as the legacy of the long-term presence of a structural imbalance, low levels of service delivery in some programs, such as public education, and high tax rates in comparison to other states and local jurisdictions. 
Although District officials have not formally estimated the size of their reported fiscal imbalance, they have cited the following expenditure responsibilities as the primary factors contributing to such an imbalance: the District is not directly compensated for services provided to the federal government, such as public works and public safety, which the District values at $240 million annually; the District is responsible for state-like services such as human services, mental health services, Medicaid, and the University of the District of Columbia, which the District values at $487 million annually; and the District estimates that approximately 400,000 out-of-state vehicles travel on city roads per day without paying for road repairs, which the District values at $150 million per year. District officials also cite the following factors as contributing to limited revenue-raising capacity: 66 percent of the income earned by employees working in the District cannot be taxed by the District because the employees are nonresidents; 42 percent of the real property (or 27 percent of assessed property value) in the District is owned by the federal government and is thus exempt from taxation; an additional 11 percent of real property (excluding District-owned property, but including nonprofit organizations and embassies) also is tax exempt; congressionally imposed height restrictions on District buildings have reduced the city's population and economic density; and District tax rates and burdens on households and businesses are high in comparison to those in Virginia and Maryland, while the District's limited tax base is difficult to expand. The District faces some real constraints on revenue. The District, like all state and local governments, is unable to tax property owned by the federal government. 
District officials say they face a particular hardship because a larger proportion of their property is owned or specifically exempted by the federal government than is the case with most jurisdictions. The District has stated that, according to its real property tax records, 42 percent of its property is federal property. It is difficult to estimate the net fiscal impact of the presence of the federal government or other tax-exempt entities because of the wide variety of indirect contributions that these entities make to District revenues and the lack of information on the services they use. The presence of tax-exempt entities generates revenues for the District, even though they do not pay income or property taxes directly. For example, these tax-exempt entities attract residents, tourists, and businesses to the District. In addition, employees of the tax-exempt entities and employees of businesses that provide services to these entities pay sales taxes to the District. We have found no comprehensive estimates of these revenue contributions; however, studies of individual tax-exempt entities suggest that the amounts could be significant. Further, given the large portion of the private sector activity in the District that is linked to the presence of the federal government and other tax-exempt entities, it is unclear whether commercial property would fill the void if federally owned property were reduced to the average seen in other cities. In addition to the amount of nontaxable property in the District, the District government, unlike state governments, is prohibited by federal law from taxing the income earned in the District by nonresident individuals. States that have income taxes typically tax the income of nonresidents, although some states have voluntarily entered into reciprocity agreements with neighboring states in which they agree not to tax the incomes of each other's residents. 
States that impose income taxes also typically provide tax credits to their residents for income taxes paid to other states. In addition, some cities that have income taxes tax the incomes of commuters who work within their boundaries. These taxes are typically levied at a low flat rate (most of the ones we identified were between 1 and 2 percent) on city-source earnings. Other cities are not authorized by their state governments to levy commuter taxes. However, in cases where cities are not authorized to levy commuter taxes, the state governments can compensate, if they so choose, by redistributing some of the state tax revenues collected from residents of suburbs to central cities, in the form of grants to the city governments or direct state spending within the cities. District officials believe that it is unfair for the federal government to apply a restriction on their income tax base that does not also apply to the 50 states. Another argument commonly made in favor of removing this particular restriction on the District's taxing authority is that it would enable the District government to defray the costs of providing public services, such as road maintenance and fire and police protection, that benefit commuters. A recent study estimated that the average commuter increased total District expenditures by $3,016 per year, of which about $90 was for police and fire protection. Some local economists whom we interviewed noted that commuters already contribute to the financing of a portion of these services, even without a tax on their income. One recent study estimates that a typical daily commuter to the District pays about $250 per year in sales and excise taxes, parking taxes, and purchases of lottery tickets. Another study suggests that spending by commuters supports many jobs for District residents who are subject to the city's income tax. 
We were unable to find data on the amount of taxes paid directly by commuters, the tax revenues attributable to jobs supported by them, or the amount of money that the District must spend to extend services to them, nor have we assessed the accuracy of the estimates cited above. Consequently, we cannot determine conclusively whether the net fiscal impact of commuters in the absence of a commuter income tax is negative or positive. Regardless of the current net fiscal impact of commuters, the District’s finances clearly would benefit considerably from a tax on nonresidents’ incomes. The ultimate burden of a nonresident income tax for the District would not necessarily be borne by commuters into the District. The distribution of the burden would depend on the nature of the crediting mechanism that would be established under such a tax. For example, if the District’s tax were made fully creditable against the federal income tax liabilities of the commuters, as is proposed in the District of Columbia Fair Federal Compensation Act of 2002, then the federal government would bear the cost and would have to either reduce spending or make up for this revenue loss by other means. However, if the federal income tax credit was not available, and instead the states of Maryland and Virginia allowed their residents to fully credit any tax paid to the District against their state income tax liabilities, then those two states would suffer a revenue loss (relative to the current situation). The two states could respond to a District commuter tax by taxing the income of District residents who work within their jurisdictions or increasing the tax rates on all of their residents. If the District’s tax were not fully creditable against either the federal or state taxes, then the commuters themselves would bear additional tax burden. 
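The crediting possibilities discussed above determine who would ultimately bear such a tax. A minimal sketch under deliberately simplified assumptions (a hypothetical $100 tax, full dollar-for-dollar credits, and no behavioral response):

```python
# Incidence of a hypothetical District nonresident income tax under the three
# crediting scenarios described in the text. The $100 amount is illustrative.

def incidence(dc_tax, federal_credit=False, state_credit=False):
    """Return the extra burden borne by (commuter, home state, federal
    government), assuming any credit offsets the tax dollar for dollar."""
    if federal_credit:         # fully creditable against federal income tax
        return (0.0, 0.0, dc_tax)
    if state_credit:           # fully creditable against Maryland/Virginia tax
        return (0.0, dc_tax, 0.0)
    return (dc_tax, 0.0, 0.0)  # no credit: the commuter pays in full

print(incidence(100.0, federal_credit=True))  # federal government absorbs it
print(incidence(100.0, state_credit=True))    # the home state loses revenue
print(incidence(100.0))                       # the commuter bears the burden
```

In practice credits can interact and the incidence would likely be split among parties rather than falling entirely on one; the sketch only isolates the three polar cases the report describes.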
Although the District's overall warning that it faces structural challenges in balancing revenues and spending requirements should be taken seriously, the District's estimates of its spending requirements have serious limitations. The District does absorb certain costs associated with supporting services typically provided at the state level as well as with providing services to the federal government. However, the District's estimates of its costs to provide services to the federal government and its costs of providing state-like services are not supported with detailed data or analysis. Also, the District's estimates do not reflect municipal-type services provided directly by the federal government. In addition, the District's estimates of its fiscal structural imbalance do not include potential cost savings from improving management efficiency. Further, the District has developed its budget estimates based on the current level of services as the baseline going forward. According to District officials, no studies have been done to determine the level of services necessary, and the District continues to struggle to determine the level of services to provide, given the perceived political barriers to achieving structural changes in large programs such as public schools, Medicaid, and human services. District officials state that the District government performs state-like functions that contribute to what it considers a structural imbalance. Although the District has costs associated with certain state-like functions, it is important to note that the District also collects and retains state-like income and sales tax revenues to fund these functions and support the activities of some agencies. The District estimated the cost of state-like functions to be $487 million in fiscal year 2002. However, this estimate is based on very limited analytic support. 
Broad assumptions were made, and the analysis was based on a review of only one jurisdiction. To arrive at its cost estimate, the District identified state-like functions in 10 different District agencies for fiscal year 2002. To identify the state-like functions, District officials reviewed the State of Maryland's fiscal year 2002 operating budget to identify state funding to local governments and compared this information with the District's fiscal year 2002 operating budget. Based on this review and comparison, District officials identified the following 10 District agencies that provide some state-like functions: Department of Mental Health, Department of Human Services, Child and Family Services Agency, University of the District of Columbia, Department of Motor Vehicles, Office of Tax and Revenue, Department of Insurance and Securities Regulation, Office of Cable and Television Communications, and District of Columbia National Guard. Using the Maryland state budget as a guide, District officials used their judgment to assign a "state allocation ratio" to each function in the 10 identified District agencies. For example, if a function, such as Temporary Assistance to Needy Families, received more than half of its funding from the state, then District officials assigned that function a 100 percent state allocation ratio. If a function received less than half of its funding from the state, the District did not consider it a state-like function and gave it a zero state allocation ratio. District officials considered the Office of Tax and Revenue both a state and local function and assigned it a 50 percent state allocation ratio. Two other District agencies, the Department of Human Services and the Child and Family Services Agency, also had a combination of state and local functions and therefore had a weighted state allocation ratio. 
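The allocation method just described reduces to a weighted sum of agency budgets. The following sketch applies the District's stated rules to hypothetical budget figures; the dollar amounts and the Department of Human Services ratio are invented for illustration and are not the District's numbers.

```python
# Sketch of the District's state allocation ratio method: a function gets a
# 100 percent ratio if a like Maryland function was funded mostly from state
# funds, 0 percent otherwise, with judgment-based splits (e.g., 50 percent for
# the Office of Tax and Revenue) and weighted ratios for mixed agencies.

agencies = {
    # agency: (annual budget in $ millions, state allocation ratio)
    "Department of Mental Health":  (250.0, 1.00),  # treated as fully state-like
    "Office of Tax and Revenue":    (80.0, 0.50),   # half state, half local
    "Department of Human Services": (300.0, 0.60),  # hypothetical weighted ratio
}

# The estimate is simply the ratio-weighted sum of agency budgets.
state_like_cost = sum(budget * ratio for budget, ratio in agencies.values())
print(f"Estimated state-like cost: ${state_like_cost:.1f} million")
# -> Estimated state-like cost: $470.0 million
```

Because every ratio rests on judgment rather than documented cost data, small changes in the assigned ratios shift the total substantially, which is the limitation the report identifies.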
District officials acknowledged that the state allocation ratios used to create their cost estimates were primarily based on their own judgment and knowledge of state and local programs. Other than providing a summary of Maryland's state budget, District officials were unable to provide additional documentation to support these decisions. District officials emphasized that, as with any of the cost estimates the District produced to illustrate what it considers a fiscal structural imbalance, these were only estimates. They cautioned that these estimates should not be added together to represent an aggregate cost resulting in a fiscal structural imbalance. A District official said that these estimates were meant only to illustrate different ways of understanding the structural imbalance issues that face the District. The services identified by the District as being provided to support the federal government's presence are primarily administered by the District's public works and public safety and justice agencies and include: police protection for federal employees and for federally sponsored or sanctioned events in the District, fire suppression for federal buildings, emergency medical treatment for federal employees, and snow removal and street repairs on streets used by federal vehicles and by federal workers commuting to work in the District. District officials estimated that the services provided to the federal government cost the District up to $240 million annually. However, the District did not have a detailed list of actual services provided to the federal government to support its cost estimate. District officials estimated that 27 percent of the total assessed value of property in the District is owned by the federal government. 
Accordingly, District officials based their estimate of the cost of services provided to support the federal government's presence in the District on 27 percent of the proposed budgets for all of the District's public works and public safety and justice agencies. However, these budgets include functions, such as the Department of Motor Vehicles, that provide minimal services to the federal government. The District's cost estimate for services provided to the federal government does not consider the services provided by the federal government to the District or expenditures made by the federal government for its own property, when in fact many federal agencies and properties provide their own public safety, security, and public works services. The National Park Service, for example, provides an extensive network of historical, educational, and recreational opportunities within the District. The federal government provides upkeep, maintenance, and restoration of facilities, including not only well-known national sites such as the National Mall or Ford's Theatre, but also parks such as those on Capitol Hill, inner city medians, squares, and traffic circles, as well as other areas that provide urban green space within the city. According to the U.S. Department of the Interior's fiscal year 2003 budget request, operating costs for these parks will be $59 million. Federal law enforcement agencies operating within the District include large forces, such as the U.S. Capitol Police with more than 1,400 officers, and smaller forces, such as the Smithsonian Institution Protective Services with an estimated 600 officers. In addition, the General Services Administration's Federal Protective Service provides law enforcement services to some federal properties throughout the District. 
These services include a share of police protection from disruptions by major demonstrations, perimeter security for federal buildings, criminal investigations to reduce crime, and training of security personnel. The District's estimates of its fiscal structural imbalance are premised on maintaining into the future the existing level and costs of services now provided. The District's estimates did not address potential cost savings that could be achieved by improving management efficiency at the agency level. Reducing expenditures by improving efficiency could reduce any imbalance between the District's revenues and expenditures without negatively affecting program service delivery to its citizens. For example, the March 2002 McKinsey & Company, Inc. study on the District's financial position concluded that approximately $110 million to $160 million in annual cost savings could be achieved in health, human services, public safety, transportation, and the District of Columbia Public Schools (DCPS) by fiscal year 2005. If achieved, these potential savings could mitigate a fiscal structural imbalance in the District. However, considerable uncertainty exists about these estimates. The District could also potentially achieve cost savings by correcting problems that have resulted in disallowed Medicaid costs. The District will not be receiving over $100 million of Medical Assistance Administration cost reimbursements for costs incurred in prior years. These cost reimbursements were disallowed for reasons including failure to file timely claims or to provide adequate support for claims submitted. Nonreimbursed costs are paid out of local funds, not federal funds. DCPS offers another example of potential cost savings. In the DCPS' fiscal year 2001 Comprehensive Annual Financial Reports (CAFRs), District officials reported a $64.5 million deficit in locally appropriated funds. 
During the fiscal year 2001 audit, the District’s financial statement auditors identified material weaknesses within the DCPS accounting and financial reporting processes, such as the monitoring of expenditures and accounting for Medicaid expenditures related to services provided to special education students. DCPS could become more efficient by improving its internal controls over financial accounting and reporting and reducing the risk of overspending within the DCPS programs. Public education has been a large driver of expenditures in the District, representing $1.1 billion of expenditures in fiscal year 2001. Since 1999, the annual increase in the District’s spending for public education has ranged between 19.4 and 21.9 percent. Clearly, such spending increases are difficult to sustain. On August 5, 1997, the Congress passed the National Capital Revitalization and Self-Government Improvement Act, referred to as the Revitalization Act. The Revitalization Act made substantial changes in the financial relationship between the federal government and the District of Columbia as well as in the management of the District government. The District and several nonprofit public interest organizations have stated that the Revitalization Act, while not fully addressing the District’s fiscal challenges, is an excellent first step in helping the District to move towards long-term financial stability. 
The Revitalization Act made the following adjustments in the financial relationship between the District and the federal government: it eliminated the federal government's annual federal payment to the District, and it shifted to the federal government the financial responsibilities and, in some instances, administrative responsibilities, for the following justice functions in the District: incarceration of sentenced adult felons (the Federal Bureau of Prisons assumed responsibility, and the District's Lorton Correctional Complex was recently closed); the Superior Court, Appeals Court, and Court System (the Pretrial Services Agency and Public Defender Service functions, and the D.C. Parole Board were abolished); and the District Retirement Program covering judges. Also under the Revitalization Act, the federal government assumed financial and administrative responsibilities for one of the District's largest fiscal burdens, which it inherited from the federal government as part of the transition to Home Rule in 1973: its unfunded pension liability for vested teachers, police, firefighters, and judges. In 1998, the federal government assumed the accrued pension cost of $3.5 billion that existed at the close of 1997. The District remains responsible for funding benefits for services rendered after June 30, 1997, and continues the plan under substantially the same terms. In addition, the Revitalization Act was part of a larger act, the Balanced Budget Act of 1997, which increased the federal share of District Medicaid payments from 50 to 70 percent. Prior to the Revitalization Act, the District had been receiving a federal payment since the mid-1800s due to the District's unique relationship with the federal government. 
The Congress recognized that the District's ability to raise revenues was affected by a number of legal and practical limitations on its authority: the immunity of federal property from taxation; the building height restriction, which has a limiting effect on commercial property values; the prohibition against the District passing a law to tax the income of nonresidents; and the restriction on imposing sales taxes on military and diplomatic purchases. Although the Revitalization Act repealed the federal payment to the District of Columbia, it also authorized a federal contribution. The Revitalization Act does not present a formula or methodology for translating the generalized notion of compensating the District for the federal government's presence into a predictable dollar amount, nor does it require that a contribution be made. The changes to the District's finances resulting from the Revitalization Act affected both the District's revenues and expenditures. The District estimates that the net benefit of the Revitalization Act has ranged from a net positive low of $79.1 million to a high of $203 million per year during the period 1998 through 2002. A detailed breakout of the estimated financial impact of the act on the District's revenues and expenditures is presented in appendix II. The District's estimates of its fiscal structural imbalance point to many specific factors but do not constitute a comprehensive assessment of underlying imbalances between its expenditures and revenue capacity. The District has not yet determined whether, even under the constraints it asserts, it has the capacity to provide a level of services comparable to those provided by other cities with similar needs and costs. The District's estimates essentially use a current services approach to analyzing its fiscal structural imbalance. 
Even if the District is able to resolve the measurement and analytical problems discussed in this report, this approach would be limited because it assumes the desirability and continuation of current service levels and tax policies. An alternative approach would measure the existence of a fiscal structural imbalance by comparing the District’s spending and revenue capacity to levels in comparable jurisdictions. This approach assesses the ability of the District to provide at least an average level of services adjusted for its unique demographic profile and costs at an average tax burden. The main advantage of this approach is that the measure of fiscal structural imbalance reflects the underlying social and economic conditions affecting the cost of providing public services as well as the underlying strength of the tax base. For instance, this measure takes into account the specific factors influencing the demand for public services (e.g., a large number of school age children, road infrastructure) and its ability to fund these services with a tax burden on local residents that is comparable to other jurisdictions providing comparable services. Under this framework, the structural position of a jurisdiction is not tied to current service levels, or spending or tax policies. From the perspective of this more comprehensive, comparative approach, a jurisdiction could suffer from a fiscal structural imbalance even if its current budget were balanced—in this case, the imbalance would be reflected in lower services, higher taxes, or deterioration of infrastructure when compared to averages in other communities. On the other hand, a jurisdiction with chronic current deficits may not have a fiscal structural imbalance if its deficits were prompted by spending levels or tax rates out of line with comparable jurisdictions with similar needs. At the present time, however, comprehensive data are not readily available to do such a comparative assessment. 
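The comparative approach described above can be reduced to a simple per capita identity: benchmarked expenditures (average spending adjusted for workload and costs) minus revenue capacity (the tax base times an average tax burden). A sketch with wholly hypothetical inputs:

```python
# Representative-system style gap measure, in the spirit of the RES/RTS
# comparisons cited later in the report. All input values are hypothetical.

def structural_gap(avg_spending_pc, need_index, avg_tax_burden, tax_base_pc):
    """Per capita gap between benchmarked expenditures and revenue capacity.

    avg_spending_pc: national average per capita spending on services
    need_index:      workload/cost adjustment (1.0 = average needs and costs)
    avg_tax_burden:  average revenue raised per dollar of tax base
    tax_base_pc:     the jurisdiction's taxable resources per capita
    """
    need = avg_spending_pc * need_index      # benchmarked expenditures
    capacity = avg_tax_burden * tax_base_pc  # revenue at the average burden
    return need - capacity                   # > 0 suggests a structural gap

# High needs but also high revenue capacity, the District's profile as the
# report describes it, can push the measured gap either way:
print(structural_gap(avg_spending_pc=6000, need_index=1.25,
                     avg_tax_burden=0.25, tax_base_pc=28000))
# -> 500.0
```

A jurisdiction with high needs but an even stronger tax base can show no structural gap under this measure despite running current-services deficits, which is why the report treats the two approaches as distinct.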
Preliminary indications suggest that the District would have to sustain a high level of expenditures compared to other state and local areas to provide an average level of services adjusted for its unique demographic profile and costs. However, when compared to other entities, the city also has among the highest revenue capacity, or ability to raise revenue from its own sources, even accounting for the federally imposed constraints on the city’s revenue-raising authority. The most recent comprehensive comparison that we found uses the Representative Expenditure System (RES) to estimate the relative expenditure needs of states together with their localities, or in the terms used in this report, the benchmarked expenditures of the states and localities. This study indicates that, in 1996, the District’s per capita relative expenditures were higher than those of any state. However, this measure has certain shortcomings that could result in understatements of the District’s relative expenditures. The two most recent cross-state comparisons of revenue capacity indicate that the District’s revenue capacity per capita compares favorably to that of most states. These studies use two fundamentally different measures of revenue capacity, both of which largely take into account the fact that the District is prohibited from taxing the District-source incomes of nonresidents. For 1999, the most recent year for which the Department of the Treasury has estimated the Total Taxable Resources (TTR) of states, the District’s value for this particular measure of revenue capacity exceeded that of every state, except Connecticut. In 1997 and 1998, the District’s value was higher than that of every state. The most recent available study that uses the Representative Tax System (RTS) methodology for estimating revenue capacity indicates that, in 1996, the District’s revenue capacity per capita exceeded that of 46 states. 
However, results of these studies are imprecise and do not allow for conclusions on whether the District has a structural imbalance. The measures of the benchmarked expenditures and revenue capacity used in these studies are out of date. Moreover, as acknowledged by the author of the referenced study on expenditures, the estimates of the spending needed to realize average levels of service do not reflect certain relevant workload and cost differences across jurisdictions. Ultimately, the revenue capacity and expenditure needs would have to be considered together to determine whether the District has the revenue capacity to provide for at least average levels of services for its unique workload and costs with an average tax burden. Such a comparative analysis would need to adjust for the fact that the District may not directly compare to any current jurisdiction in the nation, owing to its unique combination of state and city functions and revenues. We are currently undertaking such an assessment and will report the results of our study next year. While it has made significant progress over the past several years, the District, similar to many other jurisdictions, continues to face a series of substantial, long-term challenges to its financial viability. Addressing these challenges requires continued dedicated leadership to make the difficult decisions and trade-offs among competing needs and priorities. Presently, insufficient data or analysis exist to discern whether or to what extent the District is, in fact, facing a fiscal structural imbalance. On the revenue side, the District clearly has constraints on its ability to increase its tax base. However, the District's estimates of its possible fiscal structural imbalance have limitations and did not address the levels or costs of services for its citizens in the long term, whether such services could be supported by its present tax structure or tax base, or cost savings that could be achieved from management efficiencies. 
The available studies comparing revenue capacity and expenditures across jurisdictions are imprecise and some may not be applicable to the District. As such, the Congress would benefit from more systematic information about the District as it considers proposals for addressing the fiscal structural imbalance that the District is currently asserting exists. A fundamental analysis of the District’s underlying capacity to finance at least an average service level in relation to its needs can help determine if there is a fiscal structural imbalance. Such an analysis would provide a stronger foundation for decision makers at all levels to address the District’s financial condition. We currently have ongoing work in this area and plan to issue a future report with a more comprehensive analysis of the District’s long-term financial condition. Therefore, we are not making any recommendations at this time. In responding to a draft of this report, both the Mayor and the Chief Financial Officer of the District stated their belief that the District faces a fiscal structural imbalance, but agreed that further analysis of the District’s fiscal situation is needed because existing data and analysis are not sufficient to discern the degree to which the District is, in fact, facing a structural imbalance. The District reiterated the general areas it believes are drivers of the reported fiscal imbalance, and, in the District CFO’s response, suggested that the annual imbalance was roughly twice the amount reported earlier this year. However, as we stated in our report, we concluded that insufficient data and analysis exist to substantiate the District’s earlier estimates of its reported structural imbalance. In addition, as stated in our report, the District’s estimates of its costs for providing services to the federal government and state-like services lack detailed support and have limitations. 
We have work ongoing in this area and plan to issue a future report with a more comprehensive analysis of the District's long-term financial condition. Our future analysis will consider the extent to which the components of the District CFO's estimates and other important factors, including those where the District has advantages and disadvantages relative to other jurisdictions, impact the District's overall fiscal situation. The Mayor and the District's CFO stated that the District will support our efforts by providing necessary information and assistance. We are sending copies of this report to the Ranking Minority Member of the Subcommittee on the District of Columbia, House Committee on Appropriations, and to other interested congressional committees. We are also sending copies to the Mayor of the District of Columbia, the Chair, DC Council, City Administrator/Deputy Mayor for Operations, Chief Financial Officer, and Inspector General. Copies of this report will also be made available to others upon request. Please contact me at (202) 512-9471 or Patricia Dalton at (202) 512-6737 or by e-mail at franzelj@gao.gov or daltonp@gao.gov if you or your staff have any questions concerning this report. To determine how the District and other jurisdictions define fiscal structural imbalance, including the factors that contribute to the District's reported imbalance, we interviewed and obtained information about fiscal structural balance and imbalance from officials in various District offices, analyzed reports and information received to define a fiscal structural imbalance, and analyzed the District's general fund revenue and expenditures in fiscal year 2001 and prior years to identify significant fluctuations and programs that were driving costs. To provide information on the constraints on the District's revenues, we interviewed officials from the office of the District's CFO and several local experts on the District's economy and finances. 
We also reviewed a number of studies prepared by the District, independent commissions, and other researchers that contained information, evaluations, and estimates relating to these constraints. To provide information on the District’s estimates of its spending requirements, we interviewed District officials and analyzed District budget documents and financial statements. To analyze the services provided by the District to support the federal government, we interviewed District officials and analyzed relevant supporting information, such as budgets and financial plans. We also reviewed relevant information from the General Services Administration and other federal agencies on the costs and the types of services the federal government provides to its own property in the District. To identify and analyze the functions that the District contends are state-like functions, we interviewed District officials and requested and analyzed pertinent supporting information. We also reviewed an April 15, 1997, study by the D.C. Financial Control Board entitled “Toward A More Equitable Relationship: Structuring the District of Columbia’s State Functions.” This study compared the District’s governmental functions to eight similar cities that were selected based on population size, degree of urbanization, the ratio of employed persons to total population, and other factors. In addition, we interviewed several local experts on the District’s economy and finances to obtain their perspective on the state-like functions performed by the District and the expenditures the District makes related to the federal presence.
To address the question of the financial adjustments to the District of Columbia’s finances as a result of the Revitalization Act, we reviewed relevant provisions of the Balanced Budget Act of 1997; relevant provisions of the Revitalization Act; relevant provisions of the District of Columbia Home Rule Act; the District of Columbia Appropriations Acts for fiscal years 1998 through 2002; analyses of the impact of the Revitalization Act on the District’s budget prepared by the Congressional Research Service; the Operating Budget and Financial Plans of the District of Columbia for fiscal years 1998 through 2002 and the Proposed Operating and Financial Plan for fiscal year 2003; prior GAO reports on District government financial operations; and the Department of the Treasury Accountability Report, Fiscal Years 1998 through 2001. We also met with District officials and obtained their documentation related to their projected net savings from the Revitalization Act. To provide information on the District’s revenue capacity compared to other jurisdictions, we reviewed and summarized studies from the District’s CFO’s Office, the U.S. Department of the Treasury, and the relevant economic literature. We conducted the work used to prepare this report from February to July 2002 in accordance with generally accepted government auditing standards. As stated previously, our work on this matter is ongoing. The Mayor and the CFO of the District of Columbia provided comments on a draft of this report. Those comments are reprinted in appendixes III and IV, respectively, and have been incorporated in the report as appropriate. Tables 1, 2, and 3 present the District’s calculations of the projected net benefits from the Revitalization Act on the District’s budget for fiscal years 1998 through 2002.
As shown in table 1, the District estimates that the net benefit of the Revitalization Act has ranged from a low of $79.1 million to a high of $203 million a year during the period 1998 through 2002. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
The District of Columbia has historically faced many challenges due to its unique circumstances and role as the nation's capital. After several years of struggling with financial crises and insolvency in the early 1990s, the District has significantly improved its financial condition by achieving five consecutive balanced budgets, an upgraded bond rating, and unqualified or "clean" opinions on its financial statements. More recently, however, District officials have asserted that the District faces a fiscal structural imbalance as a result of several factors, including the federal government's presence in the city, the absence of a state to provide funding for the state-like services the District provides, and restrictions on the District's tax base. The District argues that it faces a fiscal structural imbalance between its revenues and expenditures that undermines its capacity to meet its current responsibilities. In contrast to a cyclical fiscal imbalance caused by temporary economic downturns, the District suggests that its imbalance is longer term and more fundamental and, therefore, structural in nature. The District's estimated measures of fiscal structural imbalance are based on the continuation of current budget policy over a longer term period spanning economic cycles, but do not consider the results of policy alternatives. District officials have cited constraints they face in raising revenues as well as what they assert are unique expenditure responsibilities stemming from the District's position as a federal city that must also provide state-like functions. On the revenue side, unlike state governments, the District is prohibited by federal law from taxing the incomes of nonresidents working in the District. On the spending side, District officials state that they are uniquely burdened by the responsibilities of a state and by requirements to provide services to the federal establishment.
However, the District's estimated costs associated with providing state-like services are not supported by detailed analysis and data, and they are derived from cost allocation formulas largely based on the judgment of District officials. The District received some federal relief through the National Capital Revitalization and Self-Government Improvement Act of 1997, which required the federal government to take over certain services in such areas as criminal justice, transferring their financing from D.C. taxpayers to federal taxpayers as a whole. In addition, the federal government assumed financial and administrative responsibilities for one of the District's largest fiscal burdens, which it inherited from the federal government as part of the transition to Home Rule in 1973--its unfunded pension liability for vested teachers, police, firefighters, and judges. Also, the federal government's share of the District's Medicaid payments was increased from 50 to 70 percent. Although the District's estimates point to many specific factors, they do not constitute a comprehensive assessment of imbalances between expenditures and revenue capacity. The District has not performed the analysis to determine whether it has the capacity to provide a level of services comparable to those provided by other cities with similar needs and costs. As a practical matter, such an analysis is key to determining the presence of an underlying structural imbalance in the District's finances.
SBA was established in 1953, but its basic mission dates to the 1930s and 1940s when a number of predecessor agencies assisted small businesses affected by the Great Depression and, later, by wartime competition. The first of these, the Reconstruction Finance Corporation, was abolished in the early 1950s; SBA was established by the Small Business Act of 1953 to continue the functions of the previous agencies. By 1954, SBA was making business loans directly to small businesses, guaranteeing loans made by banks, making loans directly to victims of disasters, and providing a wide range of technical assistance to small businesses. Today, SBA’s stated purpose is to promote small business development and entrepreneurship through business financing, government contracting, and technical assistance programs. SBA also serves as a small business advocate, working with other federal agencies to, among other things, reduce regulatory burdens on small businesses. Most SBA financial assistance is now provided in the form of guarantees for loans made by private and other institutions, but the agency’s disaster program remains a direct loan program and is available to homeowners and renters who are affected by disasters of any kind and to all businesses, regardless of their size, to cover physical damages. At the end of fiscal year 2005, SBA had authority for over 4,000 full-time employees and budgetary resources of approximately $1.1 billion. Providing small businesses with access to credit is a major avenue through which SBA strives to fulfill its mission. The 7(a) loan program, which is SBA’s largest business loan program, is intended to serve small business borrowers who cannot obtain credit elsewhere. Because SBA guarantees up to 85 percent of each 7(a) loan made by its lending partners, there is risk to SBA if the loans are not repaid. SBA is to ensure that lenders provide loans to borrowers who are eligible and creditworthy.
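The 85 percent figure above determines how default risk is split between SBA and its lending partners. The arithmetic can be sketched as follows; this is purely illustrative (the function and its name are invented for this example, and actual 7(a) guarantee percentages vary by loan size and program):

```python
# Illustrative sketch, not SBA's actual model: on a guaranteed 7(a) loan,
# SBA's maximum exposure is the guaranteed share of the outstanding
# balance; the lender retains only the unguaranteed remainder.

def sba_exposure(outstanding_balance: int, guarantee_pct: int = 85) -> float:
    """Maximum amount SBA would pay the lender if the loan defaulted."""
    if not 0 <= guarantee_pct <= 85:
        raise ValueError("7(a) guarantees run up to 85 percent")
    return outstanding_balance * guarantee_pct / 100

# On a $100,000 loan with the maximum 85 percent guarantee, SBA bears
# $85,000 of the default risk and the lender only $15,000.
print(sba_exposure(100_000))            # 85000.0
print(100_000 - sba_exposure(100_000))  # 15000.0
```

This asymmetry is the reason the statement stresses lender oversight: as the guaranteed share rises, the lender's own financial stake in screening borrowers shrinks.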
Therefore, strong oversight of lenders by SBA is needed to ensure that qualified borrowers get 7(a) loans and to protect SBA from financial risk. As of September 30, 2005, SBA’s portfolio of 7(a) loans totaled $43 billion. In administering the 7(a) program, SBA has evolved from making loans directly to depending on lending partners, primarily banks that make SBA guaranteed loans. SBA’s other lending partners are Small Business Lending Companies (SBLCs)—privately owned and managed, non-depository lending institutions that are licensed and regulated by SBA and make only 7(a) loans. Unlike SBA’s bank lending partners, SBLCs are not generally regulated by financial institution regulators. Since the mid-1990s, when SBA had virtually no oversight program for its 7(a) guaranteed loan program, the agency has established a program and developed some enhanced monitoring tools. We have conducted four studies of SBA’s oversight efforts since 1998 and made numerous recommendations related to establishing a lender oversight function and improving it. Although we sometimes repeated recommendations in more than one report because SBA had not acted to address them, SBA has now addressed many of the outstanding recommendations and is in the process of addressing others. Prior to December 1997, SBA’s procedures required annual on-site reviews of lenders with more than three outstanding guaranteed loans. But in a June 1998 study, we could not determine from the district offices’ files which lenders met this criterion and should have been reviewed. In the five SBA district offices we visited, we found that about 96 percent of the lenders had not been reviewed in the past 5 years and that some lenders participating in the program for more than 25 years had never been reviewed. When we did our study, SBA was implementing a central review program for its “preferred” lenders (those SBA certifies to make loans without preapproval).
The Small Business Programs Improvement Act of 1996 required SBA to review preferred lenders annually or more frequently. In our 1998 report, we recommended that SBA establish a lender review process for all of its 7(a) lenders, including the SBLCs. In 1999, SBA established the Office of Lender Oversight (OLO) and charged it with, among other duties, managing lender reviews, including safety and soundness examinations of SBLCs. In the same year, SBA contracted with the Farm Credit Administration (FCA)—the safety and soundness regulator of the Farm Credit System—to perform examinations of SBLCs. Numerous deficiencies were identified in those first examinations, but the SBLCs and SBA responded positively to address the recommendations. SBA continues its contracting arrangement with FCA. It was during our 2000 study on oversight of SBLCs that we first recommended that SBA clarify its authority to take enforcement actions, if necessary, against SBLCs, and to seek any statutory authority it might need to do so. We made this recommendation again in 2002 and in 2004 and included a call to clarify procedures for taking actions against preferred lenders as well. We recommended that SBA provide, through regulation, clear policies and procedures for taking enforcement actions against preferred lenders or SBLCs in the event of continued noncompliance with its regulations. During this time, SBA sought appropriate authority from Congress to take enforcement actions against SBLCs similar to those of other regulators of financial institutions, such as cease-and-desist and civil money penalty powers. Congress provided SBA enforcement authority over non-bank lenders in late 2004, and SBA announced related delegations of authority in the Federal Register in April 2005 to clarify responsibilities within the agency. SBA officials have told us that they will issue related regulations in 2006.
Our 2002 study focused more broadly on the relatively new OLO and found that the agency had made more progress in developing its lender oversight program. OLO had developed guidance, centralized the lender review processes, and was performing more reviews of its lenders. We did, however, find some shortcomings in the program and made recommendations for improving it. For example: While elements of the oversight program touched on the financial risk posed by preferred lenders, weaknesses limited SBA’s ability to focus on, and respond to, current and future financial risk to its portfolio. Neither the lender review process nor SBA’s off-site monitoring adequately focused on the financial risk lenders posed. The reviews used an automated checklist to focus on lenders’ compliance with SBA’s 7(a) processing, servicing, and liquidation standards. The reviews did not provide adequate assurance that lenders were sufficiently assessing borrowers’ eligibility and creditworthiness. We recommended that SBA incorporate strategies into its review process to adequately measure the financial risk lenders pose to SBA, develop specific criteria to apply to the “credit elsewhere” standard, and perform qualitative assessments of lenders’ performance and lending decisions. By 2004, as I will discuss in a moment, we found that SBA had made progress in its ability to monitor and measure the financial risk lenders pose but had not developed criteria for its credit elsewhere standard. Although SBA had taken a number of steps to develop its lender oversight function, the placement of its OLO within the Office of Capital Access (OCA) did not give OLO the necessary organizational independence it needed to accomplish its goals. OCA has other objectives, including promoting the lending program to appropriate lenders. We recommended that SBA make lender oversight a separate function and establish clear authority and guidance for OLO. 
SBA has taken several steps to address this recommendation but has not made OLO an independent office. In the 2005 delegations of authority published in the Federal Register, SBA specified that a Lender Oversight Committee (comprised of a majority of senior SBA officials outside of OCA) would have responsibilities for reviewing reports on lender-oversight activities; OLO recommendations for enforcement action; and OLO’s budget, staffing, and operating plans. SBA officials believe that these and other measures will ensure sufficient autonomy and authority for OLO to independently perform its duties. These measures appear to provide the opportunity for more independence for OLO, but we have not evaluated how the measures are actually working. Our most recent review of SBA’s oversight efforts, completed in June 2004, focused on the agency’s risk management needs and its acquisition and use of a new loan monitoring service. Using an assessment of best practices, we determined that SBA would need to base its capabilities for monitoring its loan portfolio and lender partners on a credit risk management program. Largely because SBA relies on lenders to make its guaranteed loans, it needs a loan and lender monitoring capability that will enable it to efficiently and effectively analyze various aspects of its overall portfolio of loans, its individual lenders, and their portfolios. While SBA must determine the level of credit risk it will tolerate, it must do so within the context of its mission and its programs’ structures. Since SBA is a public agency, its mission obligations will drive its credit risk management policies. For example, different loan products in the 7(a) program have different levels of guarantees. These and other differences influence the mix of loans in SBA’s portfolio and, consequently, would impact how SBA manages its credit risk. 
Such a credit risk management program would likely include a comprehensive infrastructure—including skilled personnel, strong management information systems, and functioning internal controls related to data quality—along with appropriate methodologies and policies that would ensure compliance with SBA criteria. In 2003, SBA contracted with Dun and Bradstreet for loan monitoring services. These services could enable the agency to conduct the type of monitoring and analyses typical of “best practices” among major lenders and recommended by financial institution regulators. The services SBA obtained reflect many best practices, particularly those related to infrastructure and methodology, and can facilitate a new level of sophistication in SBA’s oversight efforts. The services also give SBA a way to measure the financial risk posed by its lending partners, and analyze loan and lending patterns efficiently and effectively. However, SBA did not develop the comprehensive policies it needed to implement the best practices as we recommended. SBA officials have told us that they have taken steps to address this recommendation. For example, the management plan governing the agency’s relationship with Dun and Bradstreet addresses a process for continuous improvement. SBA has also established the Lender Oversight Committee and a Portfolio Analysis Committee to review portfolio performance. SBA officials told us that these committees meet frequently. They also described the type of analyses of the loan portfolio and individual lenders made available for review and discussion by the committees, and provided examples of these analyses. Although these developments could provide the tools for risk management that we envisioned, we have not evaluated them. Since the late 1990s, SBA has taken steps to address other management challenges that affect its ability to manage its business loan program and the technical assistance it provides small businesses.
Information technology, human capital, and financial management have posed challenges for SBA, as we have noted in special reports to Congress. SBA has now acquired the ability to monitor its portfolio of business loans through its arrangement with Dun and Bradstreet, as mentioned earlier. SBA took this positive step after an unsuccessful attempt to establish a risk management database as required by the Small Business Programs Improvement Act of 1996. We monitored the agency’s progress as it attempted to meet this challenge on its own. When we reviewed SBA’s plans in 1997, we found that it had not undertaken the essential planning needed to develop the proposed system. We periodically reported on SBA’s progress in planning and developing the loan monitoring system beginning in 1997. From 1998 to 2001, SBA’s estimate for implementing the system grew from $17.3 million to $44.6 million. By 2001, SBA had spent $9.6 million for developmental activities, but had never completed the mandated planning activities or developed a functioning loan monitoring system. In 2001, Congress did not appropriate funds for the loan monitoring system and instead permitted SBA to use reprogrammed funds, provided that SBA notify Congress in advance of using the reprogrammed funds. Congress also directed SBA to develop a project plan to serve as a basis for future funding and oversight of the loan monitoring system. As a result, SBA suspended the loan monitoring system development effort. Of the $32 million appropriated for the loan monitoring system effort, about $14.7 million remained. In 2002, SBA contracted for assistance to identify alternatives and provide recommendations for further developing a loan monitoring system. This effort led to SBA awarding a contract to Dun and Bradstreet in April 2003 to obtain loan monitoring services, including loan and lender monitoring and evaluation; and risk management tools.
The contract includes four 1-year options at an average cost of approximately $2 million a year. In 2001, we reported on SBA’s organizational structure and the challenges it presented for SBA to deliver services to small businesses. We reviewed how well SBA’s organization was aligned to achieve its mission. We found a field structure that did not consistently match SBA’s mission requirements. This was caused by past realignment efforts during the mid-1990s that changed how SBA performed its functions, but left some aspects of the previous structure in place. Among the other weaknesses we identified were ineffective lines of communication; confusion over the mission of district offices; and complicated, overlapping organizational relationships. SBA began realigning its organization, operations, and workforce to better serve its small-business customers in the 1990s. With less responsibility for direct lending and a declining operating budget, SBA streamlined its field structure by downsizing its 10 regional offices, moving the workload to district or headquarters offices, and eliminating most of the regional offices’ role as the intermediate management layer between headquarters and the field. SBA created the Office of Field Operations, largely to represent the field offices in headquarters and to provide guidance and oversight to field office management. In 2002, the agency planned to approach its 5-year transformation efforts in phases, testing a number of initiatives in order to make refinements before implementing the initiatives agencywide. These efforts are ongoing.
SBA’s current transformation objectives are to: streamline the Office of Disaster Assistance (ODA) by realigning offices, employees, and space to better serve disaster victims and leverage use of the new disaster loan processing system; centralize all 7(a) loan processing in two centers to standardize procedures and reduce the workforce required for this program; centralize all 504 loan liquidations in two centers to standardize processing and increase efficiency; centralize disaster loan liquidations in one center to standardize processing and increase efficiency; and transform the regional and district offices by standardizing their size and function. In October 2003, when we reported on SBA’s transformation, SBA was near completion of the first phase of its transformation process. This initial phase aimed to transform the role of the district office to focus on outreach to small businesses about SBA’s products and services, and link these businesses to the appropriate resources, including lenders; and centralize some of its loan functions to improve efficiency and the consistency of its loan approval and liquidation processes. We found that the agency had applied some key practices important to successful organizational change, but had overlooked aspects that emphasize transparency and communication. For example, SBA had top leadership support and a designated transformation-implementation team, but the makeup of the team was not communicated to employees and stakeholders, and the team’s leadership was not always consistent. Also, SBA had developed a transformation plan that contained goals, anticipated results, and an implementation strategy--but the plan was not made public, and employees and stakeholders were not apprised of the details of the plan. Also, certain aspects of the plan were revised, causing further confusion among non-management employees.
Further, SBA had developed strategic goals to guide its transformation, but these goals were not linked with measurable performance goals that would demonstrate the success of the agency’s plan to expand the focus of the district offices on marketing and outreach. Based on our findings and the possibility that further progress could be impeded by budget and staff realignment challenges, we recommended that SBA: ensure that implementation leadership is clearly identified to employees; finalize its transformation plan and share it with employees and stakeholders; develop performance goals that reflect the strategic goals for transformation, and budget requests that clearly link resource needs to achieving strategic goals; use the new performance management system to define responsibilities; develop a communication strategy that promotes two-way communication; and solicit ideas and feedback from employees and the union, and ensure that their concerns are considered. SBA officials have told us of the Administrator’s increased efforts to communicate with staff by holding agencywide meetings with employees, for example. In addition, the agency plans to finalize a transformation plan and share it with employees in June. These actions could address some of the recommendations we made to SBA, but we have not documented or evaluated the efforts. SBA has made good progress towards addressing financial management issues that for several years prevented it from obtaining an unqualified audit opinion on its financial statements. We reported on some of these issues in our January 2003 report on SBA’s loan sales. Specifically, we found that SBA lacked reliable data to determine the overall financial results of its loan sales. Further, because SBA did not analyze the effect of loan sales on its remaining portfolio, we reported that its credit program cost estimates for the budget and financial statements may have contained significant errors.
In addition, SBA could not explain unusual account balances related to the disaster loan program, which indicated that the subsidized program was expected to generate a profit. These issues raised concerns about SBA’s ability to properly account for loan sales and to make reasonable estimates of program costs. In response to our findings and several recommendations, SBA conducted an extensive analysis to resolve the issues we identified and implemented a number of corrective actions. For example, SBA developed a new cash-flow model to estimate the costs of its disaster loan program, and implemented standard operating procedures for annually revising the cost estimates for its credit programs. SBA also revised its approach to determine the results of loan sales and found that loans were sold at losses, which was contrary to the original determination that the sales generated gains. These findings prompted SBA to eventually discontinue its loan sales program. We reviewed the improvements made by SBA and reported in April 2005 that the loan accounting issues we previously identified were resolved, and that the new cash-flow model improved its ability to prepare more reliable cost estimates and to determine the results of prior loan sales. However, we recommended additional steps that would improve the long-term reliability of the cost estimates, such as routine testing of the model. According to SBA officials, steps have been taken to address each of our recommendations, including the development of policies and procedures on how to operate and test the model. These improvements helped SBA achieve an unqualified audit opinion on its fiscal year 2005 financial statements, which represents significant progress from prior years. However, for fiscal year 2005 SBA’s auditor continued to note weaknesses in SBA’s overall internal controls. The auditor noted three areas involving internal controls that are considered to be weaknesses.
The first area, which the auditor considered to be a significant weakness, related to financial management and reporting controls. Specifically, the auditor found that SBA needed to improve its funds management (i.e., canceling loan amounts not disbursed and closing out grants), its review process for accounting transactions, and its financial statement preparation process. The other two less significant control weaknesses related to SBA’s ODA administrative expenditure controls and agencywide information system controls. While these internal control weaknesses were not severe enough to impact SBA’s audit opinion for fiscal year 2005, it is important for SBA to address them to help ensure that SBA continues to be able to generate reliable financial data. Disaster assistance has been part of SBA since its inception, and SBA’s physical disaster loan program is the only form of assistance not limited to small businesses. Through the ODA, SBA provides low-interest, long-term loans to individuals and businesses to assist them with disaster recovery. Unlike the 7(a) program, the disaster loan program provides loans directly to disaster victims. Businesses can apply for “physical loans” to repair or replace business property to pre-disaster conditions, as well as economic injury disaster loans (EIDLs) to obtain working capital funds to meet their normal operating expenses. The maximum loan amount for both physical business loans and EIDLs is $1.5 million, but SBA was given federal authority and supplemental appropriations to increase the amount for 9/11 disaster loans. Homeowners and renters can also apply for loans to cover their uninsured losses. The maximum amount available for home loans is $200,000, and personal property loans to replace items such as automobiles, clothing, and furniture are available up to $40,000. SBA offers terms of up to 30 years for repayment. 
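The loan maximums just described can be summarized in a short sketch. This is purely illustrative: the dollar constants are the limits quoted above, but the dictionary and function are invented for this example and are not part of any SBA system.

```python
# Hypothetical illustration of the disaster loan maximums described in
# this statement; not SBA's actual eligibility logic.

# Maximum loan amounts by program, in dollars (limits quoted above).
LOAN_CAPS = {
    "physical_business": 1_500_000,   # repair/replace business property
    "economic_injury": 1_500_000,     # EIDL working-capital loans
    "home": 200_000,                  # homeowners' uninsured losses
    "personal_property": 40_000,      # automobiles, clothing, furniture
}

def eligible_amount(loan_type: str, verified_loss: int) -> int:
    """Cap a verified, uninsured loss at the program maximum."""
    return min(verified_loss, LOAN_CAPS[loan_type])

# A homeowner with $250,000 in verified uninsured losses could borrow
# at most the $200,000 home loan maximum; a $15,000 personal property
# loss falls under its $40,000 cap and is not reduced.
print(eligible_amount("home", 250_000))              # 200000
print(eligible_amount("personal_property", 15_000))  # 15000
```

Note that, as the statement describes for 9/11, Congress can authorize SBA to exceed these caps for particular disasters, so the constants should be read as the default limits of the period, not fixed bounds.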
According to SBA, although ODA aims to provide loan funds to disaster victims as quickly as possible, its focus is on long-term recovery, and not on emergency relief. Since SBA provides low-interest loans, the agency is required to determine whether each applicant is able to obtain financial assistance at reasonable rates and terms from non-government sources prior to assigning an interest rate. A higher rate applies for physical loan applicants if they are determined to have other credit available, and economic injury loan applicants are ineligible if they have other credit available. Physical business loans--where the applicant has credit available from other sources--are also subject to a maximum 3-year term for repayment. SBA also has standard procedures and requirements for disaster loans, including verification of losses claimed, verification of repayment ability, and collateral to secure economic injury loans over $5,000 or home loans or physical disaster business loans over $10,000. SBA verifies losses for physical loans and also deducts certain forms of compensation, including insurance recoveries, from the eligible loan amount. The Federal Emergency Management Agency (FEMA) is the coordinating agency for presidential disaster declarations, and most disaster victims register with FEMA initially before receiving a referral to SBA. SBA can review FEMA’s information to determine if an applicant has already received federal assistance or insurance proceeds to avoid duplication of benefits. If insurance reimbursement is undetermined at the time of application, SBA can approve a loan for the total replacement cost, but any insurance proceeds must be assigned to SBA to reduce the loan balance. In considering any loan, SBA must have reasonable assurance that the loan can be repaid.
To make this determination, SBA examines federal tax returns and income information and reviews credit reports to verify the manner in which an applicant's obligations, including federal debts, have been met. One of the reasons that SBA may decline a loan application is an unsatisfactory history on a federal obligation. The law does not require collateral for disaster loans, but SBA policy establishes collateral requirements in order to balance the agency's disaster recovery mission with its responsibility as a lender of federal tax dollars. For example, for physical disaster loans over $10,000, applicants are required to provide collateral that will best secure the loan, and multiple loans totaling over $10,000 also require collateral to secure each loan. Real estate is the preferred form of collateral, but SBA will not automatically decline an application if the best available collateral is insufficient in value to secure the loan. Following the terrorist attacks of September 11, 2001, SBA provided approximately $1 billion in loans to businesses and individuals in the federally declared disaster areas and to businesses nationwide that suffered related economic injury. Home and business owners in the federally declared disaster areas received just under half of the disbursed loans; the remainder went to eligible businesses around the country. Congress and SBA made several modifications to the programs in response to complaints from small businesses. For example, the EIDL program was expanded to the entire country and to industries that had not previously been covered, size standards for some eligible businesses were changed, and loan approval and disbursement were expedited. In 2004, in response to concerns that about half of the loan applications submitted by small businesses were declined or withdrawn, we reviewed a representative sample of these applications and found that SBA had followed the appropriate policies and procedures in making loan decisions.
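The loan-sizing and collateral rules summarized above can be expressed as a small decision routine. The Python sketch below is purely illustrative: the function names, loan-type labels, and structure are hypothetical, and the dollar figures are the pre-9/11 caps cited in this statement, not SBA's actual underwriting system.

```python
# Hypothetical sketch of the disaster-loan sizing and collateral rules
# described in this statement; illustrative only, not SBA's system.

LOAN_CAPS = {
    "physical_business": 1_500_000,  # physical business loans
    "eidl": 1_500_000,               # economic injury disaster loans
    "home": 200_000,                 # home loans
    "personal_property": 40_000,     # personal property loans
}

def eligible_amount(loan_type, verified_loss, insurance_recovery=0):
    """Deduct insurance recoveries from the verified loss, then apply
    the statutory cap for the loan type."""
    return min(max(verified_loss - insurance_recovery, 0),
               LOAN_CAPS[loan_type])

def collateral_required(loan_type, amount):
    """EIDLs over $5,000, and home or physical business loans over
    $10,000, must be secured by collateral."""
    threshold = 5_000 if loan_type == "eidl" else 10_000
    return amount > threshold

# A business with $2 million in verified losses and a $600,000
# insurance recovery would be eligible for up to $1.4 million,
# secured by collateral.
amount = eligible_amount("physical_business", 2_000_000, 600_000)
print(amount, collateral_required("physical_business", amount))
```

Note that the real process also weighs repayment ability, credit history, and the availability of credit elsewhere, none of which this simplification models.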
We compared SBA's loan requirements to those of selected nonprofit agencies in the New York area that provided financial assistance to local small businesses following the disaster. Generally, we found that SBA's loan requirements were similar to those of the nonprofits, but the nonprofits' programs allowed some additional flexibility to address the particular needs of their small business constituents. We also currently have work under way to identify and assess the factors that have affected SBA's ability to respond to victims of Hurricane Katrina and the other 2005 Gulf Coast hurricanes in a timely manner. As part of our work, we are evaluating how SBA's new Disaster Credit Management System, which has been in use since January 2005, affected SBA's response. As the primary federal lender to disaster victims, including individual homeowners, renters, and businesses, SBA must be able to process and disburse loans in a timely manner, which is critical to the recovery of the Gulf Coast region. As of February 25, 2006, SBA faced a backlog of about 103,300 loan applications pending a final decision, and the average time these applications had been in process was about 94 days. During the month of March, SBA continued to process applications. By March 25, 2006, SBA had mailed out more than 1.6 million loan applications, received over 350,000 completed applications, processed more than 290,000 applications, and disbursed about $600 million in disaster loan funds. Although SBA's current goal is to process loan applications within 7 to 21 days, as of March 25, 2006, SBA faced a backlog of about 55,000 applications pending a final decision, and the average age of these applications was about 88 days. SBA also has more than 43,000 loan applications that have been approved but have not been closed or fully disbursed.
As a result, disaster victims in the Gulf region have not received timely assistance in recovering from this disaster and rebuilding their lives. Based on our preliminary analysis of SBA's disaster loan origination process, we have identified several factors that have affected SBA's ability to provide a timely response to Gulf Coast disaster victims. First, the volume of loan applications SBA mailed out and received has far exceeded that of any previous disaster. Compared with the Florida hurricanes of 2004 or the 1994 Northridge earthquake, the hurricanes that hit the Gulf Coast in 2005 resulted in the issuance of roughly two to three times as many loan applications. Second, although SBA's new disaster loan processing system provides opportunities to streamline the loan origination process, it initially experienced numerous outages and slow response times in accessing information. However, we have not yet determined the duration and impact of these outages on processing. SBA officials have attributed many of these problems to a combination of hardware- and telecommunications-capacity limitations as well as the level of service SBA has received from its contractors. Third, SBA's planning efforts to address a disaster of this magnitude appear to have been inadequate. Although SBA's disaster planning efforts focused primarily on responding to a disaster the size of the Northridge earthquake, SBA officials said that the agency initially lacked critical resources, such as office space, staff, phones, and computers, to process loans for this disaster. SBA has participated in disaster simulations on a limited basis only, and it is unclear whether previous disaster simulations of category 4 hurricanes hitting the New Orleans area were considered.
We are also assessing other factors that have affected SBA's ability to provide timely loans to disaster victims in the Gulf region, including: workforce transformation, the exercise of its regulatory authority to streamline program requirements and delivery to meet the needs of disaster victims, coordination with state and local government agencies, SBA's efforts to publicize the benefits offered by the disaster loan program, and the limits that exist on the use of disaster loan funds.

Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time.

For further information on this testimony, please contact William B. Shear at (202) 512-8678. Individuals making key contributions to this testimony included Katie Harris, Assistant Director, and Bernice Benta.

Related GAO Products

SBA Disaster Loan Program: Accounting Anomalies Resolved but Additional Steps Would Improve Long-Term Reliability of Cost Estimates. GAO-05-409. Washington, D.C.: April 14, 2005.

Small Business Administration: SBA Followed Appropriate Policies and Procedures for September 11 Disaster Loan Applications. GAO-04-885. Washington, D.C.: August 31, 2004.

Small Business Administration: New Service for Lender Oversight Reflects Some Best Practices, but Strategy for Use Lags Behind. GAO-04-610. Washington, D.C.: June 8, 2004.

Small Business Administration: Model for 7(a) Program Subsidy Had Reasonable Equations, but Inadequate Documentation Hampered External Reviews. GAO-04-9. Washington, D.C.: March 31, 2004.

Small and Disadvantaged Businesses: Most Agency Advocates View Their Roles Similarly. GAO-04-451. Washington, D.C.: March 22, 2004.

Small Business Administration: Progress Made, but Transformation Could Benefit from Practices Emphasizing Transparency and Communication. GAO-04-76. Washington, D.C.: October 31, 2003.

Small and Disadvantaged Businesses: Some Agencies' Advocates Do Not Report to the Required Management Level. GAO-03-863. Washington, D.C.: September 4, 2003.

Small Business Administration: Observations on the Disaster Loan Program. GAO-03-721T. Washington, D.C.: May 1, 2003.

Small Business Administration: Progress Made but Improvements Needed in Lender Oversight. GAO-03-720T. Washington, D.C.: April 30, 2003.

Small Business Administration: Response to September 11 Victims and Performance Measures for Disaster Lending. GAO-03-385. Washington, D.C.: January 29, 2003.

Small Business Administration: Accounting Anomalies and Limited Operational Data Make Results of Loan Sales Uncertain. GAO-03-87. Washington, D.C.: January 3, 2003.

Major Management Challenges and Program Risks: Small Business Administration. GAO-03-116. Washington, D.C.: January 1, 2003.

Small Business Administration: Progress Made but Improvements Needed in Lender Oversight. GAO-03-90. Washington, D.C.: December 9, 2002.

September 11: Small Business Assistance Provided in Lower Manhattan in Response to the Terrorist Attacks. GAO-03-88. Washington, D.C.: November 1, 2002.

Small Business Administration: Workforce Transformation Plan Is Evolving. GAO-02-931T. Washington, D.C.: July 16, 2002.

Loan Monitoring System: SBA Needs to Evaluate the Use of Software. GAO-02-188. Washington, D.C.: November 30, 2001.

Small Business Administration: Current Structure Presents Challenges for Service Delivery. GAO-02-17. Washington, D.C.: October 26, 2001.

Small Business Administration: Actions Needed to Strengthen Small Business Lending Company Oversight. GAO-01-192. Washington, D.C.: November 17, 2000.

SBA Loan Monitoring System: Substantial Progress Yet Key Risks and Challenges Remain. GAO/AIMD-00-124. Washington, D.C.: April 25, 2000.

Small Business Administration: Planning for Loan Monitoring System Has Many Positive Features but Still Carries Implementation Challenges. GAO/T-AIMD-98-233. Washington, D.C.: July 16, 1998.

Small Business Administration: Mandated Planning for Loan Monitoring System Is Not Complete. GAO/AIMD-98-214R. Washington, D.C.: June 30, 1998.

Small Business Administration: Few Reviews of Guaranteed Lenders Have Been Conducted. GAO/GGD-98-85. Washington, D.C.: June 11, 1998.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration's (SBA) purpose is to promote small business development and entrepreneurship through business financing, government contracting, and technical assistance programs. SBA's largest business financing program is its 7(a) program, which provides guarantees on loans made by private-sector lenders to small businesses that cannot obtain financing under reasonable terms and conditions from the private sector. In addition, SBA's Office of Disaster Assistance makes direct loans to households to repair or replace damaged homes and personal property and to businesses to help with physical damage and economic losses. This testimony, which is based on a number of reports that GAO has issued since 1998, discusses (1) changes in SBA's oversight of the 7(a) business loan program; (2) steps SBA has taken to improve its management of information technology, human capital, and financial reporting for business loans; and (3) SBA's administration of its disaster loan program. Since the mid-1990s, when GAO found that SBA had virtually no oversight program for its 7(a) guaranteed loan program, SBA has, in response to GAO recommendations, established a program and developed some enhanced monitoring tools. The oversight program is led by its Office of Lender Oversight, which was established in 1999. Strong oversight of SBA's lending partners is needed to protect SBA from financial risk and to ensure that qualified borrowers get 7(a) loans. In addition to SBA's bank lending partners, 7(a) loans are made by Small Business Lending Companies (SBLCs)--privately owned and managed, non-depository lending institutions that are licensed and regulated by SBA. Since SBLCs are not subject to safety and soundness oversight by depository institution regulators, SBA has developed such a program under a contract with the Farm Credit Administration.
Over the years, SBA has implemented many GAO recommendations for lender oversight and continues to make improvements toward addressing others. Since the late 1990s, SBA has experienced mixed success in addressing other management challenges that affect its ability to manage the 7(a) loan program. With respect to using information technology to monitor loans made by 7(a) lenders, between 1997 and 2002, SBA was unsuccessful in developing its own system to establish a risk management database as required by law. However, SBA awarded a contract in April 2003 to obtain loan monitoring services. Regarding SBA's most recent workforce transformation efforts begun in 2002, GAO found that SBA applied some key practices important to successful organizational change but overlooked aspects that emphasize transparency and communication. SBA has implemented some related GAO recommendations for improvements in those areas. SBA has also made good progress in response to GAO recommendations addressing financial management issues. With respect to SBA's administration of its disaster loan program after the September 11, 2001, terrorist attacks, GAO found that SBA followed appropriate policies and procedures for disaster loan applications in providing approximately $1 billion in loans to businesses and individuals in the disaster areas, and to businesses nationwide that suffered economic injury. GAO's preliminary findings from ongoing evaluations of SBA's response to the 2005 Gulf Coast hurricanes indicate that SBA's workforce and new loan processing system have been overwhelmed by the volume of loan applications. 
GAO identified three factors that have affected SBA's ability to provide a timely response to the Gulf Coast disaster victims: (1) the volume of loan applications far exceeded any previous disaster; (2) although SBA's new disaster loan processing system provides opportunities to streamline the loan origination process, it initially experienced numerous outages and slow response times in accessing information; and (3) SBA's planning efforts to address a disaster of this magnitude appear to have been inadequate.
The size and cost of federal vehicle fleets have been subjects of concern for many years. In 2002, OMB sent a memorandum to the heads of executive branch agencies directing them to examine the size of their vehicle fleets and report the size, composition, and cost of their fleets as part of their budget submission process. In 2004, we reported that because of a lack of attention to key vehicle fleet management practices, the agencies we reviewed could not ensure their fleets were the right size or composition to meet their missions. Most recently, in May 2011, the President directed each federal agency to determine its optimal fleet inventory—including the number and types of vehicles needed—and to set targets for achieving this inventory by December 31, 2015. Key goals of this process are to eliminate unnecessary vehicles, ensure the cost-effectiveness of maintaining vehicle inventories, and meet alternative fuel vehicle goals. From fiscal years 2002 through 2012, the number of federal civilian and non-tactical military vehicles (excluding postal vehicles) increased 19 percent, from about 364,000 to 450,000 vehicles. Federal agencies use vehicles—specifically non-tactical vehicles such as passenger cars and trucks, and special purpose vehicles (e.g., ambulances and buses)—to carry out their missions. See table 1 for a breakdown of the fleets at the agencies we selected to study. Reported total costs associated with these agencies' fleets in fiscal year 2012 ranged from $48 million for the Army Corps' fleet of 8,041 vehicles to $523 million for DHS' fleet of 50,170 vehicles. Federal agencies are responsible for acquiring, maintaining, and managing their vehicle fleets. They are responsible for deciding the number and type of vehicles they need and how to acquire them, including whether to own or lease them and when to replace them. Four of our selected agencies—USDA, Air Force, DHS, and Interior—own most of the vehicles in their fleets.
The other two, VA and the Army Corps, lease most of the vehicles in their fleets. Agencies must develop a maintenance program for their owned and commercially leased vehicles. Further, agencies must operate their fleets in a manner that enables them to fulfill their mission and meet various federal requirements and directives that affect their fleets and fleet management. These include various statutes, executive orders, and policy initiatives that direct federal agencies to, among other things, collect and analyze data on the costs of operating their fleets, reduce petroleum consumption, acquire alternative fuel vehicles, and eliminate non-essential vehicles. (See table 2.) In addition, agencies must follow federal vehicle management regulations. GSA plays a key role in helping agencies manage their fleets. GSA's Office of Government-wide Policy (OGP) promulgates federal vehicle management regulations, issues guidance on federal fleet operations, and provides reports on the federal fleet. Federal regulations on fleet management include requirements regarding agencies' fleet management information systems, vehicle replacement policy, and vehicle fuel efficiency, among other things. OGP also establishes policies and issues guidance to help agencies manage their fleets effectively and meet federal requirements. Guidance includes bulletins on various aspects of fleet management, including fleet management information systems and methodologies for determining the optimal fleet size for agency fleets. OGP also promotes interagency collaboration through various committees and councils, including the Federal Fleet Policy Council, and has sponsored an annual conference on fleet management. GSA's OGP is also responsible for reviewing annually the Federal Automotive Statistical Tool (FAST) submissions from agencies.
FAST is a web-based reporting tool, jointly sponsored by GSA and the Department of Energy (DOE), for agencies to report data on their fleets, such as number of vehicles, costs, and miles driven. FAST is used to satisfy statutory and regulatory reporting requirements, and GSA uses it to produce an annual Federal Fleet Report. GSA's Fleet and Automotive organization manages vehicle-purchasing and vehicle-leasing programs that offer federal agencies an array of automotive products, including alternative fuel vehicles, sedans, light trucks, buses, and heavy trucks. GSA purchases over 50,000 vehicles annually for federal agencies at prices that, according to GSA, are an average of 17 percent below the manufacturer's invoice price. Supported by a network of regional Fleet Management Centers, GSA also leases more than 200,000 vehicles to over 75 federal agencies. Federal agencies may also lease from commercial vendors in certain instances. We identified three leading practices for fleet management: (1) maintaining a well-designed fleet management information system (FMIS), (2) analyzing life-cycle costs to inform investment decisions, and (3) optimizing fleet size and composition. We found that the selected agencies in our review follow these practices to varying degrees. Most of the selected agencies lack the data needed to support sound fleet decision making and oversight, and some of their fleet data systems are not integrated with other key agency systems. None of these agencies are fully analyzing life-cycle costs to make vehicle investment decisions. All of the agencies we examined have carried out an internal process for determining their optimal fleet size and composition and have set targets for achieving these optimal inventories, but most have not provided GSA, which reviews these targets, with clear information on the methods they used for producing them. We identified three leading practices for fleet management.
We identified these leading practices based on views provided by fleet management experts in the private sector, local government, and fleet management associations. We also compared these practices with legal requirements and GSA and OMB guidance related to federal fleet management. We found that these leading practices generally align with federal fleet management legal requirements and GSA and OMB recommendations, and, as discussed in the following sections, these recommendations identify specific actions that agencies should complete to adhere to these types of leading practices. In particular, GSA has issued guidance containing recommendations for following all of these leading practices. Finally, we obtained the views of GSA officials responsible for fleet management on these leading practices. Overall, according to the experts and GSA officials we interviewed, these practices provide a foundation for agencies to manage fleet costs while meeting their missions. They emphasized, in particular, that sound data systems provide the basis for the various types of analyses that are needed to make cost-effective investment decisions, such as decisions about whether to own or lease vehicles, and determine appropriate fleet size and composition. See table 3 for a fuller description of these leading practices. All of the experts we interviewed noted the importance of maintaining an FMIS that tracks key data needed to manage the fleet. Additionally, GSA guidance states that a sound FMIS is needed for monitoring and analyzing fleet performance and meeting internal and external reporting requirements. The guidance recommends that agencies’ FMISs capture a range of information and integrate with financial and property management systems to facilitate fleet analyses and reporting. Each of the selected agencies we studied had established an FMIS except for USDA’s Natural Resources Conservation Service. 
Based on information provided by the selected agencies, most of their FMISs capture the majority of the types of fleet data recommended by GSA, but none include all of these types of data. (See table 4.) GSA recommends that agencies' FMISs include data on fleet costs, vehicle acquisition, utilization, repair and servicing history, accidents, and disposal, among other things. Most of the selected agencies collect this recommended data but store some of it outside of their FMISs. Some of the data stored outside of the FMIS are kept in electronic systems, and in other cases, they are stored in paper file folders. GSA has reported that depending on their missions and structures, agencies may not need all of the recommended types of data in their FMISs, or may benefit from other data that are not explicitly recommended; however, agencies should have the data necessary to support relevant and comprehensive analyses. GAO asked agencies to identify a single system as their FMIS and to report on what data are stored in that system. Fleet data are sometimes collected and stored in systems other than the FMIS, such as financial and property management systems. For information about whether an agency's FMIS is integrated with its financial and property management systems, see table 5. Interior uses a single, centralized property management system as the department-wide FMIS. The types of data missing most frequently from selected agencies' FMISs are data on costs associated with their fleets, especially indirect costs. According to GSA's recommendations, cost data should include direct expenses, such as fuel, repair, and vehicle depreciation, as well as indirect costs attributable to the fleet, such as expenditures associated with personnel. Six of the nine agencies with FMISs that we reviewed reported to us that their FMISs do not capture all direct fleet costs. Most selected agencies keep at least some direct cost data in locations other than an FMIS, such as in a financial management information system.
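As a rough illustration of the kind of coverage check implied by GSA's recommendations, the short Python sketch below compares a hypothetical FMIS's captured data categories against the recommended set. The category names and the example inventory are illustrative and are not drawn from any agency's actual system.

```python
# Hypothetical example: checking an FMIS's data coverage against the
# categories GSA recommends (costs, acquisition, utilization, repair
# and servicing history, accidents, disposal). Names are illustrative.

RECOMMENDED = {"costs", "acquisition", "utilization",
               "repairs_and_servicing", "accidents", "disposal"}

def fmis_gaps(captured):
    """Return the recommended data categories missing from an FMIS."""
    return RECOMMENDED - set(captured)

# An example agency that keeps cost and accident data outside its FMIS:
example_fmis = {"acquisition", "utilization",
                "repairs_and_servicing", "disposal"}
print(sorted(fmis_gaps(example_fmis)))
```

In practice, as the report notes, "missing" categories are often tracked somewhere (in financial systems or paper files), so a check like this identifies where data must be reconciled from other sources rather than data that does not exist at all.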
For example, of the nine agencies with FMISs: five reported that they collect data on vehicle modification costs and accessory equipment, but some of the data are not stored in the FMIS; and three reported that they track some or all of their fuel costs outside of the FMIS. In addition, three officials noted that some of their direct cost data lacked the detail needed to help them track life-cycle costs and make decisions such as when to replace vehicles. For example, officials from DHS's Customs and Border Protection told us that certain details on maintenance and repair costs may not be reported due to the limitations of their current fleet payment card, making life-cycle cost analysis difficult to conduct. With regard to indirect costs, eight of the nine agencies with FMISs that we reviewed reported to us that their FMISs do not capture all indirect fleet costs, or that the indirect costs cannot be readily discerned from other non-fleet costs. GSA defines indirect costs as any cost that cannot be ascribed to a particular vehicle or class of vehicles. Examples of indirect costs include most personnel costs, office supplies, building rental, and utility costs. While GSA identifies in its guidance the types of indirect costs that agencies should capture in their FMISs, it has not provided agencies with guidance on how to estimate those costs. Indirect costs can be challenging to estimate because they may reflect the salaries of employees who only work on fleet management part-time or buildings that are only used partially for fleet-related purposes. Some agencies have not yet developed an approach for estimating these costs. For example, officials of Interior's Bureau of Land Management and VA's Veterans Health Administration told us that they lack a method to attribute a certain percentage of indirect costs, such as those of facilities and equipment, to fleet management.
In some cases, total personnel and facility costs are stored in the agencies' FMISs but the costs specifically associated with the agency's fleet are not readily distinguishable. For example, DHS's Customs and Border Protection's FMIS records data on facility costs, but it does not have the capability to separate facility costs associated with fleet management from total facility costs. Of the nine agencies with FMISs that we reviewed: eight reported that their FMIS does not include data on facility costs, or that facility costs in their FMIS are not readily attributable to fleet management; seven reported that their FMIS does not include data on staffing costs, or that staffing costs in their FMIS are not readily attributable to fleet management; and three reported that their FMIS does not include data on fleet-related equipment costs, including office and shop equipment, and tools. Only one agency, USDA's Forest Service, reported that it captures all of the indirect fleet-related cost data recommended by GSA and can readily distinguish fleet costs from other indirect costs. For example, Forest Service officials track all fleet-related program management costs—such as personnel, facility, travel, and supply costs—and include them in the Forest Service's fleet costs. Forest Service officials explained that it is critical that the agency capture all fleet-related costs, because it charges the programs and functions that use the vehicles much as GSA charges lessees. In addition, of the nine agencies with FMISs we reviewed, one reported that it does not keep all of its data on vehicle utilization in its FMIS and three reported that they do not keep all their data on repairs and servicing in their FMIS.
Officials from DHS's Immigration and Customs Enforcement told us that fleet officials in headquarters have access to limited data on vehicle utilization because some utilization data are gathered by the mission groups that use the vehicles and are not shared with headquarters. A few agencies also reported that limitations of the fleet payment cards they use to record transactions have impaired their ability to gather detailed data on servicing and repairs. Some cards record the cost of maintenance, but do not collect information on the type of maintenance performed. The lack of an FMIS with comprehensive data on fleet-related costs can make the monitoring and analysis needed for fleet management challenging. For example, several agency officials said that gathering data on the type of maintenance performed on vehicles through methods such as reconciling receipts or using paper logs can make it difficult for fleet managers to perform timely analyses and guide fleet decisions. As explained in the sections that follow, key analyses of agencies' fleets that are essential for sound investment decisions and management of fleet size and composition depend on complete and accurate data, particularly data on costs and utilization. Furthermore, the lack of complete data in agencies' FMISs can impair the validity of reporting on federal fleets, and could therefore impede the ability of GSA, OMB, and Congress to oversee the performance of these fleets. According to GSA officials, data gaps compromise the reporting of accurate agency fleet costs in the FAST system. In particular, because some agencies do not track all costs associated with their owned or leased vehicles, the expenditures associated with all vehicles may appear lower than they actually are. GSA uses the data that agencies report to produce its annual Federal Fleet Report, which is used to report statistics such as costs per mile for owned and leased vehicles.
However, because these reports may not fully reflect the costs of all vehicles, those statistics may be misleading, limiting the usefulness of this report for oversight of federal fleet costs. We have previously reported that integrated systems can promote efficiency (see Information Technology: FDA Needs to Fully Implement Key Management Practices to Lessen Modernization Risks, GAO-12-346, Washington, D.C.: March 2012, and Organizational Transformation: A Framework for Assessing and Improving Enterprise Architecture Management, GAO-10-846G, Washington, D.C.: August 2010); without such integration, personnel may have to repeat the data entry process for the FMIS, and a lack of data greatly reduces the usefulness of FMISs for conducting fleet analysis. Some agencies we reviewed are making efforts to upgrade and automate their data collection, which could provide them with additional data recommended by GSA as well as additional detailed information to improve analysis and reporting. Most of the experts we interviewed noted the usefulness of automated data collection to provide timely and accurate information to guide fleet decisions. In addition, if data are entered automatically, fewer personnel hours are needed to collect, reconcile, and enter data. Several agencies are seeking to adopt fleet payment cards that will provide them with additional data on certain types of financial transactions, which would increase their data on direct fleet costs. For example, officials from USDA's Natural Resources Conservation Service told us that, in addition to implementing an FMIS, it will obtain vehicle and repair cost data from the new USDA fleet card program by the end of 2013. Officials anticipate that the new fleet card will help them collect cost data that their previous card did not collect accurately or completely. Similarly, DHS's Immigration and Customs Enforcement is pursuing the capability of importing fuel and maintenance cost data from fleet payment cards.
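The way incomplete cost data can understate per-mile statistics, as noted above for the Federal Fleet Report, can be illustrated with a short sketch. The dollar and mileage figures below are made up for illustration and do not reflect any agency's actual costs.

```python
# Illustrative only: hypothetical figures showing how omitting
# indirect costs biases a fleet's reported cost per mile downward.

def cost_per_mile(direct_costs, indirect_costs, miles):
    """Total fleet cost per mile driven."""
    return (direct_costs + indirect_costs) / miles

# Suppose an agency reports $4 million in direct costs but tracks none
# of its $1 million in indirect (personnel, facility) costs in its FMIS.
reported = cost_per_mile(4_000_000, 0, 10_000_000)
actual = cost_per_mile(4_000_000, 1_000_000, 10_000_000)
print(f"reported: ${reported:.2f}/mile, actual: ${actual:.2f}/mile")
```

Aggregated across agencies, gaps like this would make owned or leased vehicles look cheaper to operate than they are, which is why complete cost capture matters for oversight.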
Four agencies are also exploring GPS systems that are capable of collecting data on utilization, greenhouse gas emissions, and direct costs such as scheduled maintenance and fuel consumption. For example, VA's Veterans Health Administration is pursuing efforts to use GPS-based devices to upload data, including utilization and direct cost data, directly into its FMIS. In 2012, GSA recommended that DHS, USDA, Interior, and VA obtain centralized, department-wide FMISs. Various efforts are under way at these departments to address these recommendations, as well as efforts to integrate systems with property and financial management systems. (See table 6.) Interior currently possesses a centralized, department-wide FMIS, which its agencies are either using or plan to use soon. DHS, USDA, and VA do not currently possess such an FMIS, but are exploring new ways to store and share fleet data. For example, USDA is adopting FedFMS, an FMIS developed by GSA for federal agencies, for its agencies, with the exception of the Forest Service; another of these departments plans to adopt a centralized FMIS and is examining various options for doing so. The Air Force and Army Corps already possess centralized FMISs and are pursuing additional interfaces between systems that will provide additional cost information. As previously discussed, data availability and integration of data systems are key challenges that affect many aspects of fleet management; however, agency officials also identified three additional, broad challenges: multiple and competing energy requirements, the allocation of funding to fleet management activities, and ensuring that fleet managers have adequate expertise in a decentralized environment. The extent to which each agency faces these challenges varies. Nevertheless, these were the most common challenges cited across the agencies we reviewed.
Agencies have pursued or are pursuing a variety of strategies to address these challenges, which include the fleet optimization process, leveraging Department of Energy (DOE) tools, using a working capital fund, and providing online training, among other things.

Seven agencies we reviewed identified multiple and sometimes competing energy requirements as a challenge to effective fleet management. As described earlier, a defined set of energy requirements and goals governs the federal fleet through statutes, regulations, and executive orders. However, we have previously reported that these statutes and orders were enacted and issued in a piecemeal fashion and represent a fragmented rather than integrated approach to meeting key national goals. We have also noted that, because of these numerous and sometimes conflicting requirements and directives, fleet managers often lack the flexibility and tools to meet various energy goals, such as reducing petroleum consumption, energy consumption, and greenhouse gas emissions. For example, agencies may not acquire light-duty or medium-duty motor vehicles that are not low-greenhouse gas emitting vehicles, and the May 2011 Presidential memorandum directed that by December 2015, all new light-duty vehicles purchased or leased by agencies must be alternative fuel vehicles. However, VA has reported that most alternative fuel vehicles that meet their mission needs do not qualify as low-greenhouse gas vehicles, making it difficult to meet both mandates. In addition, alternative fuel availability is limited in many areas. According to May 2013 data from the Department of Energy, more than 36 percent of publicly available compressed natural gas stations are found in just two states—California and Oklahoma. Also, E85 stations are relatively uncommon. We have recently reported that E85 suppliers are concentrated in a few regions in the country. See United States Postal Service: Strategy Needed to Address Aging Delivery Fleet, GAO-11-386 (Washington, D.C.: May 2011).
In addition, accessing non-commercial fueling sites can pose unique, though not insurmountable, challenges. For example, according to a Forest Service official, accessing alternative fuel located on military installations can be hindered by security concerns and differing payment systems. Negotiations to resolve these concerns can require investments of personnel time and effort, which represent additional costs.

The fleet optimization process required by the May 2011 Presidential memorandum, financial assistance from GSA, and tools provided to agencies by DOE can help agencies balance competing requirements and determine the best approach for meeting these requirements while minimizing cost. A key goal of the fleet optimization process is to determine what fleet size and composition would best meet the agency's mission while also adhering to requirements for alternative fuel and fuel-efficient vehicles. To assist agencies with the costs associated with meeting energy requirements, GSA recently announced an initiative that would assist agencies in paying for the increased cost of hybrid vehicles. If agencies choose to consolidate their agency-owned vehicles into the GSA Fleet inventory, GSA will fund the total incremental cost to replace eligible vehicles with new, leased hybrid sedans. DOE also offers a variety of tools to help agencies. For example, the alternative fuel locator on DOE's website helps agencies determine what kinds of alternative fuels are available in a given area and allows fleet managers to place alternative fuel vehicles in appropriate locations. In addition, the Army Corps and DHS have used a DOE tool to project which alternative fuel vehicles would be appropriate replacements for some of their current inventories. Some DOE tools can encourage collaboration, which reduces the burden of meeting requirements and advances mutual goals.
For example, USDA plans to use DOE's interactive map of vehicles that were granted waivers to use non-alternative fuels to help identify partners interested in supporting commercial development of alternative fuel infrastructure.

Uncertainty regarding the allocation of funds for fleet management activities can make it difficult for fleet managers to operate their fleets cost-effectively. Several agency officials and fleet experts explained that predictable and reliable funding streams better support sound fleet management and planning. For example, several officials explained that when there are unforeseen reductions in acquisition funds that can be used to replace vehicles, fleet managers are more likely to keep vehicles that are older and therefore more prone to mechanical failure. As explained previously, several fleet management experts cautioned that keeping older vehicles can result in larger and more expensive fleets. Some noted that in such cases, more vehicles need to be available since the chance of breakdown is higher. In some cases, fleet funding is allocated only for certain activities and may not be used for options that managers consider to be more cost-effective. For example, fleet officials from VA's Veterans Health Administration and USDA's Natural Resources Conservation Service reported that funds are sometimes allocated specifically for leasing or specifically for purchasing vehicles. Officials said that the required procurement type is not always the most cost-effective, but they have no choice but to spend the money as directed. In other cases, funds allocated for a specific purpose become depleted, even though additional investment could result in overall savings.
For example, Air Force fleet officials reported that as of March 2013, there were approximately 210 underutilized vehicles, including specialized vehicles, that cannot be moved to locations where they are needed because funding to transport vehicles has been exhausted and additional funds have not yet been approved. Officials said that transporting those underutilized vehicles at a cost of approximately $2 million would help the Air Force avoid $20 million in potential acquisition costs for new vehicles.

Fleet officials cited two strategies that address challenges associated with the allocation of funds for fleet management: (1) using a working capital fund and (2) developing clear, data-based analyses of the predicted outcomes of specific funding changes. A fleet management consultant, one of the experts we interviewed, told us that his company has previously recommended working capital funds to help agencies better manage the vehicle replacement cycle. Similarly, a county-level fleet manager we interviewed reported that without its working capital fund, the county might not be able to replace vehicles at the optimal time due to budget constraints. USDA's Forest Service and Interior's Bureau of Land Management are the two agencies within the scope of this review that have a working capital fund. Officials from these agencies said that a steady stream of available capital helped them to replace vehicles on schedule and avoid a fleet that needed excessive maintenance. In addition, in 2004, GSA recommended that agencies operate their fleets using a revolving fund or similar mechanism that allows them to capture all vehicle costs and provides them with the means to replace their vehicles in a timely manner. However, an agency must have statutory authority to establish such a fund. Moreover, even among agencies that possess the legal authority to establish a working capital fund, other hurdles may exist. For example, officials from Interior's National Park Service stated that while having a working capital fund could be advantageous and Interior possesses the legal authority to establish such a fund in any of its agencies, start-up costs are a substantial barrier, as are issues related to fund administration.

The Air Force has programmed mission needs at every installation into an algorithm that allows decision-makers to see the "ripple effects" of specific funding changes. Officials stated that although funding may still be cut, it is done strategically and with the lowest overall impact on mission performance or with full knowledge of the consequences. Similarly, DHS's Customs and Border Protection has drafted a strategic plan that considers costs, benefits, and resource availability to achieve prioritized goals, and an accompanying implementation guide to measure progress. One expert we interviewed suggested that having data and analysis to demonstrate the specific outcomes of funding changes will help to ensure that decisions regarding cuts or re-allocation are made with full knowledge.

Officials of four agencies also noted that ensuring that fleet managers consistently possess adequate expertise can be a challenge. The majority of agencies we examined reported that fleet management is a part-time task for some of their managers, which can make it difficult to develop fleet expertise. Officials from these agencies said that fleet managers can be responsible for various tasks beyond their fleet duties, such as property management. Officials from three agencies explained that in rural, remote locations it is not cost-effective to pay for a full-time fleet manager since the fleet is smaller than in more metropolitan areas. They also cautioned that being part-time does not automatically indicate a lack of expertise in fleet management.
However, some officials also agreed that when an employee commits only a portion of work hours to fleet management, it can be more difficult to manage the fleet, and it can be challenging to consistently train a cadre of part-time workers located in remote areas. Officials from Interior's Fish and Wildlife Service, for example, said that the challenge of training numerous part-time employees has become particularly evident as the agency tries to implement Interior's new FMIS. Similarly, officials from USDA's Forest Service reported that it is challenging to establish and maintain expertise in an environment where responsibilities are divided among multiple employees in different locations. Moreover, two agencies reported that it is challenging to retain the expertise they already possess. For example, Air Force officials explained that there is high demand for knowledgeable fleet managers in the private sector, and the challenges associated with deployment, coupled with pay differences, can make it hard to retain skilled fleet managers. Similarly, officials of DHS's Customs and Border Protection told us that some managers have moved on to another job after they were trained.

Agencies use varying approaches to enhance the expertise of their fleet managers. Approaches differ even among agencies within the same department. For example, within DHS, Customs and Border Protection has a specific training program for fleet managers, while Immigration and Customs Enforcement provides fleet management training on an as-needed basis. Officials from each of these agencies said they believed their training policies met their specific needs.
Below are a few strategies that various agencies have pursued to address the challenge of developing consistent fleet management expertise:

Sending personnel to the annual GSA conference: Officials from several agencies reported that the annual FedFleet conference hosted by GSA was a valuable tool for developing and maintaining expertise. Some found the conference useful for teaching core skills, and others said it provided updates on the latest practices to experienced managers.

Online training and tools: The Army Corps provides an online toolbox that contains information on fleet management requirements and internal processes. Similarly, Interior's Fish and Wildlife Service provides online fleet management training, and the National Park Service has established an online fleet management portal that provides a variety of resources, including virtual training.

Communication and collaboration strategies to share specialized knowledge: Officials from Interior's Fish and Wildlife Service reported that regional offices and the program management offices collaborate on decisions involving heavy fleet equipment because the program offices have expertise in that area.

Consolidation of fleet functions and expertise: The Air Force is in the process of transferring its fleet functions to one office. Air Force officials explained that although the transfer is complex and multi-staged, the consolidation will allow for enhanced fleet management. USDA's Forest Service is also actively seeking ways to reduce the number of personnel with part-time fleet responsibilities. Forest Service officials explained that they recently conducted a study that indicated fleet performance could be improved if they reduced fragmentation of personnel, and they are currently deciding how to consolidate some fleet duties.
In April 2013, USDA's Natural Resources Conservation Service began piloting a centralized, national fleet management team to consolidate fleet functions and expertise, and minimize collateral fleet management duties.

Understanding where agencies might improve fleet management practices can inform ways to achieve fleet savings across agencies and address recent concerns about the size and cost of the federal fleet. Specifically, fleets should be well managed to provide appropriate and reliable transportation at the least cost, while meeting agency missions and achieving petroleum and greenhouse gas reduction goals. Complete data and well-designed FMISs are essential for the management of federal fleets. Most of the agencies that we reviewed lack complete fleet data, particularly cost data, and some lack the integrated fleet data systems that would facilitate the analyses, such as life-cycle cost analysis, necessary to support sound fleet decision making. The steps these agencies are taking to improve data collection and system integration have the potential to improve their ability to access and analyze data related to fleet management, including their ability to capture and analyze life-cycle costs. We are not making a recommendation to these agencies because of the actions they are currently undertaking, although it is too soon to tell if these actions will successfully address the issues we have identified. While these agencies are making some progress in improving their data systems, some have not yet developed an approach for estimating indirect costs that are attributable to fleet management. Calculating indirect costs—such as costs for staff, facilities, and equipment—can be a challenge for some agencies when only a portion of these costs is attributable to fleet management. Current GSA fleet management guidance does not include a methodology to calculate these indirect costs.
While developing such a method would only be one part of an agency's overall efforts to improve agency cost data to inform investment decisions, it is a necessary step. By not fully tracking and analyzing total fleet costs, including such indirect costs, some agencies may not have full cost information with which to analyze life-cycle costs and make cost-effective investment decisions, such as decisions about whether to lease or purchase vehicles, and may not be able to fully monitor and report on fleet costs. Determining the number and types of vehicles truly needed by agencies—based on a thorough analysis of vehicle utilization, mission needs, and alternatives—also holds the potential for cost savings. The agencies in this review have made progress in determining their optimal fleet inventories and have set targets and developed plans for achieving these optimal inventories. However, GSA's lack of information on the methods agencies used in producing their targets limits its ability to identify and recommend opportunities for improvement, which could perhaps lead to additional fleet cost savings.

To help improve fleet management, we recommend that the Administrator of GSA take the following two actions.

1. Develop and publish guidance for agencies on estimating indirect costs attributable to fleet management to help ensure that agencies have complete and accurate cost data.

2. Request that when agencies submit their annual updates on their fleet optimization targets, they provide GSA information and supporting documentation on the methods that they used to produce their targets.

We provided a draft of this report to the Acting Administrator of GSA and to the Secretaries of Agriculture, Defense, Homeland Security, Interior, and Veterans Affairs for review and comment. In commenting on this draft, GSA noted that it agreed with our findings and recommendations and that it intends to carry out the recommendations.
DHS and the Department of Defense provided comments that included additional information on efforts they are taking to improve fleet management, especially systems for maintaining fleet data. GSA's, DOD's, and DHS's comments are reprinted in appendices III, IV, and V, respectively. In addition, DHS, USDA, and VA provided technical comments, which we incorporated as appropriate. Interior did not have any comments on this report.

We are sending copies of this report to interested congressional committees; the Secretaries of Agriculture, Defense, Homeland Security, Interior, and Veterans Affairs; and the Administrator of GSA. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or Flemings@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

We assessed the extent to which the following agencies use leading practices to manage their fleets, including the size and costs of these fleets: the Departments of Agriculture (USDA), Interior (Interior), Homeland Security (DHS), and Veterans Affairs (VA), and the United States Air Force (Air Force) and the United States Army Corps of Engineers (Army Corps) within the Department of Defense. Collectively, these agencies account for about 46 percent of the roughly 450,000 civilian and non-tactical military vehicles maintained by the federal government (excluding the U.S. Postal Service). When selecting these agencies, we considered both military and civilian agencies with fleets of more than 5,000 vehicles.
We looked for variation in fleet characteristics, including age of passenger vehicles, change in fleet size from 2005-2011, and change in fleet composition (owned versus leased) from 2005-2011, to ensure that we selected agencies with a range of fleet characteristics. We eliminated agencies that had been the subject of a fleet-related audit within the past 2 years, with the exception of agencies covered in a recent report that stemmed from the same congressional request as this review. We considered those agencies to provide continuity of information. To select agencies for our review and to describe changes in selected agencies' fleet inventories over time, we relied on data in GSA's Federal Fleet Report and on additional data provided by the agencies. We assessed the reliability of these data by reviewing program documentation and quality assurance tests and discussing data elements with GSA and agency staff responsible for these data and found the data sufficiently reliable for these purposes.

Within USDA, Interior, DHS, and VA, we selected the subagencies with the largest fleets and selected a sufficient number of subagencies to account for at least two-thirds of each agency's non-tactical fleet. Consequently, in USDA we reviewed the Natural Resources Conservation Service and Forest Service; in DHS we reviewed U.S. Customs and Border Protection and Immigration and Customs Enforcement; in Interior we reviewed the National Park Service, Fish and Wildlife Service, and Bureau of Land Management; and in VA we reviewed the Veterans Health Administration. Within these four departments, we focused most of our work on these subagencies' management of their fleets, except in areas where the department level has primary responsibility, such as in reporting to GSA on department-wide optimal fleet inventories.
Specifically, except where noted, we focused our work on fleet management practices, such as the collection and analysis of fleet cost data, of these eight subagencies as well as the Air Force and Army Corps. Throughout this report, we refer to these subagencies and their departments as well as to the Air Force and Army Corps as "agencies."

To identify leading fleet management practices, we interviewed recognized fleet management experts from consulting companies and private, local government, and nonprofit entities as well as representatives of fleet management associations. We identified fleet management experts by determining if they met more than one of the following criteria: (1) winner of a fleet management award such as those sponsored by the American Public Works Association, (2) spoke at or organized a relevant fleet management conference such as GSA's FedFleet conference or the Government Fleet Expo and Conference, (3) served as a previous GAO expert, and (4) recommended by other fleet management experts. We interviewed 9 experts, including 2 representatives of fleet management consulting companies, 3 public sector fleet managers, one private sector fleet manager, one fleet manager from a nonprofit, and representatives of 2 fleet management professional associations. These experts include:

Private, Public, and Nonprofit Fleet Managers
- Two fleet managers for cities with approximately 3,000 and 500 vehicles and pieces of equipment, respectively (Portland, Oregon, and Troy, Michigan)
- One fleet manager for a county with 1,800 vehicles (Hillsborough County, Florida)
- One private sector fleet manager overseeing more than 4,000 vehicles
- One fleet manager for a nonprofit overseeing more than 12,000 vehicles

Fleet Management Professional Associations
- Automotive Fleet and Leasing Association
- National Association of Fleet Administrators

We compared practices these experts identified to legal requirements and GSA and Office of Management and Budget guidance related to fleet management.
We also obtained the views of GSA officials on these leading practices. Based on the frequency with which practices were identified, as well as our professional judgment, we synthesized this information into a set of leading practices against which we compared agency practices. These leading practices are: (1) maintaining a well-designed fleet management information system, (2) analyzing life-cycle costs to inform investment decisions, and (3) optimizing fleet size and composition.

To determine the extent to which the selected federal agencies use leading practices to manage their fleets, including the size and cost of these fleets, we reviewed agency fleet management policies, procedures, plans, and other documentation on their fleet management practices and conducted interviews with fleet management officials at these agencies. We also developed a structured questionnaire sent to each agency regarding whether they have a fleet management information system (FMIS), the types of data that are maintained in their FMIS, whether their FMIS is integrated with property and financial management systems, and the efforts under way or planned to improve system integration and fleet data collection. We used the questionnaire responses, as well as supporting information gathered during interviews, to determine if an agency maintained an FMIS and, if so, whether it stored all, some, or none of the data elements recommended by GSA in that system. We used the information provided by agencies, such as the process that agencies follow in making decisions about whether to own or lease vehicles, to determine if agencies analyze total life-cycle costs for their investment decisions. We also used information provided by agencies to determine how agencies are optimizing their fleet size and composition. As part of our review of fleet size and composition, we examined agencies' assignments of home-to-work vehicles and large or non-alternative fuel executive vehicles.
These assignments are discussed in appendix II. To identify challenges these agencies face in managing their fleets and strategies they use to address these challenges, we interviewed agency fleet managers from our selected agencies. We also obtained the views of GSA officials and fleet experts about these challenges and strategies. We analyzed these interviews to identify the challenges that agency fleet managers identified most often and the strategies most frequently identified to address these challenges.

We conducted this performance audit from July 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The agencies we covered all have procedures in place to determine, on a case-by-case basis, whether the assignment of home-to-work vehicles is justified based on mission. Of these agencies, DHS's Customs and Border Protection and Immigration and Customs Enforcement and Interior's National Park Service have the largest number of home-to-work vehicles. (See table below.) These vehicles are generally assigned for purposes of law enforcement duties and field work, such as patrolling the U.S. border, performing immigration law enforcement activities, and conducting field-level audit work. Agencies we reviewed have home-to-work policies in place at the department level or have established policies at the agency or regional/local level establishing permissible uses of home-to-work vehicle assignments.
For example, the National Park Service property management handbook states that all home-to-work vehicle assignments must be authorized by the Secretary of the Interior and that the use of home-to-work vehicles be monitored at the local level so long as an authorization is in effect. DHS home-to-work policy also requires that a log be maintained to ensure use is for official purposes only, that a detailed analysis of proposed costs associated with home-to-work use be provided, and that these vehicle assignments be certified annually to the DHS Office of the Chief Administrative Officer.

The President's May 2011 memorandum to federal agencies on their fleet management directed, among other things, that agencies post on their respective websites the number of vehicles assigned to agency executives that are larger than a mid-size sedan or do not use alternative fuel. Of the agencies we reviewed, DHS's Customs and Border Protection and Immigration and Customs Enforcement and USDA's Natural Resources Conservation Service have the largest number of such vehicles: 49, 20, and 20, respectively. In 2012, GSA noted that DHS as a whole retains a very large executive fleet of 90 luxury sedans and large sport utility vehicles and recommended a reduction in the size of this fleet and in the size of the assigned vehicles. In response, DHS stated that when its executive vehicles are up for replacement, it will closely examine the need for each and, if a vehicle is still needed for mission purposes, consider replacing it with a smaller, more fuel-efficient vehicle. Other agencies we reviewed maintain smaller fleets of such executive vehicles that are either large or not alternative fuel vehicles, ranging from zero to 14.

Susan A. Fleming, 202-512-2834.
In addition to the contact above, Judy Guilliams-Tapia (Assistant Director), Maria Edelstein, Kieran McCarthy, Alison Hoenk, Steve Rabinowitz, Russell Burnett, Tim Guinane, Josh Ormond, Crystal Wesco, and Colin Fallon made key contributions to this report.
Federal agencies (excluding the U.S. Postal Service) spend about $3 billion annually to acquire, operate, and maintain about 450,000 civilian and non-tactical military vehicles. Agencies may lease or buy vehicles from GSA, which also issues requirements and guidance on fleet management. In recent years, Congress and the President have raised concerns about the size and cost of federal agencies' fleets. In 2011, the President directed agencies to determine their optimal fleet inventories and set targets for achieving these inventories by 2015 with the goal of a more cost-effective fleet. GAO was asked to review agency efforts to reduce fleet costs. This report addresses (1) the extent to which selected federal agencies use leading practices to manage their fleets, including their sizes and costs, and (2) any challenges these agencies face in managing their fleets and strategies they use to address these challenges. GAO selected USDA, DHS, Interior, VA, Air Force, and the Army Corps for review based on factors such as fleet size, fleet composition, and changes in fleet size from 2005 to 2011. To identify leading practices, GAO interviewed recognized private sector and government fleet management experts and GSA officials. GAO identified three leading practices for fleet management and found that selected federal agencies--the Departments of Agriculture (USDA), Homeland Security (DHS), the Interior (Interior), and Veterans Affairs (VA); the U.S. Air Force (Air Force); and the Army Corps of Engineers (Army Corps)--follow these practices to varying degrees. These practices are 1) maintaining a well-designed fleet-management information system (FMIS), 2) analyzing life-cycle costs to inform investment decisions, and 3) optimizing fleet size and composition. GAO identified these practices based on views provided by recognized fleet experts and determined that the practices align with legal requirements and General Services Administration (GSA) recommendations. 
None of the agencies GAO reviewed capture in their FMISs all of the data elements recommended by GSA. The types of data missing most frequently are data on fleet costs, including indirect costs, such as salaries of personnel with fleet-related duties. Also, some of these systems are not integrated with other key agency systems. As a result, fleet managers face challenges in performing analyses that can guide fleet decisions. All of these agencies are making efforts to improve their data and FMISs, but some lack an approach for estimating indirect fleet costs. GSA's guidance does not discuss how to estimate these costs. Most of the selected agencies are not fully analyzing life-cycle costs to make decisions about when to replace vehicles. In addition, although most of the selected agencies use life-cycle cost analyses to decide whether to lease or purchase vehicles, some agencies' analyses do not consider a full set of costs. As a result, agencies may not have full information with which to make vehicle replacement and procurement decisions. Officials mainly cited problems with their cost data and FMISs as contributing factors, and efforts to improve in these areas have the potential to enhance agencies' ability to conduct these types of analyses. In response to the President's 2011 directive and related GSA guidance, the selected agencies have set targets for achieving optimal fleet size and composition. Planned changes in fleet sizes from 2011 to 2015 range from DHS's 15 percent fleet reduction to VA's 8 percent increase. GSA reviewed agencies' initial targets in 2012 and recommended some changes, but lacked supporting documentation to explain how most agencies produced their targets. GSA's lack of information on these methods limits its ability to oversee agencies' fleet optimization efforts and help agencies ensure that their fleets are the right size and composition to meet their missions cost-effectively. 
In addition to data-related challenges, agency officials identified three broad fleet management challenges: meeting energy requirements, such as requirements for acquiring alternative fuel vehicles; uncertainty regarding the allocation of funding to fleet management activities; and ensuring that fleet managers have adequate expertise. Agencies have pursued or are pursuing a variety of strategies to address these challenges. These include the fleet optimization process, which calls for agencies to determine how best to fulfill requirements for alternative fuel vehicles; using a working capital fund, which provides a steady stream of funding; and providing online training for fleet managers. GAO recommends that the Administrator of GSA 1) develop and publish guidance for agencies on estimating indirect fleet costs and 2) request that agencies provide supporting documentation on their methods for determining their optimal fleet inventories. GSA agreed with the recommendations.
In our testimony, we stated that the results of our undercover tests illustrated flaws in WHD’s responses to wage theft complaints, including delays in investigating complaints, failure to use all available enforcement tools, failure to follow up on employers who agreed to pay, an ineffective complaint intake process, and complaints not recorded in the WHD database. WHD successfully investigated 1 of our 10 fictitious cases, correctly identifying and investigating a business that had multiple complaints filed against it by our fictitious complainants. Our undercover tests revealed that WHD’s complaint intake process is time-consuming and confusing, potentially discouraging complainants from filing a complaint. Of the 115 phone calls we made directly to WHD field offices, 87 (76 percent) went directly to voicemail. While some offices have a policy of screening complainant calls using voicemail, other offices have staff who answer the phone but may not be able to respond to all incoming calls. In one case, WHD failed to respond to seven messages from our fictitious complainant, including four messages left in a single week. In other cases, WHD delayed over 2 weeks in responding to phone calls or failed to return phone calls from one of our fictitious employers. One of our complainants received conflicting information about how to file a complaint from two investigators in the same office, and one investigator provided misinformation about the statute of limitations in minimum wage cases. In one case, a WHD investigator lied to our undercover investigator about confirming the fictitious business’s sales volume with the Internal Revenue Service (IRS), and did not investigate our complaint any further. WHD management told us that their investigators do not have access to IRS databases, and WHD does not have the legal authority to obtain information about a business from IRS without the owner’s consent.
WHD would be able to check employer-provided information against IRS records if the business owner signed an IRS consent form; however, WHD managers told us that they were unaware of this form and that investigators in the field do not use it. To hear selected audio clips of undercover calls illustrating poor customer service to our fictitious callers, refer to http://www.gao.gov/media/video/gao-09-458t/. Although all of our fictitious complaints alleged violations of laws that WHD enforces, 5 of our 10 complaints were not recorded in WHD’s database. These complaints were filed with four different field offices and included three complaints in which WHD performed no investigative work and two complaints in which WHD failed to record the investigative work performed. According to WHD policies, investigators should enter reasonable complaints into WHD’s database and either handle them immediately as conciliations or refer them to management for possible investigation. However, several of our undercover complaints were not recorded in the database, even after the employee had spoken to an investigator or filed a written complaint. In one of these cases, WHD failed to investigate a child labor complaint alleging that underage children were operating hazardous machinery and working during school hours, and did not record the complaint in its database. The number of complaints that are not entered into WHD’s database is unknown, but this problem is potentially significant. Similar to our 10 fictitious scenarios, in our testimony we identified 20 cases affecting at least 1,160 workers whose employers were inadequately investigated by WHD. We performed data mining on WHD’s database to identify 20 inadequate cases closed during fiscal year 2007.
For several of these cases, WHD (1) did not respond to a complainant for over a year, (2) did not verify information provided by the employer, (3) did not fully investigate businesses with repeat violations, and (4) dropped cases because the employer did not return telephone calls. Five of the cases we investigated were closed based on unverified information provided by the employer. In each case, the information could have been verified by a search of public records, such as bankruptcy records, but the case files contain no evidence that the investigators attempted to perform these searches. WHD officials told us that investigators rely on internet searches to collect information about employers and generally do not have access to other publicly available or subscription databases. Examples include: In November 2005, WHD received a complaint alleging that a boarding school in Montana was not paying its employees proper overtime. Over 9 months after the complaint was received, the case was assigned to an investigator and conducted as an over-the-phone self-audit because, according to the investigator, WHD did not have the resources to conduct an on-site investigation. The employer agreed to pay over $200,000 in back wages to 93 employees, but WHD was subsequently unable to make contact with the business for over 5 months. In June 2007, one week before the 2-year statute of limitations on the entire back wage amount was to expire, the employer agreed to pay $1,000 of the $10,800 in wages due for which the statute of limitations had not yet expired. The investigator refused to accept the $1,000, and WHD recorded the back wages computed as over $10,800 rather than $200,000, greatly understating the true amount owed to employees. WHD determined that the firm had begun paying overtime correctly based on statements made by the employer but did not verify the statements through document review.
No further investigative action was taken and the complainant was informed of the outcome of the case. In another case, the complainant alleged that the company employed 15- year-old children, failed to pay its employees minimum wage, and did not properly report income to IRS. The employer claimed that the company did not meet the income requirement to be covered under federal labor law but did not provide documentary evidence. When the employer failed to return WHD’s telephone calls or attend a conference with the investigator, WHD concluded the case. WHD’s complaint intake processes, conciliations, and other investigative tools are ineffective and often prevent WHD from responding to wage theft complaints in a timely and thorough manner, leaving thousands of low wage workers vulnerable to wage theft. As discussed above, our undercover tests showed that some WHD staff deterred callers from filing a complaint by encouraging employees to resolve the issue themselves, directing most calls to voicemail, not returning phone calls to both employees and employers, accepting only written complaints at some offices, and providing conflicting or misleading information about how to file a complaint. We also found that WHD does not have a consistent process for documenting and tracking complaints, resulting in situations where WHD investigators lose track of the complaints they have received. WHD’s conciliation process is ineffective because in many cases, if the employer does not immediately agree to pay, WHD does not investigate complaints further or compel payment. When an employer refuses to pay, investigators may recommend that the case be elevated to a full investigation, but several WHD District Directors and field staff told us WHD lacks the resources to conduct an investigation of every complaint and focuses resources on investigating complaints affecting large numbers of employees or resulting in large dollar amounts of back wage collections. 
WHD investigators are allowed to close conciliations when the employer denies the allegations, and WHD policy does not require that investigators review employer records in conciliations. In one case study, the employee stated that he thought the business was going bankrupt. As a result, WHD closed the case; however, we used a publicly available online database, Public Access to Court Electronic Records, to determine that the employer had never filed for bankruptcy. WHD management told us that the agency does not provide training on how to use public document searches and investigators do not have access to databases containing this information. In addition, WHD’s poor record-keeping makes WHD appear better at resolving conciliations than it actually is. For example, WHD’s southeast region, which handled 57 percent of conciliations recorded by WHD in fiscal year 2007, has a policy of not recording investigative work performed on unsuccessful conciliations in the database. WHD staff told us that if employers do not agree to pay back wages, cannot be located, or do not answer the telephone, the conciliation work performed will not be recorded in the database, making it appear as though these offices are able to resolve nearly all conciliations successfully. Inflated conciliation success rates are problematic for WHD management, which uses this information to determine the effectiveness of WHD’s investigative efforts. Without information on the outcomes of failed conciliations, WHD cannot identify employers showing a pattern of violations. Finally, we found WHD’s processes for handling investigations and other non-conciliations were frequently ineffective because of significant delays. For example, 5.2 percent of the investigations in our statistical sample were not initiated until over 6 months after the complaint was received, and 6.6 percent took more than one year to complete. See page 26 of appendix I for more information on the methodology of our sample. 
Timely completion of investigations by WHD is important because the statute of limitations for recovery of wages under the FLSA is 2 years from the date of the employer’s failure to pay the correct wages. FLSA, unlike some other laws, does not permit the suspension of the statute of limitations during a federal investigation. Specifically, this means that every day that WHD delays an investigation, the complainant’s risk of becoming ineligible to collect back wages increases. Labor has not sought additional authority to suspend the statute of limitations during an investigation, yet in several district offices, a large backlog prevents investigators from initiating cases within 6 months. One office we visited has a backlog of 7 to 8 months, while another office has a backlog of 13 months. Additionally, our analysis of WHD’s database shows that one district office did not initiate an investigation of 12 percent of complaints until over one year after the complaint was received, including a child labor complaint affecting over 50 minors. Once complaints were recorded in WHD’s database and assigned as a case to an investigator, they were often adequately investigated. One example of a successful investigation involved a complaint alleging that a firm was not paying proper overtime. The case was assigned to an investigator the same day it was filed in April 2007. The WHD investigator reviewed payroll records to determine that the firm owed the complainant back wages. The case was concluded within 3 months when the investigator obtained a copy of the complainant’s cashed check, proving that he had been paid his gross back wages of $184. 
In response to our testimony, the Secretary of Labor announced on March 25, 2009, that WHD would hire an additional 250 investigators to “reinvigorate the work of this important agency, which has suffered a loss of experienced personnel over the last several years.” Our work clearly shows that Labor has left thousands of actual victims of wage theft who sought federal government assistance with nowhere to turn. Our work has shown that when WHD adequately investigates and follows through on cases it is often successful. However, far too often many of America’s most vulnerable workers find themselves dealing with an agency concerned about resource limitations, with ineffective processes, without certain tools necessary to perform effective investigations, and unable to address all allegations of wage theft and other labor law violations within the 2-year statute of limitations. While an influx of new staff may help address some of these problems, without a careful assessment of WHD’s workload and processes, unscrupulous employers will continue taking advantage of our country’s low wage workers. Our work documented several cases in which the employees’ right to file a private lawsuit was constrained by WHD’s delays, resulting in hundreds of thousands of dollars of identified wage theft going uncollected. Therefore, Congress may wish to consider authorizing suspension of the statute of limitations while an investigation by WHD is ongoing. 
We recommend that the Secretary of Labor direct the Administrator of WHD to take the following five actions to improve processes for recording and responding to wage theft complaints: The Administrator should reassess current policies and processes and revise them as appropriate to better ensure that relevant case information is recorded in WHD’s database, including all complaints alleging applicable labor law violations regardless of whether the complaint was substantiated, and all investigative work performed on conciliations, regardless of whether the conciliation was successfully resolved. To provide assurance that WHD personnel interacting with complainants and employers appropriately capture and investigate allegations of labor law violations, and provide appropriate customer service, the Administrator should conduct an assessment of WHD’s complaint intake and resolution processes and revise them as appropriate. To improve the efficiency and effectiveness of WHD personnel handling wage theft complaints, the Administrator should explore providing more automated research tools to WHD personnel that would allow them to identify key information used in investigating complaints such as bankruptcy filings, annual sales estimates for businesses, and information on additional names and locations of businesses and individuals under investigation. To assist in the verification of information provided by employers under investigation, the Administrator should explore gaining access to information maintained by IRS and other agencies as needed through voluntary consent from businesses being investigated. To provide assurance that WHD has adequate human capital and resources available to investigate wage theft complaints, the Administrator should monitor the extent to which new investigators and existing staff are able to handle the volume of wage theft complaints, and if inadequate, what additional resources may be needed. 
We received written comments on a draft of this report from the Acting Assistant Secretary for Employment Standards (see appendix II). Labor concurred with our recommendations and provided additional clarifying information. Labor noted that unlike investigations, conciliations do not result in any determination of whether a violation occurred, but provide a chance to assist more employees than WHD could otherwise assist through more time-consuming investigations. Labor also stated that staff balance a variety of factors, including office workload, when determining whether to investigate a complaint, refer the employee to another organization or advise the employee of the right to file a private lawsuit. Labor provided additional representations on one of our undercover cases, an anonymous complaint alleging that children were operating heavy machinery and working during school hours in a meat packing plant. Because WHD had no record of this call, we reported that WHD had not investigated the complaint or recorded it in its database. In its written response to this report, Labor stated that our child labor complaint was reviewed by two WHD assistant district directors who determined that the complaint was bogus because the business address was a mailbox store and the company was not listed on several business websites. WHD did not call the business directly. Because no supporting documentation was provided for this representation, we could not confirm WHD’s account of investigative steps taken. See appendix II for more information. Labor also provided us technical corrections to the report which we incorporated, as appropriate. We have reprinted Labor’s written comments in their entirety in appendix II. As agreed with your office, unless you publicly release its contents earlier we plan no further distribution of this report until 30 days from the date of this letter. The report is available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The mission of the Department of Labor’s Wage and Hour Division (WHD) includes enforcing provisions of the Fair Labor Standards Act, which is designed to ensure that millions of workers are paid the federal minimum wage and overtime. Conducting investigations based on worker complaints is WHD’s priority. According to WHD, investigations range from comprehensive investigations to conciliations, which consist of phone calls to a complainant’s employer. GAO found that WHD frequently responded inadequately to complaints, leaving low wage workers vulnerable to wage theft. Posing as fictitious complainants, GAO filed 10 common complaints with WHD district offices across the country. The undercover tests revealed sluggish response times, a poor complaint intake process, and failed conciliation attempts, among other problems. In one case, a WHD investigator lied about investigative work performed and did not investigate GAO’s fictitious complaint. At the end of the undercover tests, GAO was still waiting for WHD to begin investigating three cases, delays of nearly 5, 4, and 2 months, respectively. The table below provides additional examples of inadequate WHD responses to GAO’s fictitious complaints. The results of our undercover tests illustrate flaws in WHD’s responses to wage theft complaints, including delays in investigating complaints, complaints not recorded in the WHD database, failure to use all available enforcement tools because of a lack of resources, failure to follow up on employers who agreed to pay, and a poor complaint intake process. For example, WHD failed to investigate a child labor complaint alleging that underage children were operating hazardous machinery and working during school hours. In another case, a WHD investigator lied to our undercover investigator about confirming the fictitious business’s sales volume with the Internal Revenue Service (IRS), and did not investigate our complaint any further.
WHD successfully investigated 1 of our 10 fictitious cases, correctly identifying and investigating a business that had multiple complaints filed against it by our fictitious complainants. Five of our 10 complaints were not recorded in WHD’s database, and 2 of 10 were recorded as successfully paid when in fact the fictitious complainants reported to WHD that they had not been paid. To hear selected audio clips of these undercover calls, go to http://www.gao.gov/media/video/gao-09-458t/. Table 1 provides a summary of the 10 complaints that we filed or attempted to file with WHD.

Complaint: Employee did not receive last paycheck. Outcome: WHD did not return phone calls and failed to record our complaint in their database. WHD failed to return seven messages from our fictitious employee attempting to file a complaint. In two cases during regular business hours, calls were routed to a voicemail message stating that the office was closed. Complaint was never recorded in the database.

Complaint: Employee did not receive overtime for an average of 4 hours per week for 19 weeks. Outcome: The WHD office’s large backlog prevented it from investigating our case in a timely manner. Investigator told our fictitious employee that it would take “8 to 10 months” to begin investigating his complaint. WHD failed to return four calls over 4 consecutive months from our fictitious employee attempting to determine the status of his complaint. Complaint was never recorded in the database.

Complaint: Employee was not paid minimum wage. Outcome: WHD failed to record the initial complaint and never returned calls from our fictitious employer. WHD investigator accepted the complaint but did not attempt to contact our fictitious employer to initiate a conciliation. Between September 24, 2008 and January 12, 2009, WHD failed to return four calls from our fictitious employee attempting to determine the status of his complaint. When the fictitious employee reached the same investigator, she had no record of his initial call and suggested the employee look for another job before filing a complaint against his employer.
Investigator finally accepted the complaint and left a message for the fictitious employer, but did not return his two subsequent calls. Complaint was never recorded in the database.

Complaint: Employee did not receive last paycheck. Outcome: WHD inaccurately recorded that our fictitious employee received back wages. Our fictitious employer told the WHD investigator he would pay, but failed to fax proof of payment to WHD as requested. The WHD investigator never followed up to confirm payment and closed the case as “agreed to pay.” After 3 weeks, our fictitious employee called back and reported that he hadn’t been paid. The WHD investigator contacted our fictitious employer and, when asked, stated “there is no penalty” for failure to pay. After our fictitious employer refused to pay, WHD informed our fictitious employee of his right to take private action. Complaint was still recorded as “agreed to pay” in WHD’s database despite WHD’s knowledge that the fictitious employer had failed to pay the back wages.

Complaint: Employee was not paid minimum wage. Outcome: Investigator lied to our fictitious employee about investigative work performed and did not investigate the complaint. Investigator told the fictitious employee that WHD had no jurisdiction because the gross revenues of the fictitious employer did not meet the minimum standard for coverage, even though the fictitious employee stated that his boss had told him the company’s gross revenues were three times greater than the minimum standard. Investigator claimed that he had obtained information on the fictitious employer’s revenue from an IRS database. However, our fictitious employer had never filed taxes, WHD officials told us they do not have access to IRS databases, and the case file shows that no contact was made with the IRS. We referred information related to this case to Labor’s Office of the Inspector General for further investigation.

Complaint: Employee was not paid minimum wage.
Outcome: WHD readily accepted our fictitious employer’s refusal to pay and stated they could not assist the fictitious employee further. The WHD investigator accepted this complaint and promptly called our fictitious employer. Our fictitious employer agreed that she had failed to pay the minimum wage but refused to pay the back wages due. The WHD investigator accepted the refusal without question and informed our fictitious employee of his right to file a lawsuit. When our fictitious employee asked why WHD could not offer more help, the WHD investigator said she was “bound by the laws I’m able to enforce, the money the Congress gives us” and told our fictitious employee to contact his Congressman to request more resources for WHD.

We identified numerous problems with WHD’s responses to our undercover wage theft complaints. Key areas where WHD failed to take appropriate action include delays in investigating complaints, complaints not recorded in the WHD database, failure to use available enforcement tools, failure to follow up on employers who agreed to pay, and a poor complaint intake process.

Delays in Investigating Complaints. WHD took more than a month to begin investigating five of our fictitious complaints, including three that were never investigated. In one case, the fictitious complainant spoke to an investigator who said he would contact the employer. During the next 4 months, the complainant left four messages asking about the status of his case. When he reached the investigator, the investigator had taken no action on the complaint, did not recall speaking with him, and had not entered the complaint into the WHD database. Allegations involving minimum wage and child labor are a priority for WHD, but a review of WHD records at the end of our work showed that the case was not investigated or entered into WHD’s database.

In another case, an investigator spoke to the fictitious employer, who refused to pay the complainant the back wages. The investigator closed the conciliation without entering the case information or outcome into WHD’s database. This is consistent with the WHD Southeast regional policy of not recording the investigative work performed on unsuccessful conciliations. The effect of not recording unsuccessful conciliations is to make the conciliation success rate for the regional office appear better than it actually is.
The number of complaints that are not entered into WHD’s database is unknown, but this problem is potentially significant since five of our 10 fictitious complaints were not recorded in the database.

Failure to Use All Enforcement Tools. According to WHD staff, WHD lacks the resources to use all enforcement tools in conciliations where the employer refuses to pay. According to WHD policy, when an employer refuses to pay, the investigator may recommend to WHD management that the case be elevated to a full investigation. However, none of our three fictitious employers who refused to pay was placed under investigation. In one case, our fictitious employer refused to pay and the investigator accepted this refusal without question, informing the complainant that he could file a private lawsuit to recover the $262 due to him. When the complainant asked why WHD couldn’t provide him more assistance, the investigator replied, “I’ve done what I can do, I’ve asked her to pay and she can’t…I can’t wring blood from a stone,” and then suggested the complainant contact his Congressman to ask for more resources for WHD to do their work. According to WHD policy and interviews with staff, WHD doesn’t have the resources to conduct an investigation of every complaint and prefers to investigate complaints affecting large numbers of employees or resulting in large dollar amounts of back wages. One district director told us that conciliations result from “a mistake” on the part of the employer and he does not like his investigators spending time on them. However, when WHD cannot obtain back wages in a conciliation and decides not to pursue an investigation, the employee’s recourse is to file private litigation. Low wage workers may be unable to afford attorney’s fees or may be unwilling to argue their own case in small claims court, leaving them with no other options to obtain their back wages.

Failure to Follow Up on Employers Who Agreed to Pay. In two cases, the investigator told the employer he was required to submit proof of payment, but neither investigator followed up when the employer failed to provide the required proof. The complainants in both cases later contacted the investigators to report they had not been paid. The investigators attempted to negotiate with both fictitious employers, but did not update the case entries in WHD’s database to indicate that the complainants never received back wages, making it appear as though both cases were successfully resolved.
These two cases cast doubt on whether complainants whose conciliations are marked “agreed to pay” in the WHD database actually received their back wages.

In another case, our complainant told the investigator that his employer had sales of $1.5 million in 2007, but the investigator claimed that he had obtained information about the business from an IRS database showing that the fictitious business did not meet the gross revenue threshold for coverage under federal law. Our fictitious business had not filed tax returns, and WHD officials told us that their investigators do not have access to IRS databases. A review of the case file shows that no information from the IRS was reviewed by the investigator. Information related to this case was referred to Labor’s Office of the Inspector General for further investigation.

WHD successfully investigated a business that had multiple complaints filed against it by our fictitious complainants. WHD identified two separate conciliations pending against the same fictitious business, both originating from complaints filed by our fictitious complainants. These conciliations were combined into an investigation, the correct procedure for handling complaints affecting multiple employees. The investigator continued the investigation after the fictitious employer claimed that the business had filed for bankruptcy and attempted to visit the business when the employer stopped returning phone calls. The investigator did not use public records to verify that the employer had filed for bankruptcy, but otherwise made reasonable efforts to locate and investigate the business.

Protections of the Fair Labor Standards Act apply to employees engaged in interstate commerce or in the production of goods for interstate commerce. The act also applies to all employees of an enterprise that has at least $500,000 in annual sales or business and has employees engaged in interstate commerce or in the production of goods for interstate commerce, or that has employees handling, selling, or otherwise working on goods or materials that have been moved in or produced for interstate commerce by any person. 29 U.S.C. § 203. Even though an enterprise may have separate locations, it is considered a single enterprise for the $500,000 coverage determination if related activities are performed through unified operation or common control by any person or persons for a common business purpose.
Table 2 provides a summary of 10 cases closed by WHD between October 1, 2006 and September 30, 2007.

Alleged violation(s): Minimum Wage and Overtime (FLSA). Two former employees alleged that the firm was not paying minimum wage and overtime to employees. One WHD investigator visited the establishment and took surveillance photographs but did not speak with the employer. Almost 2 months later, another WHD investigator visited the establishment and found that the employer had vacated the premises. A realty broker informed WHD that he believed the employer had closed, not relocated, causing WHD to close the case. Using public data, we confirmed that the employer was still active as of January 2009 and made contact with an employee of the firm who told us that the employer had moved from the location WHD visited.

Fort Lauderdale, FL. Alleged violation(s): Overtime (FLSA). Complainant alleged he was due over $525 in overtime back wages, but commented to WHD that he thought his employer was filing for bankruptcy. WHD dropped the case, stating that the employer had declared bankruptcy. The employee was informed of his right to file a private lawsuit to recover back wages. WHD received a fax from this employer after the case had been concluded stating that the employee had been paid $245 in per diem; however, the documentation did not support that the overtime back wages were paid, and no further investigative action was taken. Bankruptcy court records show that the employer had not filed for bankruptcy, and we confirmed that the employer was still in business in December 2008.

Alleged violation(s): Minimum Wage (FLSA). Employee alleged she was owed minimum wage for 145 hours of work. Employer stated that the wages were due by the previous owner, but did not provide proof to substantiate this claim or return subsequent telephone calls. WHD dropped the case and advised the employee of her right to file private litigation.

Alleged violation(s): Minimum Wage (FLSA). WHD attempted to contact the employer two times over a period of 2 days to discuss allegations.
Case was dropped when no one from the employer, which was a Sheriff’s office, returned WHD’s telephone calls. WHD informed the complainant that private litigation could be filed in order to recover back wages.

Alleged violation(s): Minimum Wage and Overtime (FLSA). Employer denied knowing the employee and stated that the employee worked for a subcontractor, but refused to provide the name of the company. WHD closed the case, recorded that the employer was in compliance with labor laws, and informed the individual who filed the complaint on behalf of the employee of his right to file a civil lawsuit. The employee filed a civil suit, during which the employer agreed he owed back wages. The court ruled that the employee was due $1,500, the same amount cited in the original complaint to WHD.

Construction / Anonymous. Alleged violation(s): Child Labor and Minimum Wage (FLSA). The complainant alleged that the company employed 15-year-old children, failed to pay its employees minimum wage, and did not properly report income to the Internal Revenue Service. The employer alleged that the company did not meet the income requirement to be covered under federal labor law, but did not provide documentary evidence. The employer failed to return WHD’s telephone calls or attend the initial conference. WHD concluded this case with no further investigative action.

Alleged violation(s): Minimum Wage and Overtime (FLSA). WHD attempted to set up a meeting with the company, but it was postponed so the owner could go deer hunting. Subsequent calls from WHD were not answered. Almost 8 months later, WHD conducted an announced site visit and closed the case, citing that the employer appeared to be out of business because no employees were on site during the visit and phone calls were unanswered.
Public records show that the employer later signed and submitted an annual statement 2 months after the case was closed and we successfully contacted the employer in November 2008, who confirmed they were located at the same address visited by WHD. Boarding School / Teen Counselor (FLSA) Investigator assigned to case over 9 months after complaint was received. Complaint handled as a self audit, allowing the employer to review its own records for the alleged violations. WHD determined that the employer had begun paying correct overtime based on the employer’s verbal statements; no updated records were reviewed. The employer found that it owed over $200,000 to 93 employees, but delayed until the statute of limitations had almost expired before offering to pay a total of only $1,000 in back wages. WHD did not accept this amount, closed the case, and informed the complainant of the outcome. Type of alleged violation(s) Overtime (FLSA) Employer refused to comply with the law throughout WHD’s investigation and took months to produce payroll records. WHD determined that over $66,000 in back wages was due to 21 employees and stated in the case file that this estimate was “probably low.” The employer generally agreed with WHD’s findings and agreed to pay back wages, but then later refused to respond to WHD or change payroll practices. Over one year after the employer’s agreement to pay, WHD decided not to pursue litigation in part, because the case was considered “significantly old.” Employees were notified of their right to file private litigation in order to recover back wages. Child Labor/Minimum Wage/Overtime (FLSA) Case assigned to an investigator over 22 months after the complaint was received. WHD determined that the restaurant and related enterprises owed approximately $230,000 to 438 employees for minimum wage and overtime violations, and for depositing a percentage of employee tips into a business account. 
Employer agreed to pay back wages for minimum wage and overtime violations, but did not agree to pay back the collected tips. WHD did not accept the partial back wage offer and closed the case with no collection of back wages. A realty broker informed WHD that he did not believe the firm had relocated. As a result, WHD closed the investigation. Using publicly available information, we found that the business was active as of January 2009 and located at a different address approximately 3 miles away from its old location. We contacted the factory and spoke with an employee, who told us that the business had moved from the address WHD visited. Case Study 4: In 2007, WHD received a complaint from a former corrections officer who alleged that a county Sheriff’s office did not pay $766 in minimum wages. The WHD investigator assigned to work on this case made two calls to the Sheriff’s office over a period of 2 days. Two days after the second call, WHD dropped this case because no one from the employer had returned the calls. WHD did not make additional efforts to contact the employer or investigate the allegations. WHD informed the complainant that private litigation could be filed in order to recover back wages. We successfully contacted the Sheriff’s office in November 2008. Case Study 5: In May 2007, a non-profit community worker center contacted WHD on behalf of a day laborer alleging that his employer owed him $1,500 for the previous three pay periods. WHD contacted the employer, who stated that the complainant was actually an employee of a subcontractor, but refused to provide the name of the subcontractor. WHD closed the case without verifying the employer’s statement and informed the community worker center of the employee’s right to file private litigation. WHD’s case file indicates that no violations were found and that the employer was in compliance with applicable labor laws. According to the Executive Director of the worker center, approximately 2 weeks later, WHD contacted him and claimed that the employer in the complaint had agreed to pay the back wages. When the employer did not pay, the complainant sued the employer in small claims court. During the course of the lawsuit, the employer admitted that he owed the employee back wages. The court ruled that the employer owed the employee $1,500 for unpaid wages, the same amount in the original complaint to WHD.
The complaint was handled as a self-audit, allowing the employer under investigation to conduct its own review of records and calculate the back wages due to employees. The employer found that it owed over $200,000 in overtime back wages for hours worked between September 2004 and June 2005. WHD determined that the firm began paying overtime correctly in June 2006 based on statements made by the employer, but did not verify the statements through document review. After the employer’s attorney initially indicated that they would agree to pay the over $200,000 in back wages, WHD was unable to make contact with the business for months. WHD records indicate that the investigator believed that the firm was trying to find a loophole to avoid paying back wages. In June 2007, one week before the 2-year statute of limitations on the entire back wage amount was to expire, the employer agreed to pay $1,000 out of the $10,800 that had not yet expired. The investigator refused to accept the $1,000, saying that it would have been “like settling the case.” WHD recorded the back wages computed as over $10,800 rather than $200,000, greatly understating the true amount owed to employees. WHD noted in the case file that the firm refused to pay the more than $10,800 in back wages, but did not recommend assessing a penalty because they felt the firm was not a repeat offender and there were no child labor violations. No further investigative action was taken and the complainant was informed of the outcome of the case. In another case study, WHD received complaints in June 2003 and early 2005 against two restaurants owned by the same enterprise. One complaint alleged that employees were working “off the clock” and that servers were being forced to give a percentage of their tips to the employer. The other complaint alleged off-the-clock work, illegal deductions, and minimum wage violations. This case was not assigned to an investigator until May 2005, over 22 months after the 2003 complaint was received. The WHD investigator assigned to this case stated that the delay in the case assignment was because of a backlog at the Nashville District Office that has since been resolved. WHD conducted a full investigation and found that 438 employees were due approximately $230,000 in back wages for minimum wage and overtime violations and the required tip pool.
Although tip pools are not illegal, WHD determined that the employer’s tip pool was illegal because the company deposited the money into a business account. Further, the firm violated child labor law by allowing minors under 16 years old to work more than 3 hours on school days. The employer disagreed that the tip pool was illegal and stated that a previous WHD investigator had told him that it was acceptable. The employer agreed to pay back wages for the minimum wage and overtime violations, but not the wages that were collected for the tip pool. WHD informed the employer that partial back wages would not be accepted, and this case was closed. Information on additional cases can be found in appendix II. WHD’s complaint intake process, conciliations, and other investigative tools are ineffective and often prevent WHD from responding to wage theft complaints in a timely and thorough manner, leaving thousands of low wage workers vulnerable to wage theft. Specifically, we found that WHD often fails to record complaints in its database, and a poor complaint intake process potentially discourages employees from filing complaints. For example, 5 of our 10 undercover wage theft complaints submitted to WHD were never recorded in the database, including a complaint alleging that underage children were operating hazardous machinery during school hours. WHD’s conciliation process is ineffective because in many cases, if the employer does not immediately agree to pay, WHD does not investigate the complaint further or compel payment. In addition, WHD’s poor record-keeping makes WHD appear better at resolving conciliations than it actually is. For example, WHD’s Southeast region, which handled a substantial share of the conciliations recorded by the agency in fiscal year 2007, has a policy of not recording unsuccessful conciliations in the WHD database. Finally, we found WHD’s processes for handling investigations and other non-conciliations were frequently ineffective because of significant delays. Once complaints were recorded in WHD’s database and assigned as a case to an investigator, they were often adequately investigated. Complainants may attempt to file complaints but are discouraged by WHD’s complaint intake process and eventually give up. Regarding WHD’s record-keeping failures, we found that WHD does not have a consistent process for documenting and tracking complaints. This has resulted in situations where WHD investigators lose track of the complaints they have received.
According to WHD policy, investigators should enter complaints into WHD’s database and either handle them immediately as conciliations or refer them to management for possible investigation. However, several of our undercover complaints were not recorded in the database, even after the employee had spoken to an investigator or filed a written complaint. This is particularly troubling in the case of our child labor complaint, because it raises the possibility that WHD is not recording or investigating complaints concerning the well-being and safety of the most vulnerable employees. Employees may believe that WHD is investigating their case, when in fact the information they provided over the phone or even in writing was never recorded. Since there is no record of these cases in WHD’s database, it is impossible to know how many complaints are reported but never investigated. According to several WHD District Directors, in conciliations where the employer refuses to pay, their offices lack the resources to investigate further or compel payment, contributing to the failures we identified in our undercover tests, case studies, and statistical sample. When an employer refuses to pay, investigators may recommend that the case be elevated to a full investigation, but several WHD District Directors and field staff told us WHD lacks the resources to conduct an investigation of every complaint and focuses resources on investigating complaints affecting a large number of employees or resulting in large dollar amounts of back wage collections. Conducting a full investigation allows WHD to identify other violations or other affected employees, attempt to negotiate back wage payments with the employer and, if the employer continues to refuse, refer the case to the Solicitor’s Office for litigation. However, in some conciliations, the employer is able to avoid paying back wages simply by refusing. While WHD informs complainants of their right to file suit against their employer to recover back wages, it is unlikely that most low wage workers have the means to hire an attorney, leaving them with little recourse to obtain their back wages. Once a complaint is assigned to an investigator, at least one office allows the investigator only 10 days to resolve conciliations, which may not allow time for additional follow-up work to be performed.
WHD staff in one field office told us they are limited to three unanswered telephone calls to the employer before they are required to drop the case and advise the complainant of his right to file suit to recover back wages. Staff in several field offices told us that they are not permitted to make site visits to employers for conciliations. WHD investigators are allowed to drop conciliations when the employer denies the allegations, and WHD policy does not require that investigators review employer records in conciliations. In one case, the employee stated that he thought the business was filing for bankruptcy. WHD dropped the case, stating that the employer declared bankruptcy, and informed the employee of his right to file a private lawsuit to recover back wages. Bankruptcy court records show that the employer had not filed for bankruptcy, and we confirmed that the employer was still in business in December 2008. One WHD investigator told us it is not necessary to verify bankruptcy records because conciliations are dropped when the employer refuses to pay, regardless of the reason for the refusal. Our undercover tests and interviews with field staff identified serious record-keeping flaws which make WHD appear better at resolving conciliations than it actually is. For example, WHD’s Southeast region, which handled a substantial share of the conciliations recorded by WHD in fiscal year 2007, has a policy of not recording investigative work performed on unsuccessful conciliations in the database. WHD staff told us that if employers do not agree to pay back wages, cannot be located, or do not answer the telephone, the conciliation work performed will not be recorded in the database, making it appear as though these offices are able to resolve nearly all conciliations successfully. Inflated conciliation success rates are problematic for WHD management, which uses this information to determine the effectiveness of WHD’s investigative efforts. In some offices with this policy, the complaint that the conciliation was based on would be recorded in WHD’s database. However, the complaint would appear as though it had never been investigated, because the investigative work and the outcome of the conciliation would not be recorded in the database. Other offices do not enter the complaint into the database. When employers initially agree to pay in a conciliation but renege on this promise, WHD investigators do not change the outcome of the closed case in WHISARD to show that the employee did not receive back wages.
While some investigators wait for proof of payment before closing the conciliation, others told us that they close conciliations as soon as the employer agrees to pay. Even if the employee later tells the investigator that he has not been paid, investigators told us they do not change the outcome of closed cases in the WHD database. WHD publicly reports the total back wages collected and the number of employees receiving back wages, but these statistics are overstated because an unknown number of conciliations recorded as successfully resolved in the WHD database did not actually result in the complainant receiving the back wages. These poor record-keeping practices represent a significant limitation of the population we used to select our statistical sample, because the number of conciliations actually performed by WHD cannot be determined and conciliations recorded as successfully resolved may not have resulted in back wages for the employees. As a result, the percentage of inadequate conciliations is likely higher than the failure rate estimated in our sample. We found that over 5 percent of conciliations in our sample were inadequately conciliated because WHD failed to verify the employer’s claim that no violation occurred, closed the case after the employer did not return phone calls, or closed the case after the employer refused to pay back wages. However, we found that many of the conciliations recorded in WHD’s database were adequately investigated. One example of a successful conciliation involved a complaint alleging that a firm was not paying minimum wage. The complaint was assigned to an investigator the same day it was filed in September 2007. The WHD investigator contacted the owner, who admitted the violation and agreed to pay back wages of over $1,000. The case was concluded the same day when the investigator obtained a copy of the complainant’s check from the employer and spoke to the complainant, confirming that he was able to cash the check and had received his back wages. Because we followed a probability procedure based on random selections, our sample is only one of a number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (expressed as plus or minus a number of percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn.
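The confidence-interval language above can be illustrated with a short calculation. The sketch below uses the standard normal approximation for a sample proportion; the failure count and sample size are hypothetical stand-ins, not GAO's actual figures.

```python
# Illustrative sketch only: computing a 95 percent confidence interval for an
# estimated failure rate, as described in the methodology above. The failure
# count and sample size here are hypothetical, not GAO's actual numbers.
import math

def proportion_ci_95(failures, sample_size):
    """Normal-approximation 95% confidence interval for a sample proportion."""
    p = failures / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)  # z = 1.96 for 95%
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 20 inadequate cases observed in a sample of 110.
low, high = proportion_ci_95(failures=20, sample_size=110)
print(f"estimated rate {20 / 110:.1%}, 95% CI {low:.1%} to {high:.1%}")
```

Scaling the interval's endpoints by the population size is what produces range estimates of the kind GAO reports (a number of failures in the population, rather than a percentage).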
The 95 percent confidence interval surrounding our estimate of inadequate conciliations ranged from 206 to about 1,200 failures in the population. We found that WHD’s process for handling investigations and other non-conciliations was frequently ineffective because of significant delays. However, once complaints were recorded in WHD’s database and assigned as a case to an investigator, they were often successfully investigated. Almost 19 percent of non-conciliations in our sample were inadequately investigated, including cases that were not initiated until more than 6 months after the complaint was received, cases closed after an employer refused to pay, and cases that took over a year to complete. In addition, seven cases failed two of our tests. The reasons why non-conciliations were inadequate included the following: cases not initiated within 6 months of the complaint; cases closed due to the employer’s refusal to pay; cases with violations found that were not referred to Labor’s Office of the Solicitor for litigation; cases taking more than one year to complete; and cases where WHD failed to review employer records. Six of the cases in our sample failed because they were not initiated until over 6 months after the complaint was received. According to WHD officials, non-conciliations should be initiated within 6 months of the date the complaint is filed. Timely completion of investigations by WHD is important because the statute of limitations for recovery of wages under the FLSA is 2 years from the date of the employer’s failure to pay the correct wages. Specifically, this means that every day WHD delays an investigation, the complainant’s risk of becoming ineligible to collect back wages increases. The 95 percent confidence interval surrounding our estimate of inadequate non-conciliations ranged from over 2,000 to several thousand failures in the population. In one of our sample cases, WHD sent the complainant a letter 6 months after his overtime complaint was filed stating that, because of a backlog, action had not yet been taken on his behalf.
The letter requested that the complainant inform WHD within a set number of business days of whether he intended to take private action. The case file shows that the complainant responded to WHD. One month later, WHD assigned the complaint to an investigator and sent the complainant another letter stating that if he did not respond within a set number of days, the case would be closed. WHD closed the case on the same day the letter was sent. Our case studies discussed above and in appendix II include examples of complaints not investigated for over a year, cases closed based on unverified information provided by the employer, businesses with repeat violations that were not fully investigated, and cases dropped because the employer did not return telephone calls. For example, in one case, WHD found that 21 employees were due at least $66,000 in back wages for overtime violations. Throughout the investigation, the employer was uncooperative and resisted providing payroll records to WHD. At the end of the investigation, the firm agreed with WHD’s findings and promised to pay back wages, but then stopped responding to WHD. The employees were never paid back wages and, over a year later, the Solicitor’s Office decided not to pursue litigation or any other action in part because the case was considered “significantly old.” The statute of limitations for recovery of wages under the FLSA and the Davis-Bacon Act is 2 years from the employer’s failure to pay the correct wages. 29 U.S.C. § 255. For willful violations, in which the employer knew its actions were illegal or acted recklessly in determining the legality of its actions, the statute of limitations is 3 years. Federal courts have enforced the statute of limitations even if Labor is investigating a complaint. Shandelman v. Schuman, 92 F. Supp. 334 (E.D. Pa. 1950). Some cases in our sample were not assigned until over a year after the complaint was received, including child labor complaints affecting a large number of minors. Because the statute of limitations to collect back wages under the FLSA is 2 years, WHD is putting complainants at risk of collecting only a fraction of the back wages they would have been able to collect at the time of the complaint. WHD also failed to compel records and other information from employers. While WHD Regional Administrators are legally able to issue subpoenas, WHD has not extended this ability to individual investigators, who therefore depend on employers to provide records and other documents voluntarily.
In cases where public records are available to verify employer statements, WHD investigators do not have certain tools that would facilitate access to these documents. For example, we used a publicly available online database, Public Access to Court Electronic Records (PACER), to determine that an employer who claimed to have filed for bankruptcy had not actually done so. However, there is no evidence in the case file that the WHD investigator performed this check. WHD officials told us that investigators do not receive training on how to use public document searches and do not have access to databases containing this information, such as PACER. We found that, once complaints were recorded in WHD’s database and assigned as a case to an investigator in a timely manner, they were often successfully investigated. As discussed above, WHD does not record all complaints in its database and discourages employees from filing complaints, some of which may allege significant labor violations suitable for investigation. In addition, many cases are delayed months before WHD initiates an investigation. However, our sample identified many cases that were adequately investigated once they were assigned to an investigator. Specifically, just over 81 percent of the non-conciliations in our sample were adequately investigated. One example of a successful investigation involved a complaint alleging that a firm was not paying proper overtime; it was assigned to an investigator the same day it was filed in April 2007. The WHD investigator reviewed payroll records to determine that the firm owed the complainant back wages. The case was concluded within 3 months, when the investigator obtained a copy of the complainant’s cashed check, proving that he had been paid his gross back wages of $184. In conclusion, low wage workers find themselves dealing with an agency concerned about resource constraints, with ineffective processes, and without certain tools necessary to perform timely and effective investigations of wage theft complaints. Unfortunately, far too often the result is unscrupulous employers taking advantage of our country’s low wage workers. Mr. Chairman and Members of the Committee, this concludes my statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov, or Jonathan Meyer at (214) 777-766 or meyerj@gao.gov.
Individuals making key contributions to this testimony included Erika Axelson, Christopher Backley, Carl Barden, Shafee Carnegie, Randall Cole, Merton Hill, Jennifer Huffman, Barbara Lewis, Jeffery McDermott, Andrew McIntosh, Sandra Moore, Andrew O’Connell, Gloria Proa, Robert Rodgers, Ramon Rodriguez, Sidney Schwartz, Kira Self, and Daniel Silva. Contact information for our Office of Congressional Relations and Public Affairs may be found on the last page of this testimony. To review the effectiveness of WHD’s complaint intake and conciliation processes, GAO investigators attempted to file 11 complaints about 10 fictitious businesses with WHD district offices in Baltimore, Maryland; Birmingham, Alabama; Dallas, Texas; Miami, Florida; San Jose, California; and West Covina, California. These field offices handle 13 percent of all cases investigated by WHD. The complaints we filed with WHD included minimum wage, last paycheck, overtime, and child labor violations. GAO investigators obtained undercover addresses and phone numbers to pose as both complainants and employers in these scenarios. As part of our overall assessment of the effectiveness of investigations conducted by WHD, we obtained and analyzed WHD’s Wage and Hour Investigative Support and Reporting Database (WHISARD), which contained 32,323 cases concluded between October 1, 2006 and September 30, 2007. We analyzed WHD’s WHISARD database and determined it was sufficiently reliable for the purposes of our audit and investigative work. We analyzed random probability samples of conciliations and non-conciliations to contribute to our overall assessment of whether WHD’s processes for investigating complaints are effective. Because we followed a probability procedure based on random selections, our samples are only one of a number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of each particular sample’s results as a 95 percent confidence interval (expressed as plus or minus a number of percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. To determine whether an investigation was inadequate, we reviewed case files and confirmed details of selected cases with the investigator or technician assigned to the case. In our sample tests, conciliations were determined to be inadequate if WHD did not successfully initiate investigative work within 3 months or did not complete investigative work within 6 months.
Non-conciliations were determined to be inadequate if WHD did not successfully initiate investigative work within 6 months, did not complete investigative work within a year, or did not refer cases in which the employer refused to pay to Labor’s Office of the Solicitor. Both conciliations and non-conciliations were determined to be inadequate if WHD did not contact the employer, did not correctly determine coverage under federal law, did not review employer records, or did not compute and assess back wages when appropriate. We also gathered information by conducting walk-throughs of investigative processes with management and interviewing WHD officials. We gathered information about district office policies and individual cases by conducting site visits at the Miami and Tampa, Florida district offices, and by conducting telephone interviews with technicians, investigators, and district directors in 23 field offices and with headquarters officials in Washington, D.C. We spoke with Labor’s Office of the Solicitor in Dallas, Texas and Washington, D.C. To develop macro-level data on WHD complaints, we analyzed data for cases closed between October 1, 2006 and September 30, 2007 by region, district office, and case outcome. To identify cases of inadequate WHD responses to complaints, we data-mined WHISARD to identify closed cases in which a significant delay occurred in responding to a complaint (cases taking more than 6 months to initiate or 1 year to complete), an employer could not be located, or the case was dropped when an employer refused to pay. We obtained and analyzed WHD case files, interviewed WHD officials, and reviewed publicly available data from online databases and the Department of the Treasury’s Financial Crimes Enforcement Network to gather additional information about these cases. We interviewed complainants who contacted GAO directly or were referred to us by labor advocacy groups to gather information about WHD’s investigation of their complaints. The table below provides a summary of ten additional cases of inadequate Wage and Hour Division (WHD) investigations. These cases include instances where WHD dropped cases after (1) an employer refused to cooperate with an investigation, (2) WHD identified violations but failed to force the employer to pay employees their owed wages, and (3) an employer alleged it was bankrupt when in fact the employer was not. Minimum Wage (FLSA) Complainant alleged he was not paid minimum wage.
WHD attempted to contact the employer to substantiate the claim, but the employer did not return WHD’s calls. Case was closed and the employee was informed of his right to file private litigation. We were able to make contact with the employer in February 2009. Minimum Wage (FLSA) Employer would not make a commitment to WHD to pay $937 in back wages. WHD closed the case and recorded that the employer was in compliance with labor laws. Minimum Wage (FLSA) Employer admitted owing wages but refused to pay because the employee had been involved in a vehicular accident in a company vehicle. WHD requested that the employer comply with labor laws in the future, but the employer refused. The WHD investigator stated that the case was closed and the employee was informed of his right to file a private lawsuit. Failure to Pay Overtime (FLSA) Employer admitted to WHD that employees were not paid overtime and he did not know how much they were paid per hour. One employee told the investigator that the employees had been threatened, and another source informed the investigator that the employer had threatened employees with a machete so they would lie during WHD interviews, but the investigator still determined that the employer’s violations did not appear to warrant further action. An investigation into the business was later directed based on a tip received from a competitor, not the complaint of a single worker. The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s Web site (www.gao.gov). Each weekday afternoon, GAO posts on its Web site newly released reports, testimony, and correspondence.
To have GAO e-mail you a list of newly posted products, go to www.gao.gov and select “E-mail Updates.” The price of each GAO publication reflects GAO’s actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO’s Web site, http://www.gao.gov/ordering.htm. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. In addition to the contacts named above, individuals making key contributions to this report included Erika Axelson, Christopher Backley, Carl Barden, Shafee Carnegie, Randall Cole, Merton Hill, Jennifer Huffman, Barbara Lewis, Jeffery McDermott, Andrew McIntosh, Sandra Moore, Andrew O’Connell, Gloria Proa, Robert Rodgers, Ramon Rodriguez, Sidney Schwartz, Kira Self, and Daniel Silva.
The mission of the Department of Labor's Wage and Hour Division (WHD) includes enforcing provisions of the Fair Labor Standards Act (FLSA), which is designed to ensure that millions of workers are paid the federal minimum wage and overtime. Conducting investigations based on worker complaints is WHD's priority. On March 25, 2009, GAO testified on its findings related to (1) undercover tests of WHD's complaint intake process, (2) case study examples of inadequate WHD responses to wage complaints, and (3) the effectiveness of WHD's complaint intake process, conciliations (phone calls to the employer), and other investigative tools. To test WHD's complaint intake process, GAO posed as complainants and employers in 10 different scenarios. To provide case study examples and assess effectiveness of complaint investigations, GAO used data mining and statistical sampling of closed case data for fiscal year 2007. This report summarizes the testimony (GAO-09-458T) and provides recommendations. GAO found that WHD frequently responded inadequately to complaints, leaving low wage workers vulnerable to wage theft and other labor law violations. Posing as fictitious complainants, GAO filed 10 common complaints with WHD district offices across the country. These tests found that WHD staff deterred fictitious callers from filing a complaint by encouraging employees to resolve the issue themselves, directing most calls to voicemail, not returning phone calls to both employees and employers, and providing conflicting or misleading information about how to file a complaint. An assessment of complaint intake processes would help ensure that WHD staff provide appropriate customer service. To hear clips of undercover calls illustrating poor customer service, see http://www.gao.gov/media/video/gao-09-458t/ . According to WHD policies, investigators should enter all reasonable complaints into WHD's database. 
However, even though all of GAO's fictitious complaints alleged violations of the laws that WHD enforces, 5 of 10 complaints were not recorded in WHD's database. In addition, WHD policy in one region instructs staff not to record the investigative work done on small cases in which the employer refuses to pay, making WHD appear better at resolving these cases than it is. Reassessing its processes for recording complaints would help WHD ensure that all case information is available. Similar to the 10 fictitious scenarios, GAO identified 20 cases affecting at least 1,160 real employees whose complaints were inadequately investigated by WHD. Five of the cases were closed based on false information provided by the employer that could have been verified by a search of public records, such as bankruptcy records, but WHD investigators do not have access to publicly available or subscription databases. In another case, the employer claimed that the company did not meet the income requirement to be covered under federal law but did not provide documentary evidence. WHD investigators do not have access to income information collected by the Internal Revenue Service and were unable to verify the employer's claim. Obtaining more research tools and implementing information sharing processes with other agencies would assist WHD in verifying employer-provided information. GAO's overall assessment found ineffective complaint intake and investigation processes. WHD officials often told GAO that WHD lacks the resources to conduct an investigation of every complaint, allowing employers in some small cases to avoid paying back wages simply by refusing to pay. GAO found that WHD's investigations were often delayed by months or years. Monitoring the extent to which WHD staff are able to handle the volume of complaints would provide assurance that WHD has sufficient resources available. 
Under FLSA, the statute of limitations is 2 years from the date of the violation, meaning that every day that WHD delays an investigation, the complainant's risk of becoming ineligible to collect back wages increases. However, in several offices, backlogs prevent investigators from initiating cases within 6 months. Suspending the statute of limitations during a WHD investigation would prevent employees from losing back wages due to delays.
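The effect of the 2-year limitations period described above can be shown with a simple calculation: each week of unpaid wages becomes unrecoverable 2 years after it was due, so every month of investigation delay shrinks the amount a complainant can still collect. The dates, weekly underpayment, and function below are hypothetical illustrations, not figures from the report.

```python
# Minimal sketch of the 2-year FLSA statute of limitations described above:
# wages owed for any pay period more than 2 years before a claim is filed are
# unrecoverable. All dates and amounts here are hypothetical.
from datetime import date, timedelta

def recoverable_weeks(violation_start, violation_end, filing_date):
    """Count weekly pay periods still inside the 2-year limitations window."""
    cutoff = filing_date - timedelta(days=2 * 365)  # approximate 2-year lookback
    week, weeks = violation_start, 0
    while week <= violation_end:
        if week >= cutoff:
            weeks += 1
        week += timedelta(weeks=1)
    return weeks

# Hypothetical employer underpaid $50/week throughout 2006.
start, end = date(2006, 1, 6), date(2006, 12, 29)
prompt = recoverable_weeks(start, end, date(2007, 1, 15)) * 50   # filed promptly
delayed = recoverable_weeks(start, end, date(2008, 7, 15)) * 50  # 18-month delay
print(f"filed promptly: ${prompt}, filed after delay: ${delayed}")
```

Under these assumptions, a prompt filing recovers every week of underpayment, while the delayed filing loses all weeks that have aged past the 2-year cutoff, which is the dynamic the report describes.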
Natural gas gathering is the collection of gas from the wellhead for delivery to the processing plant or transportation pipeline. Compared with transportation pipelines, gathering lines are generally smaller in diameter and shorter in length, and require relatively lower pressure to push the gas through the line. According to a recent report, over 2,100 companies perform gathering services in the states that produce natural gas. Natural gas storage involves the transfer of natural gas from the production field to a depleted underground reservoir or other holding facility for later use. Gas is generally injected into storage facilities during warmer months, when demand is lower, and withdrawn during winter months. Traditionally, pipeline companies use storage to manage and balance the movement of gas throughout their systems. Local distribution companies—the companies that deliver gas from the interstate pipeline to the ultimate end-user—have a critical need for storage because they must provide gas on demand to residential end-users and other customers who lack the ability to switch to another fuel when gas is not available. According to the Energy Information Administration, as of December 31, 1993, a total of 103 operators were providing storage service in the United States. These operators included pipeline companies, local distribution companies, and independent marketers. Market hubs are areas where several pipelines connect, generally near a production area, storage field, or major market area. A relatively new phenomenon in the industry, hubs create central points where many buyers and sellers can come together to obtain natural gas and a variety of services. These services can be physical, such as transportation, storage, or the transfer of gas from one pipeline to another, or they can be contractual, such as the trading of titles to gas supplies. 
In theory, market hubs can improve the efficiency and flexibility of the interstate gas market by increasing producers’ and end-users’ access to each other and by reducing transaction costs when they make deals. According to FERC, as of July 1994, there were 19 market centers operating in the United States, and another 11 were scheduled to open by the end of 1995. Each of these hubs has an administrator who oversees its operation and performs a variety of functions, such as tracking the exchange of titles to gas supplies, invoicing customers for services, and allocating pipeline capacity and services at the hub when they are in short supply. In May 1994, FERC announced its policy on interstate pipeline companies’ gathering affiliates. In a series of seven orders, the Commission concluded that it does not have the authority to regulate the rates charged by affiliates. However, FERC added that it could use its existing authority over the rates interstate pipeline companies charge for gathering, transportation, and other services to reach a gathering affiliate if the affiliate and its parent pipeline company act together in a collusive and anticompetitive manner. This policy has generally been accepted by pipeline companies, local distribution companies, and end-users. While producers are generally reserving judgment on the new policy, several are concerned about its effect on their ability to negotiate fair agreements with gathering affiliates. FERC has traditionally included the costs of gathering services provided by interstate pipeline companies in the rates it approves for such companies. FERC does not regulate the rates for gathering services provided by other entities, such as producers. Most of these unregulated gatherers, who provide almost 70 percent of the gathering services in the United States, are free to negotiate with their customers on the rates, terms, and conditions of service. 
Since the early 1990s, several interstate pipeline companies have created affiliates to provide their gathering services. These pipeline companies asked FERC for permission to sell their gathering facilities to the new affiliates. The requests created a need for FERC to clarify its policy on gathering affiliates. On May 27, 1994, FERC issued a series of seven orders that, taken together, define its new policy on gathering. In these orders, FERC elaborated that it does not have the authority to regulate the rates, terms, and conditions of the gathering services provided by interstate pipeline companies’ affiliates. As a result, when pipeline companies sell their gathering facilities to their affiliates, the rates these affiliates charge will no longer be under FERC’s regulation. However, FERC also said it would do the following:

Require pipeline companies, before they sell their gathering facilities to an affiliate or unregulated third party, to negotiate new contracts with existing customers. If the pipeline company is unable to reach agreement with a customer, it must offer a “default contract” that reflects the rates, terms, and conditions of service offered by the independent gatherers in the region. If the customer refuses the default contract, it loses its guarantee of continued service. FERC imposed this condition to protect existing customers that had entered into arrangements with pipeline companies for gathering services expecting that these services would be regulated by FERC.

Assert jurisdiction over gathering affiliates if it finds, upon a customer’s complaint, that the affiliate and its parent pipeline company have acted together in a collusive and anticompetitive manner. For example, FERC could assert its jurisdiction, as part of its regulation of the pipeline company, if a gathering affiliate requires a customer to transport gas on the parent company’s pipeline. 
Under the new policy, FERC will regulate a gathering affiliate only if the affiliate acts with its parent company in an anticompetitive manner. In analyzing its jurisdiction, FERC asserted that sections 4 and 5 of the Natural Gas Act give it the authority to regulate gathering performed by natural gas companies (i.e., interstate pipeline companies) “in connection with” interstate transportation. Gathering affiliates, because they perform only a gathering function, are not natural gas companies as defined by the act. Thus, FERC reasoned that gathering affiliates are not under its jurisdiction. FERC determined, however, that it may assert jurisdiction when the pipeline company and its gathering affiliate act together to discriminate because they are then effectively acting as a single natural gas company involved in the interstate transportation of natural gas. The Interstate Natural Gas Association of America, the trade association that represents interstate pipeline companies, has stated that it is pleased with FERC’s new policy and believes the policy will promote competition and regulatory certainty. As a result of this policy, pipeline companies will be able to sell their gathering facilities to affiliates, which, in turn, can set rates that are competitive with those set by unregulated gatherers. According to an association official responsible for policy issues, many pipeline companies plan to sell their gathering facilities to either affiliates or independent third parties in response to the new policy. The representatives of local distribution companies and end-users we interviewed expressed little concern about FERC’s new policy on gathering. An official of the American Gas Association, which represents, among others, larger distribution companies, stated that the association has received no complaints from the local distribution companies among its members about the new policy. 
According to a representative of municipal distributors, smaller distribution companies have little interest in the issue of gathering. Distribution companies and the residential, commercial, and industrial end-users to whom they deliver gas are more concerned about the price of gas supplies, which is determined by the market. Because gathering rates affect the division of proceeds from gas sales, they concern only producers and gatherers. In contrast to other segments of the industry, producers generally believe that it is too early to determine the effects of FERC’s new policy. According to a representative of the Natural Gas Supply Association, the trade association representing major producers, several producers are concerned that they will not be able to get fair rates, terms, and conditions of service in their negotiations with pipeline companies or in default contracts. Although producers are generally pleased that FERC will review the sale of gathering facilities to affiliates on a case-by-case basis, several believe that unless FERC establishes clearer guidelines on the terms of the default contracts, the affiliates that have market power could impose significantly higher rates for gathering services for existing wells. In cases in which a producer and a gathering affiliate cannot reach a new agreement, producers would like to continue the rates, terms, and conditions of service that existed when they originally drilled the well. Some producers have asked FERC to reconsider its new gathering policy. A group of six major producers, known as the “Indicated Parties,” and the Independent Petroleum Association of America have filed motions with FERC objecting to the new policy and requesting a new hearing. 
In their grounds for rehearing, the Indicated Parties and the association contend, among other things, that (1) FERC should regulate gathering affiliates, (2) FERC erred in approving pipeline companies’ sale of gathering facilities to their affiliates without showing that the relevant gathering markets were competitive, and (3) the provision requiring default contracts needs reconsideration or clarification. In addition, the Indicated Parties provided FERC with an offer of settlement in one of the seven cases. In this offer, the Indicated Parties agreed to accept the sale of gathering facilities by the pipeline company to its affiliates if, among other things, FERC retracts its determination that it lacks jurisdiction over gathering affiliates. On November 30, 1994, FERC denied the requests for rehearing, but it also amended the requirements of default contracts. According to the new guidelines, when a pipeline company sells its gathering facilities to an affiliate or independent gatherer, existing customers will be able to purchase gathering services from the new provider at the current cost-of-service rates for a period of up to 2 years. FERC’s jurisdiction over storage is limited to the storage of gas transported in the interstate market. As a result, FERC has regulatory authority primarily over storage facilities owned by interstate pipeline companies. According to the Energy Information Administration, these facilities hold about 61 percent of the total gas stored in the United States. As in the case of gathering and transportation, FERC has traditionally set the rates for storage according to a storage operator’s cost of providing service. Since 1992, however, FERC has approved the use of market-based rates for storage services in six orders. In each case, FERC has required the storage provider to show that it lacks the power to set rates above competitive levels in the local storage market. 
To do so, the storage provider must demonstrate that there are good alternatives to its service. FERC has defined a “good alternative” as one that “is available soon enough, has a price that is low enough, and has a quality high enough to permit customers to substitute the alternative” for the storage provider’s service. None of the industry representatives we interviewed expressed concern over FERC’s use of market-based rates in areas where the storage market is competitive. According to a FERC official, storage customers expressed no objections in the six cases in which FERC has approved market-based rates. FERC currently has no regulations specifically aimed at market hubs. FERC regulates the rates charged by an interstate pipeline company for the services, such as transportation and storage, that it provides at a hub. However, the rates for these services are set in the same way—on the basis of a company’s cost of providing the service—as the rates for the services the pipeline company provides outside the market hub. According to an official with FERC’s Office of Pipeline Regulation, as of October 1994, a few pipeline companies had asked FERC to let them vary from these rates so that they can compete more effectively at market hubs. However, the rates they seek to use are not market-based. Instead, these rates, known as “market center rates,” are still derived from the existing cost-of-service rates, though they are generally lower than the full cost-of-service rates. FERC officials believe that it is too early in the development of hubs to determine what, if any, regulatory role the Commission should play or what rates it should approve. The structure and workings of market hubs are still evolving. As a result, according to FERC’s Director of the Office of Pipeline Regulation, FERC has not dealt with the issue. Most of the industry representatives with whom we spoke agreed that it is too early to determine how market hubs should be regulated, if at all. 
According to an official with the American Gas Association, none of its members have voiced complaints about operations at the hubs. Moreover, the association maintains that FERC should generally rely on market forces unless it finds compelling evidence of market failures. However, some marketers have expressed concern that some hubs are being administered by competing marketers. According to those concerned, these hub administrators may have an incentive to use their access to information about competitors’ deals at hubs to their competitive advantage. Others in the industry believe that conflicts of interest may not be a problem. They contend that the hub administrators will be reluctant to exploit their access to information because, if they do, no one will be willing to use their hub. As part of an overall strategy to work in a more collaborative manner in developing energy policy, DOE plans to participate in FERC and state regulatory proceedings. DOE officials say they will be sensitive to the states’ authority when interacting with state governments. Although DOE has not participated in a proceeding involving natural gas issues, it has participated in several FERC and state proceedings involving electric utility issues. DOE has other ongoing efforts, including sponsorship of conferences and workshops, intended to contribute to its overall strategy. DOE and FERC have also established a working group to increase mutual understanding on natural gas and oil issues. In October 1993, DOE established the Utility Commission Proceedings Participation Program as a mechanism through which it can participate in FERC and state regulatory proceedings on energy. This program provides for DOE’s participation when DOE’s technical and policy expertise could lead to a greater understanding of the available policy options. 
DOE’s participation in regulatory proceedings will consist primarily of submitting written comments and testimony and, in some cases, having DOE officials testify as expert witnesses. DOE announced in February 1994 that it would use its participation program as a means to implement its Domestic Natural Gas and Oil Initiative. This initiative, announced in December 1993, includes proposals for, among other things, reforming regulatory structures at both the federal and state level that may be inhibiting a more efficient use of natural gas and electric power. At the federal level, the initiative targets improving the use of gas pipeline capacity and encouraging the full use of the electric power transmission system. At the state level, the initiative focuses on potential regulatory reforms, such as revising pricing strategies for natural gas and electric power and ending subsidies for specific fuels. DOE officials responsible for the Department’s participation in regulatory proceedings say that they do not intend to be prescriptive or adversarial in their interactions with federal or state agencies. Rather, they stated that they want to draw on their technical expertise and act as advocates of particular policies on energy. DOE officials also pointed out that several of the Department’s key executive-level staff involved in this effort have extensive experience in federal and state energy regulation and thus have an appreciation of the issues of federal and state authority that frequently arise in the energy arena. Although DOE has yet to participate in a regulatory proceeding involving natural gas issues, it has participated in several FERC and state regulatory proceedings involving electric utilities. DOE officials explained that they have not yet decided on a strategy for participating in proceedings involving natural gas. DOE has participated in several FERC proceedings within the past year. 
For example, it submitted comments in two electric utility cases before FERC involving cost recovery issues resulting from the decommissioning of nuclear power plants. The state proceedings in which DOE has participated primarily involved states’ efforts to examine proposed changes to the regulation of electric utility companies. In some cases, DOE submitted written comments; in other cases, DOE officials appeared as expert witnesses before state utility commissions. In addition to its strategy for participating in regulatory proceedings, DOE’s other ongoing efforts may also help the Department work in a more collaborative manner with federal and state regulators in developing energy policy. These efforts include sponsoring annual conferences and participating in industry meetings and workshops. For example, since 1992 DOE has cosponsored an annual conference with the National Association of Regulatory Utility Commissioners to discuss issues facing the natural gas industry. To carry out the goals of its Domestic Natural Gas and Oil Initiative, DOE established a working group with FERC intended to facilitate discussions between the two agencies and allow a better understanding of the goals and objectives of each other’s programs and policies. However, because FERC is responsible for regulating electric and gas utilities, the working group will be restricted from discussing any proceedings ongoing before FERC. The working group has had one meeting at which the two agencies mainly provided status reports on their current and planned efforts involving electric, gas, and hydropower issues. Officials from both agencies who participated in the meeting expressed satisfaction with the working group and agreed that it should continue to meet. As requested, we did not obtain written agency comments on this report. 
However, we discussed the information in the report with various FERC officials, including the Director of the Office of Pipeline Regulation, the Director of the Office of Economic Policy, and the General Counsel. We also discussed the information in the report with DOE’s Director of Natural Gas Policy and officials responsible for DOE’s participation in FERC and state regulatory proceedings. To ensure that we characterized industry’s views accurately, we also discussed information in the report with officials from the industry associations mentioned in this letter. FERC, DOE, and industry officials all agreed with the factual material presented; they suggested minor technical changes that we incorporated where appropriate. We performed our work between March and November 1994 in accordance with generally accepted government auditing standards. Appendix III describes the scope and methodology of our review in detail. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to congressional energy committees and other interested parties. We will also make copies available to others on request. If you or your staff have any questions or need additional information, please call me at (202) 512-3841. Major contributors to this report are listed in appendix IV. Under sections 4 and 5 of the Natural Gas Act, as amended, the Federal Energy Regulatory Commission (FERC) has traditionally regulated the rates, terms, and conditions of all services provided by interstate pipeline companies in connection with the interstate transportation of natural gas. In Northern Natural Gas Company v. FERC, a federal appeals court interpreted the act so that FERC’s authority extends to regulating the gathering services provided by interstate pipeline companies if the gathering is performed in connection with the interstate transportation of natural gas. 
According to a report sponsored by the Interstate Natural Gas Association of America, interstate pipeline companies and their affiliates provide about 30 percent of all the gathering services in the United States. FERC sets the rates for transportation and gathering services on the basis of a pipeline company’s cost of providing those services, which consists of the cost of facilities, expenses for operation and maintenance, and a reasonable return on investment. This approach is known as cost-of-service regulation. Before FERC’s Order 636 (described below), the costs of gathering services were incorporated into the rates that pipeline companies charged for sales and transportation service. In contrast, the rates charged for the gathering services provided by producers and other entities are not under FERC’s regulation. Generally, these providers can negotiate contracts with customers that state the rates, terms, and conditions of their gathering services. Unregulated gatherers provide most of the remaining 70 percent of the gathering services performed in the United States. In a 1992 order, FERC articulated a policy on pipeline companies’ gathering affiliates. In Northwest Pipeline Corporation, the Commission relied on an interpretation by the federal appeals court in Northern Natural Gas Company v. FERC to assert that its jurisdiction extended to pipeline companies’ gathering affiliates if the affiliates perform the services in connection with the interstate transportation of natural gas. However, the Commission added that it would not exercise its jurisdiction to regulate the rates charged by gathering affiliates except in response to a customer’s complaint that an affiliate was acting in an anticompetitive manner. This approach was referred to in the industry as “light-handed” regulation. 
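The cost-of-service approach described above amounts to straightforward arithmetic: an annual revenue requirement built up from operating expenses, depreciation on facilities, and an allowed return on investment, which is then spread over expected throughput to produce a unit rate. The following is a minimal sketch of that arithmetic; all figures, the function name, and the throughput-division step are illustrative simplifications, not a description of any actual FERC rate case.

```python
# Illustrative sketch of cost-of-service ratemaking. All figures are
# hypothetical; actual rate cases involve many more cost categories,
# tax adjustments, and a formally determined rate base.

def cost_of_service_rate(facilities_cost, annual_om_expense,
                         allowed_return, depreciation_rate,
                         annual_throughput_mmbtu):
    """Annual revenue requirement (O&M + depreciation + return on
    investment) spread over expected throughput to yield a unit rate."""
    depreciation = facilities_cost * depreciation_rate
    return_on_investment = facilities_cost * allowed_return
    revenue_requirement = (annual_om_expense + depreciation
                           + return_on_investment)
    return revenue_requirement / annual_throughput_mmbtu

# A hypothetical gatherer with $100 million in facilities, $8 million in
# annual O&M, a 10 percent allowed return, 3 percent depreciation, and
# 200 million MMBtu of annual throughput:
rate = cost_of_service_rate(100_000_000, 8_000_000, 0.10, 0.03, 200_000_000)
print(f"${rate:.3f} per MMBtu")  # $0.105 per MMBtu
```

The point of the sketch is that under cost-of-service regulation the rate follows mechanically from the provider’s costs, whereas the market-based rates discussed below are set by competition among providers.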
According to an official representing pipeline companies, as a result of this decision, many pipeline companies petitioned FERC to be allowed to sell their gathering operations to affiliates so that they could set their own rates and better compete with unregulated gatherers. However, some pipeline companies were reluctant to sell their gathering facilities because they believed FERC did not clearly define when it would assert jurisdiction under its new policy of light-handed regulation. Pipeline companies and their affiliates could not be sure what rates and practices would be acceptable to FERC. In addition, gas producers that purchased gathering services from pipeline companies were concerned that, under light-handed regulation, pipeline companies would transfer their gathering facilities to affiliates to escape FERC’s regulation and then raise their rates substantially. In contrast to pipeline companies, which support the deregulation of gathering, producers wanted FERC to regulate the rates charged by affiliates in the same manner as it regulates pipeline companies’ transportation and gathering rates. Also in 1992, FERC announced Order 636, which required all pipeline companies to separate, or “unbundle,” the rates they charge for various services, including gathering. This separation was designed to increase competition and efficiency in the industry by enabling customers to purchase only the services they desire. As a result of this order, pipeline companies for the first time began to charge rates for gathering services that were independent of the rates they charged for interstate transportation. This heightened the pipeline companies’ desire to sell their gathering operations to affiliates that could set their own rates. According to industry officials, because of Order 636 and concerns about FERC’s Northwest Pipeline decision, both pipeline companies and producers wanted FERC to review and clarify its regulatory authority over gathering affiliates. 
On May 27, 1994, FERC restated its policy on gathering affiliates in a series of seven orders. In these orders, FERC consistently stated that it regulates the rates charged for gathering services only for gathering performed by pipeline companies or when the pipeline company and its affiliate engage in collusive and anticompetitive practices. As stated above, Order 636 separated the rates charged by pipeline companies for gathering and interstate transportation services. In this new context, FERC elaborated in the orders that it does not ordinarily have regulatory authority over pipeline companies’ gathering affiliates.

The seven May 1994 gathering orders are:

Williams Natural Gas Co., 67 FERC ¶ 61,252 (1994)
Superior Offshore Pipeline Co., 67 FERC ¶ 61,253 (1994)
Amerada Hess Corp., 67 FERC ¶ 61,254 (1994)
Mid-Louisiana Gas Co., 67 FERC ¶ 61,255 (1994)
Trunkline Gas Co., 67 FERC ¶ 61,256 (1994)
Arkla Gathering Services Co., 67 FERC ¶ 61,257 (1994)
Eastern American Energy Co., 67 FERC ¶ 61,258 (1994)

The six orders in which FERC approved market-based rates for storage are:

Richfield Gas Storage System, 59 FERC ¶ 61,316 (1992)
Petal Gas Storage Company, 64 FERC ¶ 61,190 (1993)
Transok, Inc., 64 FERC ¶ 61,095 (1993)
Bay Gas Storage Company, Ltd., 66 FERC ¶ 61,354 (1994)
Koch Gateway Pipeline Company, 66 FERC ¶ 61,385 (1994)
Avoca Natural Gas Storage, 68 FERC ¶ 61,045 (1994)

The Chairman, Environment, Energy, and Natural Resources Subcommittee, House Committee on Government Operations, requested that we assess recent regulatory changes affecting three aspects of the industry: gathering, storage, and market hubs. In addition, the Chairman asked us to review the Department of Energy’s (DOE) plans to intervene in energy-related regulatory proceedings in the states and the extent to which DOE plans to interact with FERC in carrying out such interventions. To obtain information on FERC’s policies on gathering, storage, and market hubs, we reviewed existing industry literature and relevant FERC orders and documents. We also interviewed FERC and industry officials about these policies. 
To learn the opinions of various industry segments on FERC’s regulatory approaches, we reviewed comments filed by industry officials with FERC. We also spoke to various FERC officials and representatives of several natural gas trade associations, including the American Gas Association, the American Public Gas Association, the Independent Petroleum Association of America, the Interstate Natural Gas Association of America, the Natural Gas Supply Association, and the National Association of Utility Consumer Advocates. To examine how DOE plans to intervene in state regulatory proceedings, we reviewed various DOE documents and spoke to DOE officials and officials from several state utility commissions. We also spoke to DOE and FERC officials about how DOE may interact with FERC in implementing its strategy on participation. Jackie A. Goff, Senior Counsel
Pursuant to a congressional request, GAO reviewed how producers, pipeline companies, and end-users view the regulatory changes affecting the collection, storage, and marketing of natural gas, focusing on the: (1) Department of Energy's (DOE) plans to intervene in energy-related regulatory proceedings; and (2) extent to which DOE plans to interact with the Federal Energy Regulatory Commission (FERC) in such interventions. GAO found that: (1) FERC will use its authority over pipeline companies to regulate an affiliate if the parent company and the affiliate act together in a collusive manner; (2) interstate pipeline companies, local distribution companies, and end-users find the new FERC policy acceptable, while producers believe that it is too early to determine the effectiveness of the new policy; (3) FERC has determined that competition is sufficient to allow storage operators to charge market-based rates; (4) no segment of the industry has objected to the use of market-based rates in locations where the storage market is competitive; (5) while FERC sets the rates for the services that interstate pipeline companies provide, FERC has agreed to allow some pipeline companies to vary their rates to compete better; (6) according to FERC officials and industry analysts, market hubs are still in the early stages of development, and it is still too early to determine what, if any, regulatory role FERC will have; (7) DOE plans to intervene or participate in energy-related regulatory proceedings when it believes its participation can result in a more comprehensive assessment of energy policy options; and (8) although FERC does not coordinate its regulatory activities with DOE, the two agencies have established a working group to ensure that their staffs interact and are aware of the goals and objectives of each other's programs and policies.
JMD has overall responsibility for managing the working capital fund and AFF. Justice’s working capital fund was created by Congress on January 2, 1975. The fund is authorized to maintain moneys from four distinct sources, or functions (see table 1). The first and primary function of the fund is to finance, on a reimbursable basis, administrative shared services provided by JMD to other components of the department and other federal agencies. The second function of the working capital fund is to collect up to 3 percent of funds collected pursuant to civil debt collection litigation activities into the fund. A third function of the working capital fund is to collect up to 4 percent of earnings from its shared services provision. Finally, the working capital fund’s fourth function is to capture expired departmental unobligated balances into the working capital fund’s Unobligated Balance Transfers (UBT) account. Because the working capital fund is a no-year account, all amounts earned or collected by the fund are available without fiscal year limitation to be used for specific authorized purposes. For example, amounts from three of the four working capital fund functions may be used for capital equipment investments and financial system improvements. The largest portion of the working capital fund comes from charges for centralized administrative and infrastructure support services and functions collected on a reimbursable basis from Justice components. The shared services provided by the working capital fund are generally commercial functions, such as data processing, publications, building services, financial operations, employee data, telecommunications, property management, and space management (see working capital fund services and support in fig. 1). While most Justice components use the working capital fund to obtain administrative shared services, there is no statutory requirement that they do so. For fiscal year 2010, Justice’s largest customers were the U.S. 
Attorney’s Office, JMD, the Bureau of Prisons, the Federal Bureau of Investigation, and the Drug Enforcement Administration (see customers in fig. 1). Some services are also available to other federal agencies. For example, the Department of Homeland Security (DHS) was the largest non-Justice customer of the working capital fund. The working capital fund received $38 million from DHS in fiscal year 2010 for information technology services. However, JMD officials told us that DHS is exiting the working capital fund and its remaining agreements with the fund are expected to be completed by fiscal year 2013. To set and review rates for working capital fund services, JMD develops, in the context of Justice’s budget formulation process, an annual 2-year operating plan. As a part of developing the operating plans, JMD sets the rates for its services using one of three strategies. The “dollar-per-widget” strategy aligns rates with the cost of the service provided. JMD creates an internal cost schedule of these services that lists, for example, how much a photocopy or scan job costs. This strategy is designed to bill customers for the amount of service actually used. The pass-through charge strategy is used to set rates for services that the working capital fund acquires from another provider. Customer rates are based on costs as determined by the non-Justice service providers, such as the flexible spending account services provided by the Office of Personnel Management and rent charges from the General Services Administration. The allocation strategy is applied when actual usage is more difficult to predict, such as for information technology security, e-government services, and acquisition support. Allocations are determined in various ways, such as a percentage of full-time equivalents (FTE), a percentage of budget authority, or a weighted average of both. 
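The allocation strategy just described can be sketched in a few lines: each customer’s bill for a service is its share of the service’s total estimated cost, where the share is a weighted average of the customer’s fraction of FTEs and its fraction of budget authority. The component names, weights, and dollar figures below are hypothetical illustrations, not Justice data, and the equal weighting is an assumed example of the "weighted average of both" the report describes.

```python
# Hypothetical sketch of the "allocation" rate-setting strategy: cost
# shares are a weighted average of FTE share and budget-authority share.
# All names, weights, and figures are illustrative.

def allocate_costs(total_cost, customers, fte_weight=0.5):
    budget_weight = 1.0 - fte_weight
    total_fte = sum(c["fte"] for c in customers.values())
    total_budget = sum(c["budget"] for c in customers.values())
    bills = {}
    for name, c in customers.items():
        share = (fte_weight * c["fte"] / total_fte
                 + budget_weight * c["budget"] / total_budget)
        bills[name] = total_cost * share
    return bills

customers = {
    "Component A": {"fte": 3000, "budget": 500_000_000},
    "Component B": {"fte": 1000, "budget": 300_000_000},
}
bills = allocate_costs(8_000_000, customers)
# Component A's share: 0.5*(3000/4000) + 0.5*(500/800) = 0.6875,
# so it is billed $5.5 million of the $8 million estimated cost.
```

Because the shares sum to 1, the billed amounts always recover the full estimated cost, which is consistent with the self-sufficiency principle discussed later in this report.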
Additionally, JMD staff factor in data such as recent cost information and expected use of a service for the coming year. To calculate the estimated cost for a specific customer, JMD reviews the customer's actual use in prior years and expected use for the upcoming fiscal year. From that, JMD calculates a percentage to charge each customer that will cover the cost of providing that level of service. All customers using a service are charged a percentage of total estimated costs throughout the year. JMD officials told us that they generally apply the dollar-per-widget and pass-through charge strategies in setting shared services rates and that they use the allocation strategy only when the other strategies would not work well.

Justice established its eight-member Customer Advisory Board (CAB) in 1994 to improve customer satisfaction with the working capital fund. The CAB also advises JMD on fund management issues. The eight components represented on the CAB are the Bureau of Alcohol, Tobacco, Firearms and Explosives; Bureau of Prisons; Drug Enforcement Administration; Federal Bureau of Investigation; General Legal Activities; U.S. Attorneys; U.S. Marshals Service; and Office of Inspector General. JMD selected these eight components to be members of the CAB because they were the working capital fund's largest customers. JMD convenes an annual meeting with CAB members at the beginning of each fiscal year to present the updated operating plan and selected purchases for the year.

We have previously identified the following four key operating principles to guide the management of working capital funds. For further information about the four key principles and their underlying components, see figure 2. 
Clearly delineate roles and responsibilities: Appropriate delineation of roles and responsibilities promotes a clear understanding of who will be held accountable for specific tasks or duties, such as authorizing and reviewing transactions, implementing controls over working capital fund management, and helping ensure that related responsibilities are coordinated. In addition, this reduces the risk of mismanaged funds and of tasks or functions "falling through the cracks." Moreover, it helps customers know whom to contact if they have questions.

Ensure self-sufficiency by recovering the agency's actual costs: Transparent and equitable pricing methodologies allow agencies to ensure that the rates charged recover the agencies' actual costs and reflect customers' service usage. If customers understand how rates are determined or changed—including the assumptions used—they can better anticipate potential changes to those assumptions, identify their effect on costs, and incorporate that information into budget plans. A management review process can help ensure that the methodology is applied consistently over time and provides a forum to inform customers of decisions and discuss them as needed.

Measure performance: Performance goals and measures are important management tools applicable to all operations of an agency, including the program, project, or activity levels. Performance measures and goals could include targets that assess fund managers' responsiveness to customer inquiries, the consistency of the application of the fund's rate-setting methodology, and billing error rates. Performance measures that are aligned with strategic goals can be used to evaluate whether and, if so, how working capital fund activities are contributing to the achievement of agency goals. A management review process comparing expected to actual performance allows agencies to review progress toward goals and potentially identify ways to improve performance. 
Build in flexibility to obtain customer input and meet customer needs: Opportunities for customers to provide input about working capital fund services or to voice concerns in a timely manner enable agencies to regularly assess whether customer needs are being met or have changed. This also allows agencies to prioritize customer demands and use resources most effectively, adjusting working capital fund capacity up or down as business rises or falls.

By incorporating these principles in written guidance, agencies promote consistent application of management processes and provide a baseline for agency officials to assess and improve those processes. Moreover, agencies can use guidance as a training tool for new staff and as an information tool for customers, program managers, stakeholders, and reviewers.

JMD has well-established policies and procedures for tracking and monitoring each of the four working capital fund functions to adhere to authorized purposes. JMD uses its financial management system, the Financial Management Information System (FMIS), to track moneys by project codes to distinguish among the different working capital fund functions. Additionally, Justice's written policies direct the head of each component to maintain a financial accounting system with internal controls in place to ensure effective management and disbursement of federal funds. FMIS also supports the departmentwide fund control system; it is designed to restrict both obligations and expenditures from each appropriation or fund account to the amount available for obligation or expenditure. Since 1984, all working capital fund moneys have been identified and tracked using reimbursement code numbers so that JMD can identify the source of all funds and monitor obligations established against the working capital fund's partitioned subaccounts related to each of the fund's four functions. 
Balances associated with each function are tracked in separate partitions in the working capital fund and remain available until expended. This ensures that the funds associated with the four working capital fund functions are tracked and managed so that they are used in accordance with the fund's authorities.

JMD structures its reimbursable agreements with customers in a way that facilitates adherence to the Economy Act—the statutory authority underlying most of the shared services orders received from customers. For example, Justice aligns its agreements to coincide with a single fiscal year and has policies against accepting funds in advance from federal customers (instead, Justice generally receives reimbursements after providing shared services). This helps Justice and its customers comply with the Economy Act's deobligation requirements and mitigates the risk of using appropriated funds when they are not legally available.

Justice's policies also require that JMD establish an accurate and reliable tracking system to monitor, on an ongoing and consistent basis, obligations established against reimbursable agreements for billing purposes. Justice has issued guidance on how JMD should manage payments to the working capital fund so that those amounts are accurately recorded and controlled and that anticipated and actual reimbursements for goods or services provided are properly recorded. Further, Justice's policies govern how JMD controls and monitors shared services funds and describe the responsibilities of both the provider and the customer. For example, these policies task JMD, in its role as the service provider, with monitoring reimbursements anticipated, earned, billed, unbilled, and collected in relation to the agreed-upon amount. 
Customers are responsible for monitoring the status of reimbursable services performed but not yet billed to ensure that obligations recorded are sufficient to pay for the shared services they receive. In responding to a draft of this report, Justice officials also noted that the service providers convey the status of the reimbursable agreements to the customers quarterly.

To clearly delineate roles and responsibilities, JMD clearly defines key areas of authority, responsibilities, and roles within the working capital fund. This allows customers to know whom to contact if they have questions. Justice describes this information in a departmentwide funds control order. These delineated roles and responsibilities are posted on the working capital fund web page and are available to both internal and external customers as well as the general public. Key working capital fund duties and responsibilities are spread among multiple individuals and offices. For example:

The Assistant Attorney General for Administration (AAG/A) is the fund's general manager and approves all final decisions and major initiatives affecting customers.

The Deputy Assistant Attorney General, Controller (DAAG-Controller), is the financial manager of the working capital fund and is responsible for overseeing budgets.

Staff directors ensure service delivery to customers, develop operating plans and rate structures, produce customer billings, and are responsible for day-to-day fund management.

Budget staff review and monitor all working capital fund budgets and make recommendations about the funding initiatives and rate changes requested by staff directors.

JMD has also clearly defined the responsibilities for the administrative control of working capital funds. JMD has established policies and guidance regarding roles and responsibilities for obligating and expending funds. 
Specifically, financial management policies state that the AAG/A is the department's Chief Financial Officer, with responsibilities that include direction and oversight of JMD's financial procedures, practices, operations, systems, and internal controls. In addition, component heads are responsible for accurate, timely, and complete financial data. These written roles and responsibilities specify how key duties are spread among multiple individuals and can help customers understand who does what. The information is also sufficiently detailed to be useful for internal JMD purposes, such as training and succession planning.

To ensure self-sufficiency by recovering the agency's actual costs, another key working capital fund operating principle, JMD generally charges rates that cover the total cost of providing shared services. It bases customer charges on both estimated direct and indirect costs. JMD estimates direct costs based on historical data trends and the actual costs of providing a service. JMD uses an overhead allocation methodology to determine the administrative costs of providing shared services, which are then spread to each shared services account and collected from customers as part of the overall rate structure.

As part of its annual 2-year operating plan process, JMD ensures that rates remain aligned with the total costs of operations. JMD staff review the rates by conducting a line-by-line review of each shared services account to determine how costs will change for the coming year. JMD factors forecasted revenue and rate changes, based on historical and market data, into its shared services rates. JMD's strategy is to recover total cost at the fund level. Although JMD's goal is for each shared services account to break even, JMD officials said that some lines of business generate income while others operate at a loss. 
For example, officials told us that certain services, such as data center services, have recovered more than their actual costs, while others, such as audiovisual and photography services, do not always fully recover costs. Recovering costs at the fund level results in some amount of cross-subsidization among services, which can help ensure that the fund remains solvent.

During our focus groups and interviews, customers said that they find the working capital fund's shared services to be valuable. For example, customers cited the breadth of services offered as well as the experience and knowledge of shared services staff as key strengths of the fund. Further, two customers said that they appreciated the convenience and ease of having these services provided in-house. However, as we discuss below, CAB members we spoke with were concerned about their limited advisory role, and customers in our focus groups were concerned about how JMD communicates with customers about working capital fund rates and billing information. These customers wanted more opportunities for substantive, two-way communication with the working capital fund staff directors. JMD officials explained that each working capital fund staff director has his or her own way of communicating and interacting with customers and acknowledged that some may be better at providing customer support than others.

Customers had different perspectives on whether shared services rates were fair. Two customers told us that they believe the working capital fund negotiates the best rates on their behalf, but one of these customers pointed out that his component lacked information that would assure staff of this. Other customers said that without information proving otherwise, they assume that they pay more than what it costs JMD to provide the service. 
Customers also noted that they would like more transparency related to the earnings that JMD generates from rates set using the allocation strategy and how those earnings affect actual shared services charges. For example, one customer noted that when determining the rates for e-mail services—whose rates are determined using the allocation strategy—basing rates on total FTE counts is less accurate than basing them on the actual number of staff using the service. Generally, customers indicated that they would have a greater sense of comfort with shared services rates if they better understood what the rates were based on.

JMD officials expressed surprise at these concerns. They said that information sharing on rates and rate structures happens regularly on an informal basis and in forums such as monthly budget officer meetings, and that information about shared services rates is available in various places depending on the service. For example, they said that some shared services rate information is available on Justice's intranet pages and that the cover memos accompanying the reimbursable agreements for certain services contain substantial amounts of information about how the rates are set each year. Further, with respect to e-mail rates, they said that they base the charges on the number of active e-mail accounts at the end of the prior fiscal year and that they adopted this allocation strategy in response to customer feedback. Specifically, officials said that in the past, JMD billed e-mail charges based on monthly counts of active e-mail accounts but that CAB members found the variability in billings too difficult to plan for and instead preferred paying a set monthly charge. JMD officials told us that, as a result, the working capital fund now charges for e-mail services based on the number of active e-mail accounts at the end of the prior fiscal year and adjusts its counts annually. 
Currently, JMD officials directly communicate the basis for shared services rates only with CAB members at the annual meeting and do not have a formalized mechanism to do so with customers not on the board. In response to a question about how non-CAB customers would receive rate information, JMD officials explained that this information is included in the operating plan. Further, they said that one board member is responsible for sharing relevant portions of the operating plan with all of the department's direct reports and other customers. However, JMD officials did not know whether this process is working as intended. It is also unclear whether the CAB member with the responsibility for sharing information with non-CAB members has the necessary knowledge of rate structures and changes as they apply to other components.

Transparent and equitable pricing methodologies allow agencies to ensure that rates charged recover agencies' actual costs and reflect customers' service usage. If customers understand how rates are determined, they can better anticipate changes to assumptions, identify their effect on costs, and incorporate this information into their budget planning. Absent regular opportunities for a substantive two-way exchange of information, miscommunications such as those described above are unlikely to be resolved.

Most customers we spoke with—both CAB members and nonmembers—said that they want more opportunities for a substantive exchange of information with JMD. For example, the majority of CAB members we met with said that because they historically have not received information about changes to shared services rates prior to the annual meeting, they feel unprepared to have a substantive discussion about the operating plan and to provide input on other management issues. JMD officials responded that the annual meeting is not intended to be the only or primary vehicle for engaging their customers. 
Rather, they view the CAB meeting as the first step in the communications process and see the annual board meeting as an executive-level overview of fund issues for the coming year. They also said that they have always provided opportunities for and encouraged feedback and questions about the materials after the meeting, and that CAB members sometimes provide written and oral responses.

JMD officials said that although staff directors occasionally provide some component-specific rate information upon request prior to the annual meeting, they have historically not provided advance copies of the operating plan, proposed rates, or both, for three reasons. First, JMD officials explained that it is difficult to provide that information early because of the timing of when the plan is finalized and when the CAB meetings must occur. JMD officials said that they begin the operating plan process late in the fiscal year to ensure that updated data are available to adjust rates for the coming year. At the same time, the meeting needs to occur early in the fiscal year so that CAB members can approve the operating plan, which includes the updated shared services rates that will be used to renew reimbursable agreements with customers. Second, officials were concerned about sharing a draft plan that had not been finalized. Lastly, JMD officials were concerned that if they provided the information in advance, CAB members would focus too exclusively on component-specific details and limit the group's ability to engage in a high-level discussion about the fund.

This year, in response to discussions about preliminary observations from focus groups conducted as part of our review, JMD provided CAB members with the operating plan about a week before the annual meeting. A JMD official said that one member acknowledged the usefulness of receiving materials in advance. 
Further, the JMD official noted that there were fewer questions about the operating plan than in past years but could not directly attribute this to having sent the operating plan out ahead of time.

CAB members also want more substantive two-way communications during the board meetings. Board members told us that the structure of the annual CAB meeting does not allow for this type of exchange. They said that because most of the meeting consists of briefings by JMD, there is limited opportunity for members to ask questions or provide input on fund operations. JMD officials told us that one way they solicit the opinions of CAB members is by asking them to vote on whether certain large investments should be made in the coming year, though they also acknowledged that CAB members have not voted on many issues in recent years. CAB members, however, do not view voting as a means for substantive input, since the votes are on very specific issues that do not relate to how the working capital fund is managed.

JMD has no formal venue for communicating with non-CAB customers; however, JMD officials told us that customers have a variety of avenues to learn more about the shared services they purchase. For example, JMD staff said that they meet monthly with the executive officers and budget officers, that they attach cover memos to the reimbursable agreements that contain information about the rates and services, and that general information about the shared services and working capital fund is available on Justice's intranet site. Nevertheless, customers want more opportunities to learn about upcoming service enhancements or changes. JMD officials told us that they have taken steps, such as those mentioned above, to improve communications with customers and that they remain committed to doing so. 
Although communication is clearly a shared responsibility between the customer and the shared services provider, effectively communicating with customers involves sharing relevant analysis and information as well as providing opportunities for customer input. Agencies that do not communicate effectively with stakeholders miss opportunities for meaningful feedback that could affect the outcome of changes in both rates and program implementation.

Customer experiences with getting clear, timely, well-explained bills for working capital fund services are mixed. On the one hand, one customer noted that the library service provides clear, detailed, and complete billing information that is easily accessible online. The customer explained that such information helped components fulfill their bill-paying and audit responsibilities. On the other hand, based on our review of customer bills for other shared services and information gathered during our focus groups, we found that other shared services accounts do not always provide enough information for customers to understand the basis for actual charges or to fulfill bill-paying and audit responsibilities. Our review of billing statements from various shared services accounts revealed varying levels of detail. Some bills had detailed information specifying the basis of every charge; however, one bill included an account service fee of over $20,000 without any explanation. Similarly, during our focus groups, finance staff responsible for paying for the shared services provided noted that billing adjustments sometimes appear without any explanation. Further, they said that JMD does not always provide complete billing information in a timely manner, especially in cases where customers are billed for a different amount than they had expected to pay at the beginning of the year. 
This inhibits customers' ability to anticipate their actual charges at the end of the year and undermines their ability to properly account for these costs. Customers also said that the follow-up necessary to obtain more information on these charges is time consuming and resource intensive.

Some customers do not receive complete, timely billing information because they do not always provide JMD with contact information for the individuals responsible for paying bills. Working capital fund account managers told us that they primarily communicate information such as rates, projected charges, and periodic reports to the points of contact listed in the interagency agreements. Customers can identify up to two points of contact in these agreements. However, JMD officials noted that while customers sometimes include program staff as the points of contact, finance staff contacts are not always identified. JMD officials said that they expect the designated points of contact to pass information along to the right people within the components, as appropriate. JMD budget officials acknowledged that communication challenges exist within the components and that the information may not be getting to the appropriate staff. However, they also noted that it is the customer's responsibility to communicate billing information internally to its finance staff. While this is not an unreasonable expectation, we believe that helping to ensure that the right information gets to the right people at the right time is part of providing good customer service.

JMD does not systematically assess customer satisfaction with its services. A JMD official explained that this is because officials rely on JMD staff directors to gather customer feedback at a frequency appropriate for their specific services. A working capital fund staff director we spoke with said that JMD solicited customer input on an informal basis and had conducted surveys at customers' request. 
The surveys we reviewed requested customer feedback on measures such as satisfaction and timeliness of services provided, as well as whether improvements are needed. Absent a formal mechanism for customers to provide regular, timely feedback about working capital fund services, JMD cannot sufficiently assess whether customer needs are being met or have changed. As we have previously reported, establishing performance measures and goals for shared services is a critical management tool that can help an agency understand whether each of the working capital fund services it provides meets customer needs.

JMD has not assessed its shared services to know whether they provide a good value to customers and therefore has not shared information about the cost-effectiveness of its services with customers. In our focus groups, customers said that although they expect the shared services to offer them economies of scale—and customers assume that they are in fact getting a good value—JMD has not provided data that demonstrate this. Customers explained that having this information is especially important in light of the tight fiscal conditions they expect to face in the foreseeable future. In fact, one customer noted that his staff will be evaluating whether the shared services purchased by his component are cost effective. Providing information about the cost-effectiveness of shared services would also help JMD provide better customer service, in keeping with the President's efforts to streamline and improve service delivery. Further, without conducting analysis to ensure that working capital fund services are a good value, JMD cannot use performance information to improve its own operations. Lastly, data on the cost-effectiveness of shared services can help JMD customers meet the determination requirement of the Economy Act. 
When ordering services under the Economy Act, customers—as ordering agencies—must determine that the order is in the best interest of the government and that the services cannot be procured as conveniently or inexpensively by contracting directly with the private sector. Although JMD, as the performing agency, is not required to provide information to customers to help them make this determination, it has a business interest in helping other Justice components, which are the bulk of the working capital fund's customers, comply with these requirements.

Performance measures that are aligned with strategic, departmentwide goals can facilitate assessments of whether working capital fund activities are contributing to agency goals. JMD tracks and monitors the performance of its shared services provision on a limited, ad hoc basis. For example, JMD tracks workload measures, such as the number of personnel actions completed, the number of transactions processed, and the computer processing unit hours available. However, JMD does not have measures to assess how effectively it manages the fund, such as whether managers are responsive to customer issues on rates or billing—two areas with which customers have expressed concern.

A fiscal year 1997 financial audit of the fund tasked account managers with outlining major objectives and developing performance measures for the working capital fund. However, JMD officials told us that this had not been accomplished for all the department's shared services accounts before fiscal year 2007, when Justice rolled the financial, performance, and accountability audits of the working capital fund into Justice's audit of the Offices, Boards, and Divisions (OBD). Accordingly, JMD officials told us that the working capital fund no longer receives its own audited financial statements; instead, Justice develops performance measures for the OBD, under which the working capital fund audits were consolidated. 
This audit approach does not provide JMD with an opportunity to specifically measure working capital fund-level performance. In its agency comments, Justice clarified that while the working capital fund is part of a broader audited financial statement, performance measures are continually tracked and maintained through the department's Quarterly Status Report process during budget execution activities. Nevertheless, as we noted earlier, the workload measures that are tracked do not assess whether the fund is effectively managed, which is a key operating principle for working capital funds.

Since the fund's creation in 1975, changes in the work environment, technologies, budget conditions, agency needs, and long-term efficiencies have affected how JMD provides shared services to its customers. Therefore, opportunities exist for JMD to evaluate whether the working capital fund provides shared services efficiently and whether the services are aligned with current departmental needs. For example, customers told us that while they need most services provided by the working capital fund, JMD has required them to use some services despite the customers' ability to provide these services themselves. Specifically, one customer said that although her component had received appropriations to develop security training, it was required to purchase the same training from the working capital fund a few years later. She questioned whether components could have provided this training more cheaply and effectively than JMD. Another customer said that although his component owned and preferred its own audio equipment, it was required to use speakers and microphones provided by the working capital fund whenever events were held in the main Justice building. Both customers stated that JMD should assess whether the working capital fund should continue to provide those services for all components. 
If available, specific working capital fund-level performance information would allow JMD to regularly compare actual performance with planned or expected performance. Making adjustments to fund management and services, as appropriate, through a corresponding management review process could help JMD achieve the efficiencies that working capital funds were designed to produce, potentially freeing up resources that could be realigned for other departmental initiatives. Further, such a review could also allow JMD to reassess functions to ensure that the working capital fund continues to provide the critical underlying infrastructure and support that allow other Justice components to perform their primary functions. Performance measures that are aligned with strategic goals can be used to evaluate whether and, if so, how working capital fund activities are contributing to the achievement of agency goals and departmentwide crosscutting initiatives.

Justice has the authority to capture excess unobligated balances into the working capital fund and AFF to fund various departmental priorities. These balances are available until expended. Specifically, the AFF balance—known as the Super Surplus—may be used for any authorized law enforcement purpose, while the working capital fund's Unobligated Balance Transfers account—known as the UBT—may be used for capital investments or administrative purposes. According to the AAG/A, who has responsibility for managing these authorities as Justice's Chief Financial Officer, the working capital fund's and AFF's authority to retain and use transferred excess unobligated balances is a tremendous benefit for the department. He considers these authorities to be part of a suite of financial tools available to manage projects to meet Justice priorities. Excess unobligated balances from accounts across the department can be transferred into the working capital fund's UBT. 
This account consists of moneys from expired Justice appropriation balances that are not needed to cover obligations or other adjustments and are about to be canceled. Excess unobligated balances in AFF can be transferred into the Super Surplus account. The Super Surplus amounts include prior-year declared excess unobligated balances. The Assets Forfeiture Management Staff, in conjunction with JMD budget staff, determine the amounts needed to (1) maintain AFF solvency by covering anticipated forfeiture-related expenses, (2) ensure a reserve for pending equitable forfeited assets and third-party payments with partners and victims (referred to as major sharing reserves), and (3) retain funding to cover rescissions. Any remaining funds can be declared as excess unobligated balances and used to increase the Super Surplus balance. Justice leadership uses a four-step process to make final decisions on how to use the working capital fund’s UBT and AFF’s Super Surplus.
1. When excess unobligated balances are available, Justice components submit requests for funds to JMD. These requests must provide sufficient justification to allow senior Justice officials to make informed decisions about the use of these funds.
2. JMD budget staff consider each funding request in light of the priority resource needs of the department and the authorized purposes for which UBT and Super Surplus balances are available. JMD budget staff present their recommendations to the AAG/A for review and approval.
3. The AAG/A, with input from the Attorney General and other departmental leaders, makes the final decision on how to allocate these balances.
4. Before using the excess unobligated balances, JMD notifies OMB and the House and Senate Appropriations Committees’ Commerce, Justice, and Science Subcommittees of how much it will use from the UBT and Super Surplus and for what purpose.
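The three reserve set-asides described above imply a simple arithmetic determination. As an illustration only (the function name, variable names, and dollar figures below are hypothetical, not Justice's actual accounting system or data), the amount that could be declared excess might be sketched as:

```python
# Hypothetical sketch of the Super Surplus determination described above.
# All names and figures are illustrative, not actual Justice data.

def declared_excess(aff_balance, solvency_reserve, sharing_reserves, rescission_coverage):
    """Balance that could be declared excess after the three set-asides:
    (1) AFF solvency, (2) major sharing reserves, (3) rescission coverage."""
    remaining = aff_balance - solvency_reserve - sharing_reserves - rescission_coverage
    return max(remaining, 0)  # nothing is declared excess if set-asides exhaust the balance

print(declared_excess(500, 250, 150, 60))   # 40 could increase the Super Surplus
print(declared_excess(500, 300, 150, 100))  # set-asides exhaust the balance: 0
```

The same remainder-or-nothing logic underlies why rescissions that equal or exceed existing balances, as shown in tables 2 and 3, leave no funds available for departmental priorities in a given year.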
While Justice is only required to notify Congress and OMB of its uses, it generally waits for approval before using the funds. In the past, Justice has used the working capital fund UBT to fund general administrative acquisitions, such as improving Justice’s financial management system. However, JMD budget officials told us that the UBT has not been available for departmental priorities in recent years. Since fiscal year 1995, Justice has used the UBT for rescissions enacted in law and drawn from the working capital fund. When rescinded amounts were equal to or greater than the existing UBT balance, those funds were unavailable for departmental priorities (see table 2). When AFF Super Surplus balances were available, Justice allocated funding for various law enforcement purposes as determined by the Attorney General’s statutory discretion, such as programs targeting crimes against children. Because rescissions from AFF have been greater than the existing Super Surplus balance since fiscal year 2008, the Super Surplus has been unavailable for departmental priorities in recent years. Further, an amount equal to the prior fiscal year’s rescission has been designated for return to the AFF Super Surplus the following fiscal year (see table 3). Working capital funds provide agencies with an opportunity to operate more efficiently by consolidating and providing services. They also create incentives for customers and managers to exercise cost control and economic restraint. Given the fiscal pressures facing the federal government, consolidating operations could potentially achieve cost savings and help agencies provide more efficient and effective services. Agencies can maximize the potential of these opportunities by following four key working capital fund operating principles. 
Specifically, these principles are to clearly delineate roles and responsibilities, ensure self-sufficiency by recovering the agency’s actual costs, measure performance, and build in flexibility to obtain customer input and meet customer needs. JMD effectively tracks working capital fund moneys in accordance with fiscal law, clearly delineates roles and responsibilities within the fund, and ensures self-sufficiency by recovering total shared services costs. Further, customers noted positive benefits from shared services, including the breadth of services offered, the experience and knowledge of shared services staff, and the convenience and ease of having these services provided in-house. However, customers do not always understand the basis for the rates they pay and lack assurances that fund costs are equitably distributed among customers. Although JMD established the CAB to improve customer satisfaction with the working capital fund, board members do not find the annual meeting—JMD’s primary vehicle for engaging board members about shared services and their accompanying rates—a useful forum in which to understand and provide advice on fund management and operations. Further, JMD does not have a systematic way to communicate with non-CAB customers, which results in uneven flow and availability of information among working capital fund customers, especially regarding the structures of some shared services rates. JMD officials described various ways that they push information on rates and services out to their customers but ultimately agreed that some customers may have better access to this information than others, and said that they remained committed to continuing to improve communication with customers. Providing ample opportunity for customers to provide input on services and voice their concerns about the fund is a key principle for managing working capital funds.
Further, transparent and equitable pricing methodologies allow agencies to ensure that shared services rates charged recover agencies’ actual costs and reflect customers’ service usage. If customers understand how rates are determined, they can better anticipate changes to assumptions, identify their effect on costs, and incorporate this information into their budget planning. Customer experiences with getting clear, timely, well-explained bills for working capital fund services are mixed, and our review of customer bills for shared services found that bills for some services do not always provide enough information for customers to understand the basis for the charges they contain. As a result, customers’ ability to anticipate their actual charges at the end of the year and to properly account for these costs was inhibited. Although customers do not always provide JMD with points of contact for billing information, we believe that helping to ensure that the right information gets to the right people at the right time is part of providing good customer service. Although JMD tracks and monitors limited performance information for some shared services, it does not have measures to assess how effectively it manages the fund, such as managers’ responsiveness to customer inquiries or billing error rates—two areas with which customers have expressed concern. By establishing performance measures and goals for working capital fund operations that align with Justice’s strategic goals, and putting a management review process in place to track fund performance, JMD would have the necessary tools to know whether the fund is achieving the efficiencies that intragovernmental revolving funds were designed to produce. Absent a systematic way to measure customer satisfaction with shared services as well as fund-level performance, JMD is missing an opportunity to identify potential improvements and efficiencies to the services it provides.
Further, by better understanding the fund’s effectiveness, JMD could potentially free up resources that could be realigned for other departmental priorities. To improve the management of the Justice working capital fund, we recommend that the Attorney General direct the AAG/A to take the following three actions:
1. Improve opportunities for two-way substantive communication with shared services customers. This could include developing a means to discuss customer concerns about working capital fund rates and services; organizing breakout sessions on specific lines of business, to be attended by appropriate customer program and finance staff; restructuring the annual CAB meetings to allow further opportunities for two-way communication; conducting a periodic survey or listening session with customers on such topics as their level of satisfaction or potential changes to service needs; or a combination of these.
2. Help ensure that information on the basis of rates for each shared service and sufficiently detailed billing information reaches the appropriate customer staff, especially those in the finance and program offices. This could include posting relevant portions of the operating plan with information on the basis of rate structures on Justice’s intranet, requiring both a program office and finance point of contact to be provided in each reimbursable agreement, or organizing periodic dedicated sessions for both program staff and finance customer staff to discuss issues relevant to them.
3. Develop performance measures to monitor whether all shared services are provided in an efficient and effective manner. These measures should support goals that align with Justice priorities and, as the departmental needs change over time, provide JMD additional assurance that the level and types of working capital fund services provided support current departmental goals.
We provided a draft of this report to the Attorney General for official review and comment.
In his letter, which is reprinted in appendix III, the Assistant Attorney General for Administration generally agreed with our findings and recommendations. Specifically, he noted that JMD will continue to explore ways to address the issues we identified. For the third recommendation, he noted that while it is possible to enhance oversight of the working capital fund by formulating and tracking additional performance measures, such measures would not be necessary to assure Justice that fund services support the department’s needs. While we agree that fund services provide critical support to Justice’s mission, we continue to believe that as the departmental needs change over time, JMD could provide additional assurance that the level and types of working capital fund services provided support current agency goals. Further, we have revised the third recommendation to reflect this. Justice provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Attorney General and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or fantoned@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. The following tables show the interactive data from figure 1. 
Table 4 shows working capital fund amounts for services and support for “other services.” Table 5 shows working capital fund amounts for customers under “all other customers.”

Principle: Clearly delineate roles and responsibilities
Appropriate delineation of roles and responsibilities promotes a clear understanding of who will be held accountable for specific tasks or duties, such as authorizing and reviewing transactions, implementing controls over working capital fund management, and helping to ensure that related responsibilities are coordinated. In addition, this reduces the risk of mismanaged funds and tasks or functions “falling through the cracks.” Moreover, it helps customers know who to contact if they have questions.
Examples of evidence supporting principle: Written roles and responsibilities specify how key duties and responsibilities are divided across multiple individuals/offices and are subject to a process of checks and balances. This should include separating responsibilities for authorizing transactions, processing and recording them, and reviewing the transactions. Written description of all working capital fund roles and responsibilities is available in an accessible format, such as a fund manual. Discussions with providers and clients confirm a clear understanding. A routine review process exists to ensure proper execution of transactions and events.

Principle: Ensure self-sufficiency by recovering the agency’s actual costs
Transparent and equitable pricing methodologies allow agencies to ensure that rates charged recover agencies’ actual costs and reflect customers’ service usage. If customers understand how rates are determined or changed, including the assumptions used, customers can better anticipate potential changes to those assumptions, identify their effect on costs, and incorporate that information into budget plans. A management review process can help to ensure that the methodology is applied consistently over time and provides a forum to inform customers of decisions and discuss as needed.
Examples of evidence supporting principle: Published price sheets for services are readily available. Documentation of pricing formulas supports equitable distribution of costs. Pricing methodology and accompanying process ensure that in aggregate, charges recover the actual costs of operations. Management review process allows fund managers to receive and incorporate feedback from customers. Discussions with customers confirm an understanding of the charges and that they are viewed as transparent and equitable.

Principle: Measure performance
A management review process comparing expected to actual performance allows agencies to review progress toward goals and potentially identify ways to improve performance.
Examples of evidence supporting principle: Performance indicators and metrics for working capital fund management (not just for the services provided) are documented. Indicators or metrics to measure outputs and outcomes are aligned with strategic goals and working capital fund priorities. Working capital fund managers regularly compare actual performance with planned or expected results and make improvements as appropriate. In addition, performance results are periodically benchmarked against standards or “best in class” in a specific activity.

Principle: Build in flexibility to obtain customer input and meet customer needs
Opportunities for customers to provide input about working capital fund services, or voice concerns about needs, in a timely manner enable agencies to regularly assess whether customer needs are being met or have changed. This also enables agencies to prioritize customer demands and use resources most effectively, enabling them to adjust working capital fund capacity up or down as business rises or falls.
Examples of evidence supporting principle: An established forum, routine meetings, surveys, or a combination of these solicit information on customer needs and satisfaction with working capital fund performance. Established communication channels regularly and actively seek information on changes in customer demand and assess the resources needed to accommodate those changes. Established management review process allows for trade-off decisions to prioritize and shift limited resources needed to accommodate changes in demand across the organization.

In addition to the contact named above, Jacqueline M. Nowicki, Assistant Director, and Shirley Hwang, Analyst-in-Charge, managed this assignment. Melissa L. King, Catherine H. Myrick, and Keith C. O’Brien made major contributions to this report. Cynthia Saunders provided methodological assistance, Felicia Lopez provided legal assistance, and Donna Miller developed the report’s graphics. Other individuals providing key advice included Sandra Burrell, Samantha Carter, and Jack Warner.
The Department of Justice’s (Justice) working capital fund is intended to provide increased efficiencies in how the department funds and offers shared services—such as payroll, telecommunications, financial services, mail, and publications—valued at over $1 billion annually. Ensuring that the working capital fund is managed as efficiently as possible could allow Justice to use saved resources for other departmental priorities. GAO was asked to determine how Justice (1) manages its working capital fund to promote compliance with applicable fiscal laws and key operating principles, (2) communicates shared services rates with customers, (3) measures performance to evaluate whether fund activities are contributing to agency goals, and (4) ensures that its excess unobligated balances are used in accordance with legal authorities and managed so that Justice can make well-informed funding decisions. GAO reviewed statutory authorities, analyzed Justice policies, interviewed budget and finance officials, and conducted focus groups with some shared services customers. The Justice Management Division (JMD), the component responsible for managing the working capital fund, effectively tracks fund functions to ensure adherence to applicable fiscal laws and sound management practices. For example, JMD has well-established policies and procedures for tracking and monitoring the four working capital fund functions so that the fund adheres to authorized purposes. Further, JMD structures its reimbursable agreements with customers to facilitate adherence to the Economy Act—the statutory authority underlying most of JMD’s customer orders. JMD also clearly delineates roles and responsibilities, which allows customers to know who to contact with questions and clearly assigns responsibility for obligating and expending funds. Justice also ensures the fund’s self-sufficiency by recovering total costs for the provided services. 
These actions are consistent with two of the four key operating principles for working capital funds. Customers noted positive benefits from Justice’s shared services but seek more information on rate structures and want assurances that fund costs are equitably distributed. For example, customers said they valued the breadth of services offered as well as the experience of fund staff but wanted to better understand the basis for shared services rates and more opportunities to discuss billing concerns and service changes with JMD. JMD officials expressed surprise at these concerns. They noted that informal information sharing on rates and rate structures happens regularly, but explained that each staff director has his/her own way of communicating with customers and acknowledged that some may be better at providing customer support than others. JMD does not systematically measure important aspects of shared service provision and working capital fund management. For example, JMD tracks workload measures such as the number of transactions processed, but does not assess customer satisfaction with shared services. It also does not have measures to assess how effectively it manages the fund, such as whether managers are responsive to concerns about shared service rates or billing issues—areas with which customers have expressed concern. Absent a formal mechanism for customers to provide timely and regular feedback, JMD cannot sufficiently assess whether customer needs are met or have changed. JMD also has not assessed its shared services rates to know whether they provide a good value to customers. If available, specific working capital fund-level performance information would allow JMD to regularly compare actual performance with planned or expected performance.
Further, a corresponding management review process could help JMD achieve the efficiencies that working capital funds were designed to produce, potentially freeing up resources that could be realigned for other departmental initiatives. Lastly, performance measures aligned with strategic goals can be used to evaluate whether and how working capital fund activities contribute to departmentwide goals and crosscutting initiatives. Justice has processes to ensure that excess unobligated balances are used in accordance with legal authorities. It also has an established process to make well-informed decisions on how to spend available funds. However, JMD budget officials told us that these balances were unavailable for departmentwide priorities in recent years because they have been used to meet rescissions. GAO is making three recommendations to improve the management of the working capital fund, including providing opportunities for two-way substantive communications with customers and developing performance measures for the fund. Justice generally agreed with our findings and recommendations and noted that it will continue to explore ways to address the issues we identified.
The United States has assisted the Mexican government in its counternarcotics efforts since 1973, providing about $350 million in aid. Since the late 1980s, U.S. assistance has centered on developing and supporting Mexican law enforcement efforts to stop the flow of cocaine from Colombia, the world's largest supplier, into Mexico and onward to the United States. According to U.S. estimates, Mexican narcotics-trafficking organizations facilitate the movement of between 50 and 60 percent of the almost 300 metric tons of cocaine consumed in the United States annually. In the early 1990s, the predominant means of moving cocaine from Colombia to Mexico was by aircraft. However, a shift to the maritime movement of drugs has occurred over the past few years. In 1998, only two flights were identified as carrying cocaine into Mexico. According to U.S. law enforcement officials, most drugs enter Mexico via ship or small boat through the Yucatan peninsula and Baja California regions. Additionally, there has been an increase in the overland movement of drugs into Mexico, primarily through Guatemala. Since 1996, most U.S. assistance has been provided by the Department of Defense to the Mexican military, which has been given a much larger counternarcotics and law enforcement role. On the other hand, the Department of State’s counternarcotics assistance program has been concentrating on supporting the development of specialized law enforcement units, encouraging institutional development and modernizing and strengthening training programs. Table 1 provides additional information on U.S. counternarcotics assistance to the government of Mexico since 1997. The Foreign Assistance Act of 1961, as amended, requires the President to certify annually that major drug-producing and -transit countries are fully cooperating with the United States in their counternarcotics efforts. 
As part of this process, the United States established specific objectives for evaluating the performance of these countries. According to State Department officials, as part of the March 1999 certification decision, the United States will essentially use the same objectives it used for evaluating Mexico's counternarcotics cooperation in March 1998. These include (1) reducing the flow of drugs into the United States, (2) disrupting and dismantling narcotrafficking organizations, (3) bringing fugitives to justice, (4) making progress in criminal justice and anticorruption reform, (5) improving money-laundering and chemical diversion control, and (6) continuing improvement in cooperation with the United States. Although there have been some difficulties, the United States and Mexico have undertaken some steps to enhance cooperation in combating illegal drug activities. Mexico has also taken actions to enhance its counternarcotics efforts and improve law enforcement capabilities. There have been some positive results from the new initiatives, such as the arrest of two major drug traffickers and the implementation of the currency and suspicious transaction reporting requirements. Overall, however, the results show that drugs are still flowing across the border at about the same rate as in 1997; there have been no significant increases in drug eradication; no major drug trafficker has been extradited to the United States; money-laundering prosecutions and convictions have been minimal; corruption remains a major impediment to Mexican counternarcotics efforts; and most drug trafficking leaders continue to operate with impunity. The United States and Mexico have cooperated in the development of a binational counternarcotics drug strategy, which was released in February 1998. This strategy contains 16 general objectives, such as reducing the production and distribution of illegal drugs in both countries and focusing law enforcement efforts against criminal organizations.
Since the issuance of the binational strategy, a number of joint working groups, made up of U.S. and Mexican government officials, have been formed to address matters of mutual concern. A primary function of several of these working groups was to develop quantifiable performance measures and milestones for assessing progress toward achieving the objectives of the strategy. The performance measures were released during President Clinton’s February 15, 1999, visit to Mexico. A binational law enforcement plenary group was also established to facilitate the exchange of antidrug information. Despite these cooperative efforts, information exchange remains a concern for both governments because some intelligence and law enforcement information is not shared in a timely manner, which impedes operations against drug-trafficking organizations. Operation Casablanca created tensions in relations between the two countries because information on this undercover operation was not shared with Mexican officials. In the aftermath of Operation Casablanca, the United States and Mexico have taken action to strengthen communications between the two countries. An agreement reached by the U.S. and Mexican Attorneys General (commonly referred to as the “Brownsville Letter”) calls for (1) greater information-sharing on law enforcement activities; (2) providing advance notice of major or sensitive cross-border activities of law enforcement agencies; and (3) developing training programs addressing the legal systems and investigative techniques of both countries. Data for 1998 show that Mexico has, for the most part, not significantly increased its eradication of crops and seizures of illegal drugs since 1995. While Mexico did increase its eradication of opium poppy, eradication of other crops and seizures have remained relatively constant. Cocaine seizures in 1998 were about one-third lower than in 1997. However, the large seizure amount in 1997 was attributable, in part, to two large cocaine seizures that year.
Last year I testified that the government of Mexico took a number of executive and legislative actions, including initiating several anticorruption measures, instituting extradition efforts, and passing various laws to address illegal drug-related activities. I also said that it was too early to determine their impact, and challenges to their full implementation remained. While some progress has been made, implementation challenges remain. I testified last year that corruption was pervasive and entrenched within the justice system—that has not changed. According to U.S. and Mexican law enforcement officials, corruption remains one of the major impediments affecting Mexican counternarcotics efforts. These officials also stated that most drug-trafficking organizations operate with impunity in parts of Mexico. Mexican traffickers use their vast wealth to corrupt public officials and law enforcement and military personnel, as well as to inject their influence into the political sector. For example, it is estimated that the Arellano-Felix organization pays $1 million per week to Mexican federal, state, and local officials to ensure the continued flow of drugs to gateway cities along Mexico’s northwest border with the United States. A recent report by the Attorney General's Office of Mexico recognized that one basic problem in the fight against drug trafficking has been "internal corruption in the ranks of the federal judicial police and other public servants of the Attorney General's Office." As we reported last year, the President of Mexico publicly acknowledged that corruption is deeply rooted in the nation's institutions and general social conduct, and he began to initiate reforms within the law enforcement community.
These include (1) reorganizing the Attorney General’s office and replacing the previously discredited drug control office with the Special Prosecutor’s Office for Crimes Against Health; (2) firing or arresting corrupt or incompetent law enforcement officials; (3) establishing a screening process to filter out corrupt law enforcement personnel; and (4) establishing special units within the military, the Attorney General’s Office, and the Secretariat of Hacienda—the Organized Crime Unit, the Bilateral Task Forces and Hacienda’s Financial Analysis Unit—to investigate and dismantle drug-trafficking organizations in Mexico and along the U.S.-Mexico border and investigate money-laundering activities. Additionally, the President expanded the counternarcotics role of the military. The Organized Crime Unit and the Bilateral Task Force were involved in several counternarcotics operations in 1998, for example, the capture of two major narcotics traffickers and the recent seizure of properties belonging to alleged drug traffickers in the Cancun area, as well as the seizure of money, drugs, and precursor chemicals at the Mexico City Airport. However, many issues still need to be resolved—some of them the same as we reported last year. For example, there continues to be a shortage of Bilateral Task Force field agents as well as inadequate Mexican government funding for equipment, fuel, and salary supplements for the agents (last year the Drug Enforcement Administration provided almost $460,000 to the Bilateral Task Forces to overcome this lack of support); the Organized Crime Unit remains significantly short of fully screened personnel; there have been instances of inadequate coordination and communications between Mexican law enforcement agencies; and Mexico continues to face difficulty building competent law enforcement institutions because of low salaries and the lack of job security.
Additionally, increasing the involvement of the Mexican military in law enforcement activities and establishing screening procedures have not been a panacea for the corruption issues facing Mexico. A number of senior Mexican military officers have been charged with cooperating with narcotics traffickers. One of the most notable of these was General Jesus Gutierrez Rebollo, former head of the National Institute for Combat Against Drugs—the Mexican equivalent of the U.S. Drug Enforcement Administration. In addition, as we reported last year, some law enforcement officials who had passed the screening process had been arrested for illegal drug-related activities. In September 1998, four of the Organized Crime Unit's top officials, including the Unit's deputy director, were re-screened and failed. Two are still employed by the Organized Crime Unit, one resigned, and one was transferred overseas. Since my testimony last year, no major Mexican national drug trafficker has been surrendered to the United States. In November 1998, the government of Mexico did surrender to the United States a Mexican national charged with murdering a U.S. Border Patrol officer while having about 40 pounds of marijuana in his possession. However, U.S. and Mexican officials agree that this extradition involved a low-level trafficker who, unlike other traffickers, failed to use legal mechanisms to slow or stop the extradition process. According to the Justice Department, Mexico has approved the extradition of eight other Mexican nationals charged with drug-related offenses. They are currently serving criminal sentences, pursuing appeals, or are being prosecuted in Mexico. U.S. and Mexican officials expressed concern that two recent judicial decisions halting the extradition of two major traffickers represented a setback for efforts to extradite Mexican nationals. The U.S. 
officials stated that intermediate courts had held that Mexican nationals cannot be extradited if they are subject to prosecution in Mexico. U.S. officials believe that these judicial decisions could have serious consequences for the bilateral extradition relationship between the two countries. In November 1997, the United States and Mexico signed a temporary extradition protocol. The protocol would allow suspected criminals who are serving sentences in one country and are charged in the other to be temporarily surrendered for trial while evidence is current and witnesses are available. To become effective, the protocol required approval by the congresses of both countries. The U.S. Senate approved the protocol in October 1998; however, the protocol has not yet been approved by the Mexican congress. According to U.S. and Mexican officials, the 1996 organized crime law has not been fully implemented, and its impact is not likely to be fully evident for some time. According to U.S. law enforcement officials, Mexico has made some use of the plea bargaining and wiretapping provisions of the law. However, U.S. and Mexican law enforcement officials pointed to judicial corruption as slowing the use of the wiretapping provision and have suggested the creation of a corps of screened judges, who would be provided with extra money, security, and special arrangements to hear cases without fear of reprisals. Additionally, results of Mexico's newly created witness protection program are not encouraging—two of the six witnesses in the program have been killed. U.S. and Mexican officials continue to believe that more efforts need to be directed toward the development of a cadre of competent and trustworthy judges and prosecutors that law enforcement organizations can rely on to effectively carry out the provisions of the organized crime law. U.S. agencies continue to provide assistance in this area. 
Mexico has begun to successfully implement the currency and suspicious transaction reporting requirements, resulting in what U.S. law enforcement officials described as a flood of currency and suspicious transaction reporting. Mexican officials also indicated that Operation Casablanca resulted in a greater effort by Mexican banks to adhere to anti-money-laundering regulations. However, U.S. officials remain concerned that there is no requirement to obtain and retain account holders’ information for transactions below the $10,000 level. No data is available on how serious this problem is, and there is no reliable data on the magnitude of the money-laundering problem. Between May 1996 and November 1998, the Mexican government issued 35 indictments and/or complaints on money-laundering charges; however, only one case has resulted in a successful prosecution. The remaining 34 cases are still under investigation or have been dismissed. Last year we reported that the new chemical control law was not fully implemented due to the lack of an administrative infrastructure for enforcing its provisions. This is still the case. Mexico is currently in the process of developing this infrastructure as well as the guidelines necessary to implement the law. However, U.S. officials remain concerned that the law does not cover the importation of finished products, such as over-the-counter drugs that could be used to make methamphetamines. Over the past year, Mexico has announced a new drug strategy and instituted a number of new counternarcotics initiatives. The government of Mexico also reported that it has channeled significant funds—$754 million during 1998—into its ongoing campaign against drug trafficking. Mexico also indicated that it will earmark about $770 million for its 1999 counternarcotics campaign. During 1998 and 1999, the government of Mexico announced a number of new initiatives. 
For example, a federal law for the administration of seized, forfeited, and abandoned goods, which would allow authorities to use proceeds and instruments seized from criminal organizations for the benefit of law enforcement, is being considered; a federal law that would establish expedited procedures to terminate corrupt law enforcement personnel is also being considered; and the government of Mexico recently announced the creation of a new national police force. In addition, the government of Mexico has initiated an operation to seal three strategic points in Mexico. The purpose of the program is to prevent the entry of narcotics and diversion of precursor chemicals in the Yucatan peninsula, Mexico's southern border, and the Gulf of California. Furthermore, the Mexican government recently announced a counternarcotics strategy to crack down on drug traffickers. Mexico indicated that it plans to spend between $400 million and $500 million over the next 3 years to buy new planes, ships, radar, and other military and law enforcement equipment. In addition to the new spending, Mexico reported that its new antidrug efforts will focus on improving coordination among law enforcement agencies and combating corruption more efficiently. A senior Mexican government official termed this new initiative a “total war against the scourge of drugs.” Last year we noted that while U.S.-provided assistance had enhanced the counternarcotics capabilities of Mexican law enforcement and military organizations, the effectiveness and usefulness of some assistance were limited. For example, two Knox-class frigates purchased by the government of Mexico lacked the equipment needed to ensure the safety of the crew, thus making the ships inoperative. We also reported that the 73 UH-1H helicopters provided to Mexico to improve the interdiction capability of Mexican army units were of little utility above 5,000 feet, where significant drug-related activities and cultivation occur. 
In addition, we noted that four C-26 aircraft were provided to Mexico without the capability to perform intended surveillance missions and without planning for payment for the operation and maintenance of the aircraft. Mr. Chairman, let me bring you up to date on these issues. The two Knox-class frigates have been repaired and are in operation. According to U.S. embassy officials, the government of Mexico is considering the purchase of two additional frigates. However, other problems remain. For example, in late March 1998, the U.S. Army grounded its entire UH-1H fleet until gears within the UH-1H engines could be examined and repairs could be made. The government of Mexico followed suit and grounded all of the U.S.-provided UH-1H helicopters until they could be examined. The helicopters were subsequently tested, with 13 of the Attorney General’s 27 helicopters and 40 of the military’s 72 helicopters receiving passing grades. According to Department of Defense officials, the helicopters that passed the engine tests could be flown on a restricted basis. U.S. embassy officials told us that the Office of the Attorney General has been flying its UH-1H helicopters on a restricted basis, but the Mexican military has decided to keep its entire fleet grounded until all are repaired. Finally, the four C-26 aircraft still are not being used for counternarcotics operations. This concludes my prepared remarks. I would be happy to respond to any questions you may have. 
Pursuant to a congressional request, GAO discussed the counternarcotics efforts of the United States and Mexico, focusing on: (1) Mexico's efforts in addressing the drug threat; and (2) the status of U.S. counternarcotics assistance provided to Mexico. GAO noted that: (1) while some high-profile law enforcement actions were taken in 1998, major challenges remain; (2) new laws passed to address organized crime, money laundering, and the diversion of chemicals used in narcotics manufacturing have not been fully implemented; (3) moreover, during 1998, opium poppy eradication and drug seizures remained at about the same level as in 1995; (4) in addition, no major Mexican drug trafficker was surrendered to the United States on drug charges; (5) Mexican government counternarcotics activities in 1998 have not been without positive results; (6) one of its major accomplishments was the arrest of two major drug traffickers commonly known as the Kings of Methamphetamine; (7) although all drug-related charges against the two have been dropped, both are still in jail and being held on extradition warrants; (8) the Mexican foreign ministry has approved the extradition of one of the traffickers to the United States, but he has appealed the decision; (9) in addition, during 1998 the Organized Crime Unit of the Attorney General's Office conducted a major operation in the Cancun area where four hotels and other large properties allegedly belonging to drug traffickers associated with the Juarez trafficking organization were seized; (10) Mexico also implemented its currency and suspicious reporting requirements; (11) the Mexican government has proposed or undertaken a number of new initiatives; (12) it has initiated an effort to prevent illegal drugs from entering Mexico, announced a new counternarcotics strategy and the creation of a national police force; (13) one of the major impediments to U.S. 
and Mexican counternarcotics objectives is Mexican government corruption; (14) recognizing the impact of corruption on law enforcement agencies, the President of Mexico: (a) expanded the role of the military in counternarcotics activities; and (b) introduced a screening process for personnel working in certain law enforcement activities; (15) since these initiatives, a number of senior military and screened personnel were found to be either involved in or suspected of drug-related activities; (16) since 1997, the Departments of State and Defense have provided Mexico with over $92 million worth of equipment, training, and aviation spare parts for counternarcotics purposes; and (17) the major assistance included UH-1H helicopters, C-26 aircraft, and two Knox-class frigates purchased by the government of Mexico through the foreign military sales program.
The Social Security Administration administers two main programs that provide benefits to individuals with disabilities: SSI and DI. Adults are generally considered disabled if (1) they cannot perform work that they did before; (2) they cannot adjust to other work because of their medical condition(s); and (3) their disability has lasted, or is expected to last, at least 1 year or is expected to result in death. SSI is a means-tested income assistance program that provides monthly cash benefits to individuals who are disabled, blind, or aged and meet, among other things, the program’s assets and income restrictions. In fiscal year 2015, SSA expects to pay an estimated $60 billion in SSI benefits to about 8.5 million recipients. SSA’s primary disability program, the DI program, provides monthly cash benefits to adults who are not yet at full retirement age, are disabled, and have worked long enough to qualify for disability benefits. In fiscal year 2015, SSA expects to pay an estimated $147 billion in DI benefits to about 11 million workers with disabilities and their spouses and dependents. Some disability recipients receive both SSI and DI benefits because of their work history and the low level of their income and resources. SSA expects costs for these programs to increase in the coming years. SSA’s disability determination process is complex and involves offices at the federal and state level (see fig. 1). The process begins at an SSA field office, where a staff member determines whether a claimant meets the programs’ nonmedical eligibility criteria. Claims from individuals meeting these criteria are then evaluated by state DDS staff, who review medical and other evidence and make the initial disability decision. SSA funds the DDSs, which are run by the states, to process disability claims in accordance with SSA regulations, policies, and guidelines. 
Some DDSs may be independent state agencies, while others may be part of other state agencies with broader missions, such as departments of human services. If an initial claim is denied, claimants have several opportunities for appeal within SSA, starting with a reconsideration; then a hearing before an SSA administrative law judge (ALJ); and finally at the Appeals Council, which is SSA’s final administrative appeals level. If the claimant is determined to be eligible for SSI or DI, SSA will calculate the benefit amount and begin to pay benefits. A claimant may also be entitled to past-due benefits for the months in which his or her SSI or DI cash payments were pending during the disability decision-making process. Claimants may choose to appoint a representative to assist them through the disability application process and in their interactions with SSA. Appointed representatives can be attorneys or nonattorneys, and, as long as they meet SSA’s requirements for representatives, their experience can range from being a family member appointed as a representative on a one-time basis to a professional representative working at a for-profit or nonprofit organization. A representative may act on a claimant’s behalf in a number of ways, including helping the claimant complete the disability application, obtaining and submitting evidence in support of a claim, and supporting the claimant during the hearings and appeals process. To appoint a representative, a claimant must sign a written notice appointing the individual to be his or her representative in dealings with SSA and file the notice with SSA. Representatives can file this notice using a standard form, which contains the name and address of the representative. The standard form also indicates whether and how the representative would like to be paid—by the claimant, directly by SSA out of a claimant’s past-due benefits (known as a direct payment), or by a third party. 
Representatives have commonly been involved at SSA’s hearings and Appeals Council levels, but evidence suggests that representatives have become increasingly involved at the initial stage of the disability determination process. SSA data compiled for this report show that the proportions of SSI and DI claims with a representative at the initial level increased between 2004 and 2013. From 2004 to 2013, initial SSI claims with a representative increased dramatically, from almost 11,000 claims in 2004 (less than 1 percent of all initial SSI claims) to about 278,000 claims in 2013 (about 14 percent of claims). Initial DI claims with a representative also increased over the same time period, from almost 100,000 claims (about 8 percent of claims) to more than 413,000 claims (about 20 percent of claims). (See fig. 2.) In 2013, two-thirds of the representatives associated with initial claims were attorneys and one-third were nonattorneys. These trends may, in part, reflect legislative actions that expanded payment options for representatives in the disability determination process. For example, the Social Security Protection Act of 2004 temporarily allowed attorney representatives to receive direct payments from SSA, out of claimants’ past-due benefits, for SSI claims, and also required a demonstration project under which SSA’s direct payment system applied to qualified nonattorney representatives. These policy changes were made permanent in 2010. States and counties have engaged in SSI/DI advocacy efforts for years because it can benefit individuals with disabilities as well as states and counties. When states are successful in helping eligible individuals on state- or county-administered assistance programs navigate the complex disability application process and obtain federal disability benefits, the individuals and their families may not only receive a higher monthly income but also potentially receive benefits on a long-term basis. 
At the same time, successful SSI/DI advocacy efforts allow states to reduce benefit costs or reinvest cost savings into expanding services or serving other individuals. The financial incentives for states to pursue SSI/DI advocacy increased in two ways with the creation of the TANF program in 1996 and subsequent changes to TANF requirements. As some researchers noted, under the former program, Aid to Families with Dependent Children, states received less than half of any savings achieved through transferring individuals to SSI. Under TANF, however, states retain the savings from federal and state funds that would have been used to support those individuals and can use those funds for other allowable benefits or services. At the same time, the new work participation requirements of the TANF program required a percentage of each state’s caseload to participate in employment-related activities. States that do not meet required work participation rates are at risk of having their annual TANF block grants reduced. Therefore, the work requirements provided incentives for states to remove certain families from the calculation of the work participation rate, including individuals with disabilities who have significant barriers to work. States have taken different approaches to SSI/DI advocacy. Some states designate state employees to provide SSI/DI advocacy services, while others contract with for-profit or nonprofit organizations or legal aid groups. Some states do not have SSI/DI advocacy programs at all. Furthermore, some SSI/DI advocacy efforts are at the county or local level. In addition to states and counties, other third parties—such as hospitals and private insurance companies—also contract for SSI/DI advocacy services. 
For example, hospitals contract with companies that help uninsured patients establish eligibility for various federal, state, and county programs, such as SSI and Medicaid, so that the hospitals can obtain reimbursement for the medical care they provided. Insurance companies may also contract with companies to help individuals receiving long-term disability benefits apply for federal disability benefits, in part because federal disability benefits can reduce the amount the insurance company must pay. States—and county and local governments, in some cases—administer a number of assistance programs for low-income individuals and families, some of whom have disabilities that may qualify them for federal disability programs. In many instances, these low-income individuals can qualify for SSI due to their income and assets, among other factors. Some may also qualify for DI benefits, if they have a sufficient work history. As a result, states may direct SSI/DI advocacy services to people receiving benefits from any of the following programs: TANF: This federal block grant provides funds to states for a wide range of benefits and services, including state cash assistance programs for needy families with children. TANF is administered by HHS’s Administration for Children and Families at the federal level and by state and, in some cases, county agencies. State TANF programs provide temporary, monthly cash payments to low-income families with children while preparing parents for employment. A percentage of each state’s caseload must participate in a minimum number of hours of employment-related activities unless they are exempt. State General Assistance: These programs provide cash assistance to poor individuals who do not qualify for other assistance programs (e.g., they do not have children and are not elderly). 
As of January 2011, 30 states had General Assistance programs, and most states require individuals to be unemployable, generally because of a physical or mental condition. Other State Assistance Programs: Other populations or programs states may target for SSI/DI advocacy include, for example, homeless individuals or individuals receiving state medical assistance or foster care payments. Some states may receive funds from SSA, known as Interim Assistance Reimbursement (IAR), for assistance they provide (i.e., cash assistance provided through state programs like General Assistance to meet basic needs) to an individual who is waiting for approval of SSI benefits. If the individual’s SSI claim is successful, SSA uses the claimant’s past-due benefits to reimburse the state for this interim assistance. States may, in turn, use these funds to finance their SSI/DI advocacy efforts. To qualify for reimbursement, any interim assistance an individual receives while awaiting SSA’s decision must be funded only from state or local funds. Interim assistance payments to a needy individual that contain any federal funds do not qualify for reimbursement. For example, IAR is generally not payable to states for assistance payments related to programs like Medicaid and TANF because the federal government partially funds these programs. To participate in the IAR program, a state must have an IAR agreement with SSA and a written authorization from the individual allowing SSA to reimburse the state from his or her past-due benefits. As of 2014, 36 states and the District of Columbia have IAR agreements with SSA. Little is known about the extent to which states or counties contract for SSI/DI advocacy services. 
While SSA has oversight of the federal SSI and DI programs, officials told us that they do not know which states or counties are contracting for SSI/DI advocacy services, in part because that information is not necessary to achieve SSA’s mission, which includes delivering retirement, survivor, and disability benefits and services to eligible individuals and their families. While SSA collects some data on representatives working on behalf of claimants, it does not collect information on whether these representatives are working under contract to a state or county. Similarly, HHS has oversight of the federal TANF program and collects information about how states use TANF block grant funds but, according to HHS officials, the agency does not have statutory authority to collect information on states’ contracts for SSI/DI advocacy. In addition to the absence of comprehensive data from SSA and HHS, it is difficult to determine the extent of these contracts nationwide because this practice is diffused among different agencies and different levels of government, depending on the state. Furthermore, we did not identify research that provides a national picture of state SSI/DI advocacy contracting practices. For example, one study we reviewed looked at the overlap between the TANF and SSI populations, but it was not the purpose of the study to examine the extent to which states were contracting for SSI/DI advocacy services. The study did not include recipients of other benefit programs, like state-funded General Assistance, that we found were commonly served by SSI/DI advocacy contracts. Despite limited national-level data, we identified at least 16 states, as of August 2014, that had some type of active contract or grant for SSI/DI advocacy in 2014: California, Colorado, Delaware, Hawaii, Massachusetts, Minnesota, Nevada, New York, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Virginia, and Wisconsin. (See fig. 3.) 
Half of the 16 states we identified contracted with multiple organizations in 2014, including for-profit, nonprofit, and legal aid organizations, according to state and county officials we contacted. For example, according to state officials, the Wisconsin Department of Children and Families contracted with eight organizations (both for-profit and nonprofit) for SSI/DI advocacy services, with each covering different geographic areas, as part of a larger contract for TANF employment support services. At the same time, seven states reported they had a state contract or grant with a single nonprofit or legal aid organization. For example, Tennessee officials stated they provided a grant to a legal aid organization to work with about 100 TANF recipients per year who may be eligible for federal disability benefits. Within states, we identified SSI/DI advocacy contracts at different levels of government. In several states, we identified only county-level contracts (see fig. 3), and in one state, New York, we identified at least one contract at the state, county, and city level. Specifically, according to state officials, New York had a statewide Disability Advocacy Program that provided grants to a group of nonprofit and legal aid organizations to help individuals appeal their claim after it was initially denied. Westchester County also had a contract with a for-profit organization for SSI/DI advocacy. In addition, officials from New York City’s Wellness, Comprehensive Assessment, Rehabilitation and Employment (WeCARE) program reported that they contract with two nonprofit organizations to provide SSI/DI advocacy services. We also observed recent changes in states’ SSI/DI advocacy contracting practices. We identified multiple states that have ended, or plan to end, their SSI/DI advocacy contracts, and at least one state that is planning to renew a contract it ended several years ago. 
Several state officials and experts cited reasons for ending or renewing SSI/DI advocacy contracts, including financial considerations. For example, according to state officials, Maryland had a contract for over a decade with an organization to work with TANF recipients who may be eligible for federal disability benefits. The state paid this organization for each disability application submitted; however, state officials told us they ended this contract in 2009 because it was no longer financially practical. According to state officials, in 2014, the state planned to issue a new request for proposals for SSI/DI advocacy that will only pay the contractor for approved claims. Officials told us that they expect that the performance-based compensation structure of the contract will make it financially practical again. In contrast, officials in Delaware told us they had a contract with a single nonprofit organization for about 6 years to work with TANF recipients, but the contract expires in 2014 and will not be renewed due to the relatively low success rate achieved by the contractor. After the contract expires, state employees will provide these services instead, which officials believe will be a better use of resources. Similarly, we identified two additional states that have opted to have state employees provide SSI/DI advocacy services. While state and county SSI/DI advocacy contracts may account for a small proportion of disability claims nationwide, SSI/DI advocacy contracts held by other third parties, such as hospitals and long-term disability insurance companies, may be more prevalent. Since information on SSI/DI advocacy contracts is not available in SSA’s databases, and data on representatives, in general, are limited, we used available data from a 2014 SSA OIG report to estimate the percentage of claims associated with SSI/DI advocacy contracts. 
Specifically, these data indicate that nonattorney representatives working on behalf of a government entity accounted for an estimated 5 percent of all initial SSI and DI claims with nonattorney representatives adjudicated in 2010. Claims from these government SSI/DI advocacy contracts represent about 1 percent of all initial SSI and DI claims in 2010. By comparison, data indicate that claims associated with contracts held by other third parties—specifically, hospitals and long-term disability insurance companies—were more prevalent, accounting for an estimated 30 percent of initial SSI and DI claims with nonattorney representatives adjudicated in 2010. (See fig. 4.) We selected three sites—Hawaii; Minnesota; and Westchester County, New York—to illustrate different approaches to SSI/DI advocacy, in terms of the number and types of organizations they contracted with and geographic coverage. Despite these differences, however, the three sites were similar in many respects. For example, all three sites articulated a similar goal for their SSI/DI advocacy contracts, targeted similar populations, and generally paid SSI/DI advocacy contractors only for approved claims, among other similarities (see table 1). See appendix II for more detailed information on each site. Each site articulated a two-part goal for its SSI/DI advocacy contract: maximizing assistance for individuals with disabilities while also reducing state or county expenditures. Helping individuals on state or county benefits apply for Social Security disability benefits is allowable under current program rules and may result in greater financial support to individuals and their families if they are eligible. In all three sites, the maximum SSI disability benefit was higher than the maximum benefit provided by General Assistance or TANF. For example, Minnesota officials explained that Minnesota’s General Assistance benefits are lower than SSI. 
In addition, individuals receiving SSI may also be eligible for other support programs, such as medical assistance and food assistance. At the same time, officials from all three sites told us that moving individuals off state benefit programs and onto federal disability programs has financial benefits for the state or county. As discussed earlier, when the federal government pays the SSI or DI benefits, states can use the funds saved for other purposes, such as expanding services or serving other individuals. All three sites targeted SSI/DI advocacy services to General Assistance and TANF populations. Each site also targeted recipients of at least one other program. For example, in addition to General Assistance and TANF, Minnesota’s contract specified that recipients of a state-funded Group Residential Housing program are eligible for SSI/DI advocacy services. In another example, Westchester County’s contract included children in foster care who may be eligible for SSI. The contractors we selected in the three sites generally reported providing similar services to the state or county, and to claimants, including performing an initial disability screening, assisting with filling out the SSI and/or DI application, and representing the claimant throughout the disability determination process. Each of the contractors reported receiving referrals from sources such as state or county caseworkers or TANF employment services contractors and then screening these individuals to identify those likely to meet Social Security disability criteria. For example, the Westchester County contractor receives monthly lists of individuals receiving General Assistance or TANF benefits who have been determined to be unable to work due to a disability. Contractor officials mail a letter to individuals on these lists, introducing their services and inviting individuals to call their toll-free number to set up an initial screening. 
Similarly, Hawaii’s SSI/DI advocacy subcontractor reported that, under the new contract, it will receive referrals from the primary state contractor. The screening process varied across contractors; some had structured tools to guide the process while others had a more informal initial intake appointment. The four contractors we selected reported a wide range in the percentage of referrals for which applications were filed, from less than 20 percent for one contractor to over 90 percent for another. Further, contractors reported a range of approval rates, and the contractor that likely filed applications for the smallest percentage of referred individuals reported achieving the highest approval rates at SSA (over 80 percent) of the contractors for which we obtained data. However, there are a number of factors contributing to these rates that we could not examine, such as the nature and quality of the referrals and the level of the claimant’s participation in the process. Two of the contractors noted that screening out obviously ineligible individuals benefits SSA in that the contractors are not contributing to SSA workloads by submitting claims unlikely to be approved. After the contractors determine that an individual is potentially eligible for federal disability benefits, they assist him or her with completing an application for SSI and/or DI. With the claimant’s permission, staff from these organizations also become the claimant’s appointed representative, which allows the staff person to interact with SSA on behalf of the claimant during the disability determination process. Representatives from these organizations told us they generally focus on gathering and summarizing available medical evidence rather than providing referrals to doctors and specialists to obtain new medical evidence. The contractors reported that they generally file concurrent applications for SSI and DI. 
They generally file the DI application online, but they differed in how they filed the SSI application. Two of the organizations we selected—the for-profit contractor in Minnesota and the contractor in Westchester County—reported filling out the SSI application on the claimant’s behalf, while the other two organizations reported sending or accompanying the claimant to the SSA field office to file the application. The organizations also reported supporting claimants up to the hearings and Appeals Council levels, if necessary. See table 2 for a comparison of the SSI/DI advocacy services the contractors in our three selected sites reported providing. The representatives in each site generally reported interacting frequently with local SSA field offices and, to a lesser extent, the state DDS, in conducting their SSI/DI advocacy work. For example, the for-profit contractor we selected in Minnesota had offices across the street from SSA’s Minneapolis field office, and representatives from this contractor reported hand-delivering SSI paper applications. In another example, officials from the Westchester County contractor reported having good working relationships with all of the SSA field offices in the county, noting that their representatives typically talk with field office staff daily by phone. Staff we interviewed in each of the local field offices we selected generally had positive feedback on their interactions with representatives from the selected SSI/DI advocacy contractors. For example, they noted that the representatives are helpful and easier to get in touch with or more responsive than other representatives. In addition, staff we interviewed generally said that claims submitted by these representatives are of equal or better quality than claims submitted by other representatives. In general, the DDS staff we interviewed did not express an opinion on the responsiveness of the representative or on the overall quality of claims. 
In each site, SSI/DI advocacy contractors were generally paid only for disability claims that SSA approved. Payments ranged from $900 to $3,000 per approved claim. One site paid the same amount for an approved claim, regardless of the level of the adjudication process in which it was approved, while contractors in two sites were paid higher amounts for claims approved at the reconsideration and/or hearings or Appeals Council levels. Two of the sites—Minnesota and Westchester County, New York—also offered payments for assisting claimants undergoing continuing disability reviews, which SSA conducts to determine whether individuals receiving benefits continue to meet program disability requirements. Hawaii was unique among the three sites in that the state paid the primary contractor a set monthly fee but the primary contractor paid the SSI/DI advocacy subcontractor per approved claim. The relatively “flat fee” compensation structure in the SSI/DI advocacy contracts differs from SSA’s direct payment structure and may create an incentive for representatives to submit claims that can be favorably decided in a more timely manner. Whereas selected SSI/DI advocacy contractors’ fees are a set amount, regardless of how long it takes to decide a claim, under the Social Security Act eligible representatives can elect to be paid by SSA directly out of a claimant’s past-due benefits and potentially earn more when claims take longer to be approved. Their fee is a maximum of 25 percent of the past-due benefits for approved claims, up to $6,000. All sites at least partially offset the costs of their advocacy contracts with federal Interim Assistance Reimbursement (IAR) funds from SSA. In two of the sites—Hawaii and Minnesota—officials reported that they received more IAR money than they spent on their SSI/DI advocacy contracts. Through the IAR program, SSA reimburses participating states for the assistance they provided to individuals while awaiting the approval of SSI benefits. 
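The contrast between the two compensation structures described above reduces to simple arithmetic. A minimal sketch (the 25 percent rate and $6,000 cap are the figures cited above for SSA's direct payment; the dollar amounts in the example are hypothetical, for illustration only):

```python
def authorized_fee(past_due_benefits: float, rate: float = 0.25, cap: float = 6000.0) -> float:
    """SSA direct payment to an eligible representative: the lesser of
    25 percent of the claimant's past-due benefits and the $6,000 cap."""
    return min(rate * past_due_benefits, cap)

# A claim with $10,000 in past-due benefits yields a $2,500 fee;
# a long-delayed claim with $40,000 in past-due benefits hits the cap.
print(authorized_fee(10000))  # 2500.0
print(authorized_fee(40000))  # 6000.0
```

Because past-due benefits accumulate while a claim is pending, this fee grows (up to the cap) the longer a claim takes, whereas the flat contract payments described above ($900 to $3,000 per approved claim) do not vary with timing, which is the incentive difference the report notes.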
In order for the state to receive reimbursement, the state must have the claimant sign a written authorization that allows the state to be paid out of the claimant’s past-due benefits. The number of individuals moved onto federal disability programs as a result of the SSI/DI advocacy contracts in all three sites accounted for a small percentage of the total number of approved SSI and DI claims in their respective states or county. Specifically, Minnesota was the largest of the three sites in all respects: amount paid under the contract, geographic reach, and number of approved claims. Yet the 1,112 claims approved statewide in state fiscal year 2013 was relatively small compared to the roughly 24,000 disability claims approved by SSA in the state in calendar year 2012, the most recent year available. Similarly, Hawaii and Westchester County’s 342 and 136 claims approved in state fiscal or contract year 2013, respectively, each represented small proportions of all disability claims approved by SSA in the state or county in calendar year 2012. SSA has a number of controls in place—including rules and regulations—related to appointed representatives in the disability determination process, but it does not have controls specific to organizations providing SSI/DI advocacy services to states and other third parties. SSA’s existing controls over representatives include broad guidelines regarding who may represent disability claimants, including qualifications for attorneys and nonattorneys. SSA regulations also set forth specific rules of conduct that apply to all representatives. For example, representatives are required, with reasonable promptness, to obtain evidence in support of the claim, submit such evidence as soon as practicable, help claimants respond to requests for information from SSA as soon as practicable, and to be familiar with relevant laws and regulations. 
Representatives are prohibited from, among other things, knowingly collecting any fees in violation of applicable law or regulation. In addition, nonattorney representatives who wish to be eligible for direct payment of their fees out of a claimant’s past-due benefits also must satisfy a number of statutory criteria. Nonattorney representatives who do not wish to be eligible for direct payment of their fees, such as those waiving direct payment and working under contract to a state or county, do not have to satisfy these criteria but are still required by SSA’s regulations to be capable of giving valuable help to claimants and to have good character and reputation. SSA’s controls apply to individual representatives, and not to the organizations they work for, including those under contract to states or other third parties, because SSA only conducts business with and recognizes individuals as representatives. In 2008, SSA issued proposed rules that would have recognized organizations, in addition to individuals, as representatives. In other words, under the proposed rules a claimant could appoint an organization or firm to represent them rather than a single individual from that organization. In the proposed rules, SSA stated that the business practices of those who represent claimants have changed, and many representatives practice in group settings and provide their services collectively to claimants. However, the agency did not issue final rules on this topic. SSA officials told us that they still believe that having organizations serve as appointed representatives would be beneficial, but the agency would face challenges implementing this change, including modifying SSA’s current data systems. SSA also does not have readily available data on representatives, particularly those paid by third parties. 
Specifically, SSA’s current data on representatives are limited, kept in separate systems, and are not used to monitor or report trends on claims with representatives (see table 3). In particular, SSA collects less information about representatives the agency does not directly pay out of claimants’ past-due benefits, and information on these representatives is not tracked in SSA’s data systems. Federal government internal control standards state that agencies should have adequate access to timely data and information, and mechanisms in place for routinely assessing risks related to interactions with entities and parties outside the government that could affect agency operations. In order to make timely and accurate decisions, identify trends, and assess risks—including those related to program integrity—SSA needs ongoing and up-to-date information on representatives. This is particularly important given that representatives have become increasingly involved at the initial levels of the disability determination process, according to our analysis of SSA data. SSA has several efforts under way to improve its collection and use of data as well as its ability to assess risks related to representatives. First, SSA recently initiated the Registration, Appointment, and Services for Representatives project, with the goal of providing staff more accurate, up-to-date information about the representatives who assist claimants in the disability process. SSA officials stated that the agency currently captures information on representatives in separate, stand-alone systems that are not well-integrated, which has resulted in concerns about payment inefficiencies and privacy. SSA plans to integrate information from the various systems on representatives, creating one system as the sole source for information on representatives. 
SSA officials told us that the agency may identify new data elements related to representatives to capture in the system, such as the organizations they are associated with, but there currently is no plan to collect this information. Another facet of this initiative involves giving representatives expanded access to the disability eFolder, SSA’s electronic system containing all of the documents pertaining to a disability claim. Once implemented, authorized and registered representatives will have the ability to view documents for their clients contained in the eFolder and download and print them. Officials from two professional organizations of representatives and some SSA staff we interviewed reported that giving representatives access to the eFolder would be beneficial. By requiring representatives to register to gain access, SSA could gather more information on representatives. According to SSA’s vision statement for this project, successful implementation would provide SSA more readily available data—and enhanced abilities to respond to management requests for information—on representatives. However, as of September 2014, SSA officials reported that this project is in the early planning phase, future funding is uncertain, and no timeline for completion has been established. Enhanced collection and use of data on appointed representatives may also be important for planned initiatives related to the detection of potential fraud. SSA is in the early stages of exploring computerized tools to enhance efforts to systematically detect potential fraud. Using data from recent alleged fraud cases involving representatives, SSA plans to use computer analytics to examine various characteristics of disability claims and determine those which may be fraudulent. Known as predictive analytics, these computer systems and tools can help identify patterns of potentially fraudulent disability claims. 
However, as discussed earlier, SSA does not consistently collect some data that may aid in its analytics effort, such as information on the organizations or firms with which individual representatives may be associated. The absence of readily available data on representatives hinders SSA’s ability to detect patterns of potential fraud. Specifically, SSA’s current data systems do not allow staff to identify, in a timely manner, large volumes of claims with the same representative and the same impairments, which can be a risk factor for potential fraud, according to SSA officials we interviewed. SSA does not coordinate its direct payments to representatives with states and other third parties that might also pay representatives. As a result, it is possible that both SSA and a state or third party could pay the representative, resulting in more than one payment. More specifically, under the current system of payments, a representative working under contract to a state could (1) request direct payment from SSA (deducted from the claimant’s past-due benefits) for representing a particular claimant, and (2) also submit an invoice to the state requesting payment under the terms of the SSI/DI advocacy contract. Generally, SSA prescribes the maximum fee allowed, and representatives may not knowingly collect more than the fee that SSA authorizes them to receive for a case. However, we found that in cases involving SSI/DI advocacy payments, representatives might be able to collect payments from the state as well as through SSA fee withholding, totaling more than the authorized amount. Unless SSA and the state or other third party share information on their payments or have policies and procedures in place to prevent such cases, representatives may receive both SSA and state payments that total more than the SSA-authorized fee. In 2007, we reported on this risk of overpayments to representatives and recommended that SSA take steps to address it. 
However, SSA has not fully implemented our recommendation because SSA did not know which states were paying representatives or the true extent of the problem, according to a senior agency official. SSA has taken some steps to clarify authorized payments for representatives. For example, in 2011, SSA revised the form a claimant uses to appoint a representative (form 1696) to more clearly indicate how a representative would like to be paid. Specifically, the updated form requires representatives to declare whether they intend to be paid by (1) the claimant directly, (2) SSA, out of the claimant’s past-due benefits, or (3) a third party. (See fig. 5.) Although the revised form more clearly delineates allowable fee arrangements, SSA officials acknowledged that this overpayment vulnerability still exists. Officials told us that the agency would not know if a representative was paid from another source outside SSA. The agency is dependent upon the claimant or the third party to inform SSA about an overpayment to a representative. Although the updated appointment form makes it clearer that representatives must choose one type of fee arrangement, some SSA staff we interviewed reported that claimants often do not fully understand the forms they are signing or the implications. One state we studied has developed practices in an attempt to avoid these types of overpayments, but these practices are not universal. Officials in Minnesota stated that they recently began requiring contracted organizations to submit copies of their signed form 1696 so the state could verify that the representatives checked the appropriate box for payment. By looking more closely at the award notices SSA sends to claimants and representatives, state officials reported discovering three instances in 2014 when a representative did not check the appropriate box to waive direct payment from SSA and could have received an overpayment. 
Minnesota officials plan to work with a local SSA field office to conduct an audit of a sample of claims to identify such cases. According to a Minnesota official, this effort would begin in December 2014. Officials we interviewed in the other two selected sites reported that they do not require representatives from contracted organizations to submit these signed SSA forms, nor did they have plans to audit claims to detect overpayments. SSA does not systematically coordinate with states and other third parties on the payment of representatives. For example, SSA has not issued guidance to states or third parties or shared any best practices on preventing overpayments. SSA and state officials in Minnesota reported that as SSA expands representative access to the eFolder during the disability determination process, providing controlled third party access could efficiently facilitate the detection of potential overpayments. For example, states could use their access to portions of the eFolder to easily check the form 1696 submitted by the representative and any additional documents, such as fee agreements, to prevent overpayment. However, SSA can only provide access to an eFolder if it is permissible under federal privacy laws. In general, coordination is important because the risk of overpayment goes beyond the 16 states we identified with state or county SSI/DI advocacy contracts. As discussed earlier, we estimated that about 30 percent of all initial disability claims with nonattorney representatives are potentially associated with SSI/DI advocacy contracts held by other third parties, such as hospitals and long-term disability insurers. SSI/DI advocacy, while serving a practical purpose for states, counties, and individuals, raises questions about the role third parties and representatives play in the disability determination process. 
Many of these questions—such as the extent of SSI/DI advocacy and the impact of this practice—cannot be answered because so little data exist. Since representatives are increasingly involved in this process and are working on behalf of a diverse set of third parties, it is critical that SSA management and employees have mechanisms for monitoring trends and patterns related to claims with representatives. SSA anticipates being able to combine data across its systems to analyze variations in data on representatives, but those plans are still under development. SSA’s current efforts also face a number of uncertainties that, if left unaddressed, may undermine the agency’s ability to improve data on representatives. In the absence of readily available data—particularly data on those representatives paid by third parties—SSA is poorly positioned to identify trends or patterns that may present risks to program integrity. One such risk is making overpayments to representatives who are also being paid by third parties. SSA has not taken steps to adequately eliminate this vulnerability. Without enhanced coordination between SSA and third parties, some representatives may improperly receive payments. This financial vulnerability presents a strong case for enhanced oversight over representatives in the disability determination process. As part of initiatives currently under way to improve agency information on claims with appointed representatives and detect potential fraud associated with representatives, the Commissioner of the Social Security Administration should consider actions to provide more timely access to data on representatives and enhance mechanisms for identifying and monitoring trends and patterns related to representation, particularly trends that may present risks to program integrity. 
Specifically, SSA could: Identify additional data elements, or amendments to current data collection efforts, to improve information on all appointed representatives, including those under contract with states and other third parties; Implement necessary policy changes to ensure these data are collected. This could include enhancing technical systems needed to finalize SSA’s 2008 proposed rules that would recognize organizations as representatives; and Establish mechanisms for routine data extracts and reports on claims with representatives. To address risks associated with potential overpayments to representatives and protect claimant benefits, the Commissioner of the Social Security Administration should take steps to enhance coordination with states, counties, and other third parties with the goal of improving oversight and preventing and identifying potential overpayments. This coordination could be conducted in a cost-effective manner, such as issuing guidance to states and other third parties on vulnerabilities for overpayment; sharing best practices on how to prevent overpayments; or considering the costs and benefits, including any privacy and security concerns, of providing third parties controlled access to portions of the eFolder to facilitate the detection of potential overpayments. We provided a draft of this product to the Social Security Administration (SSA) and the Department of Health and Human Services (HHS) for comment. SSA and HHS provided technical comments, which we have incorporated as appropriate. In its written comments, reproduced in appendix III, SSA partially agreed with our two recommendations and raised its overall concern that our report misrepresents and overstates the nature of states’ payments to representatives. 
The agency did not provide any further support for this assertion, and it is unclear on what basis SSA could make this statement, given that officials repeatedly told us during the course of our work that the agency has no information or data on states’ contracts. Our report makes it clear that the full extent of states’ and counties’ SSI/DI advocacy practices is unknown, given the absence of national-level data. Given these limitations, we believe that our work fairly and accurately describes what is known about the extent of SSI/DI advocacy contracts and payments nationwide. SSA also noted that our report did not address other types of SSI/DI advocacy contracts, such as those held by insurance companies. Indeed, it was not within the scope of our report to do so. We did note that other types of SSI/DI advocacy contracts—such as those held by insurance companies or hospitals—represented an estimated 30 percent of initial disability claims with nonattorney representatives in 2010. The prevalence of these SSI/DI advocacy contracts, and the growing involvement of representatives at the initial disability determination level, presents a strong case for SSA to have greater information on these third parties and the payments they may receive. SSA partially agreed with our first recommendation to consider actions to provide more timely access to data on representatives and enhance mechanisms for identifying and monitoring trends and patterns related to representation. SSA acknowledged that the report accurately describes initiatives the agency has underway to improve the use and collection of data related to representatives. SSA stated that, as part of these efforts, the agency may identify additional data elements that may be helpful to collect and consider any necessary policy changes. SSA raised concerns, however, that expanding data collection to a more detailed level could negatively affect other agency priorities. 
We fully acknowledge that SSA has competing priorities and limited resources. With this in mind, we wrote the recommendation to provide SSA flexibility in implementation, including suggesting that the agency leverage current initiatives. We continue to believe that SSA should consider steps to improve available data on appointed representatives to better monitor the involvement of these third parties in the disability determination process. SSA partially agreed with our second recommendation to take steps to enhance coordination with states, counties, and other third parties with the goal of improving oversight and preventing and identifying potential overpayments. In its general comments, SSA stated that its rules allow representatives to receive fee payments, and that any payments made by states are outside of SSA’s authority for oversight purposes. SSA also stated that our report did not provide sufficient evidence to warrant enhanced coordination and noted that the agency takes the necessary actions to recoup fees when it learns of a potential fee violation. Our report notes, however, that SSA is dependent upon the claimant or the third party to inform SSA about an overpayment to a representative. In our audit work in selected states, we also noted three instances when a representative attempted to be paid by SSA and the state. While we recognize that payments made by states to representatives are outside of SSA’s jurisdiction, SSA has established rules of conduct for representatives, and these rules prohibit a representative from collecting fees over the amount SSA has authorized. Enhanced coordination could increase SSA’s and third party payers’ ability to detect potential overpayments. Finally, SSA suggested that we explicitly state in our report that we did not find any indications of fraud committed by representatives working under contracts to states or other third parties (referred to by SSA in its comments as “facilitators”). 
The objectives of this work were focused on (1) identifying the extent to which states are involved in SSI/DI advocacy, (2) examining different approaches to this work, and (3) assessing the key controls that SSA has in place to ensure that organizations working under contract to states and other third parties follow program rules and regulations. As such, we did not have any findings on the extent of any possible fraudulent activity associated with these SSI/DI advocacy contracts. We do note in the report, however, that SSA field office staff we interviewed in our three selected sites generally had positive feedback on their interactions with representatives working under contract to the state or county, and that claims they submitted were of the same or better quality than claims submitted by other representatives. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, the Commissioner of the Social Security Administration, and other interested parties. In addition, the report will be made available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
In conducting our review of state Supplemental Security Income (SSI)/Disability Insurance (DI) advocacy practices, our objectives were to examine (1) what is known about the extent to which states are contracting with private organizations to identify and move eligible individuals from state- or county-administered benefit programs to Social Security disability programs, (2) how SSI/DI advocacy practices compare across selected sites, and (3) the key controls the Social Security Administration (SSA) has in place to ensure these organizations follow SSI/DI program rules and regulations. We conducted this performance audit from September 2013 through December 2014 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides a detailed account of the data sources used to answer these questions, the analyses we conducted, and any limitations we encountered. The appendix is organized into three sections. Each section presents the methods we used for the corresponding objective. Specifically, section I describes the information sources and methods we used to identify state SSI/DI advocacy contracts, estimate the proportion of claims associated with these contracts, and analyze national trends in claims with representatives. Section II describes the information sources and methods we used to explore selected SSI/DI advocacy approaches. Section III describes the information sources and methods we used to assess SSA’s policies and controls related to representatives. To determine the extent to which states are contracting with private organizations for SSI/DI advocacy services, we used a multi-faceted approach. 
Due to the absence of national-level data on SSI/DI advocacy contracts, we combined information from various sources. Specifically, we analyzed data from SSA’s Office of the Inspector General (OIG); performed independent research, including conducting Internet searches and following up on contracts identified in past GAO work; and interviewed government officials, representatives from organizations providing SSI/DI advocacy services, and a wide range of stakeholders and experts. We used data from a 2014 report issued by SSA’s OIG to estimate the percentage of initial claims in 2010 with nonattorney representatives working under a government SSI/DI advocacy contract, as well as the percentage that were potentially working under contract with another third party, such as a hospital or long-term disability insurance company. As part of its report, the OIG selected a random sample of 275 SSI and DI adjudicated claims from the population of 857,855 adjudicated claims with a representative in calendar year 2010, 201 of which were for initial claim determinations. Of these 201 initial claim determinations, 83 were represented by nonattorney representatives, while the remainder had attorney representatives. The OIG used information in the claim files, as well as Internet research, to determine the type of nonattorney representative associated with each sampled claim. The OIG did not conduct similar work for claims with attorney representatives. We independently reviewed and verified the OIG’s work papers for the sampled claims with a nonattorney representative, including selected documents from the electronic claim files. To verify that the OIG’s categorizations of the type of representative were correct, we completed a blind categorization of the type of representative involved in each claim (that is, we completed our own categorization of the type of representative, without first reviewing the OIG’s determination) for the sample of 83 cases. 
A second analyst then confirmed the categorization. We discussed any discrepancies between our categorizations and the OIG’s with the OIG staff who performed the work. We obtained additional information about the claim in several cases and documented the final categorization. Using methods appropriate for a simple random sample, we estimated the percentage of initial claims with determinations in 2010 with nonattorney representatives working under SSI/DI advocacy contracts with government entities, as well as the percentage that were potentially working under contract with another third party, such as a hospital or long-term disability insurance company. Because the sample was selected using a probability procedure based on random selections, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All estimates in this report have a margin of error, at the 95-percent confidence level, of plus or minus 10 percentage points or fewer. Based on our discussions with the OIG and our verification process, we determined that the estimates were sufficiently reliable for the purposes of this report. We also analyzed SSA data extracted from the Appointed Representative Database, the Modernized Claims System, and the Supplemental Security Income Record, for calendar years 2004-2013 to provide information regarding total SSI and DI claims as well as claims with attorney and nonattorney representatives, as context for our findings. We interviewed SSA officials regarding these data and reviewed the computer code SSA used to extract these data, and determined they were sufficiently reliable for these purposes. 
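The interval estimation described above can be sketched with the standard normal approximation for a proportion from a simple random sample. This is an illustrative reconstruction, not the exact procedure used in the audit: the 30 percent point estimate and the 83 sampled claims with nonattorney representatives are figures cited in this appendix, and the finite-population correction is omitted as negligible for a sample of 275 drawn from 857,855 claims.

```python
import math

def srs_proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """95% confidence interval for a proportion estimated from a simple
    random sample, using the normal approximation (no finite-population
    correction, which is negligible when the sampling fraction is tiny)."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)

# Roughly 30 percent of 83 sampled nonattorney-represented initial claims:
lo, hi = srs_proportion_ci(0.30, 83)
print(round((hi - lo) / 2, 3))  # 0.099 — a margin of error just under 10 percentage points
```

This is consistent with the report's statement that all estimates carry a margin of error of plus or minus 10 percentage points or fewer at the 95-percent confidence level.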
To identify states and counties that were likely to have an SSI/DI contract, we followed up on prior GAO work and performed Internet research. Specifically, we contacted officials in the states that, in 2007, reported paying representatives to assist individuals with their SSI claims to determine if these payments were part of a contract and, if so, if the contract was still in place as of 2014. We also performed an Internet search to identify additional SSI/DI advocacy contracts or requests for proposals. Using a uniform set of search terms, we performed this search for all states (and the District of Columbia) for which we did not have information regarding their potential SSI/DI advocacy contracting activity from our interviews (see below). We confirmed the status of these contracts or proposals with state, county, or city officials, as appropriate. To supplement our data analyses and Internet searches, we conducted interviews with a number of stakeholders to learn more about this contracting practice and obtain leads for states that may have current SSI/DI advocacy contracts. Specifically, we interviewed officials from SSA and the Department of Health and Human Services (HHS) to determine what information each agency collected and maintained regarding state contracts for SSI/DI advocacy. Through these interviews, we also explored what other data were readily available that could be used to determine the extent of this contracting practice. To obtain leads on potential state or county contracts, we worked with two professional groups—the National Association of State TANF Administrators and the National Council of Disability Determination Directors—who contacted their members on our behalf. With regard to state or county contracts identified through these interviews and from information provided through these professional groups, we followed up directly with state or county officials to confirm this information. 
To learn more about this contracting practice and obtain leads for states that may have current SSI/DI advocacy contracts, we also interviewed researchers at academic and advocacy organizations. These included:

American Enterprise Institute
American Public Human Services Association
Center on Budget and Policy Priorities
Center for Law and Social Policy
Consortium for Citizens with Disabilities
Federal Reserve Bank of San Francisco
Mathematica Policy Research
MDRC
National Association of Disability Examiners
National Association of Disability Representatives
National Association of State TANF Administrators
National Council of Disability Determination Directors [representing state Disability Determination Services (DDS) directors]
National Council of Social Security Management Associations (representing SSA field office and teleservice center managers)
National Organization of Social Security Claimants’ Representatives
Social Security Advisory Board

In addition, we interviewed representatives from organizations that, based on our preliminary audit work, were providing SSI/DI advocacy services to states or counties. These included Chamberlin Edmonds, the Legal Aid Society of Hawaii, MAXIMUS, Public Consulting Group, and South Metro Human Services. We also interviewed officials from Policy Research Associates, which provides technical assistance, under a contract to the Substance Abuse and Mental Health Services Administration, for the national SSI/SSDI Outreach, Access and Recovery (SOAR) program. In order to obtain in-depth information on the different ways in which states and counties contract with private organizations for SSI/DI advocacy services, we selected a nongeneralizable sample of three sites with SSI/DI advocacy contracts that had an established history of contracting for SSI/DI advocacy services and represented a variety of approaches.
We also selected one state in which the Temporary Assistance for Needy Families (TANF) administering agency and the state DDS were divisions under the same state agency, in light of concerns about potential conflicts of interest (the agency issuing the contract to help people apply for federal disability benefits is under the same state agency as the agency making the decision about eligibility for federal disability benefits). Specifically, we selected (1) a state that contracts with a nonprofit, legal aid organization (Hawaii), (2) a state that contracts with multiple organizations, including for-profit, nonprofit, and legal aid organizations (Minnesota), and (3) a county that contracts with a for-profit company (Westchester County, New York). In each site, we obtained key documents—such as the request for proposals and the signed, current contracts—and data in order to describe the various aspects of the sites’ SSI/DI advocacy practices. For example, we gathered information on how the states or county and their contractors identified potentially eligible individuals, the types of services provided by the organizations to claimants, compensation structures, and other information. We obtained data on the total amounts paid to the contractors in state fiscal year or contract year 2013. We also obtained information on how the site funds its SSI/DI advocacy contracts, and whether any funding was provided through an Interim Assistance Reimbursement (IAR) agreement with SSA. We collected and analyzed available data from the three sites on the number of individuals referred to the contractor and the number of claims filed and approved by SSA in state fiscal year 2013, or the most recent complete year available. We interviewed state/county and contractor officials knowledgeable about the data and compared states’/counties’ and contractors’ reported data and determined the data were sufficiently reliable for our purposes. 
To put these sites’ data on approved claims in context, we also obtained data from SSA on the number of SSI and DI approved claims in each state or county in calendar year 2012, the most recent year these federal data were available. In each site, we also conducted in-depth interviews with (1) the government agency administering the contract, (2) officials from the organization(s) working under the contract, (3) SSA officials in the relevant regional office and at least one field office, and (4) state DDS administrators and staff. In the field offices and state DDSs, we randomly selected staff to interview who met certain qualifications. We conducted these interviews either in person or by phone. We also contacted the state auditors for each state, and in all three sites, they confirmed they had no current work regarding SSI/DI advocacy contracting, nor had they done any work in this area within the past 10 years. Prior to issuing this report, we shared a statement of facts with officials from the state or county agency and the selected contractor(s) in the three sites to confirm that the key information used to formulate our analyses and findings was current, correct, and complete. These entities provided technical comments, which we incorporated, as appropriate. In order to assess the controls SSA has in place related to representatives contracted by third-party organizations to perform SSI/DI advocacy, we reviewed relevant documents and reports, and conducted interviews with key officials from SSA. We reviewed relevant federal laws; proposed and final regulations; program policies and procedures, such as SSA’s Program Operations Manual System; and other program documentation, as well as reports and testimonies from SSA, SSA’s OIG, and the Social Security Advisory Board. We compared SSA’s efforts with its own policies and procedures, federal government internal control standards, and prior recommendations from GAO and the Social Security Advisory Board.
To understand SSA’s policies, procedures, and data controls related to appointed representatives, we interviewed officials in a number of SSA departments in headquarters. These included:

Office of Disability Adjudication and Review
Office of Disability Determinations
Office of Disability Programs
Office of Income Security Programs
Office of the Inspector General
Office of Research, Evaluation, and Statistics
Office of Retirement and Survivors Insurance Systems

To gain additional perspectives on how SSA policies are implemented and challenges regarding appointed representatives in the disability determination process, particularly those under contract to a state or county, we incorporated relevant questions into the interviews conducted in our three selected sites. Also, as noted above, we interviewed representatives from national organizations representing SSA field office managers, administrative law judges, DDS administrators, and DDS examiners.

Approach to SSI/DI advocacy

In the beginning of 2014, Hawaii had a contract with a legal aid organization to provide Supplemental Security Income (SSI)/Disability Insurance (DI) advocacy services statewide. In July 2014, this organization became a subcontractor to a company that performs medical and psychological evaluations for the state’s cash assistance programs. Specifically, the primary contractor is responsible for determining whether applicants and recipients of the state’s General Assistance (GA) and Temporary Assistance for Needy Families (TANF) programs have disabilities that prevent them from engaging in work at a certain level. Previously, the state had two separate contracts for SSI/DI advocacy and medical and psychological evaluations. State officials told us that they combined those services into a single contract, in part, to streamline the referral process for SSI/DI advocacy.
If the primary contractor determines that an individual’s disability meets Social Security criteria, they refer the individual directly to their advocacy subcontractor rather than indirectly through state caseworkers, as was done under the prior contract.

Disability screening process

Previously, a prospective claimant could be referred by a state caseworker or walk into the legal aid office. Referrals now come from the primary contractor. Hawaii’s SSI/DI advocacy subcontractor told us they conduct a screening assessment to obtain basic information—such as information on the individual’s impairments, the doctors they have seen, and medications they are taking—and have the claimant sign key forms, including the Social Security Administration (SSA) form required to formally appoint the advocacy worker as their representative. If an individual does not appear eligible for federal disability benefits, the representative would decline to officially represent them but might provide some assistance.

Assistance filing a claim

Hawaii’s SSI/DI advocacy subcontractor reported that most representatives fill out available portions of the SSA disability application online, such as the DI portion. They call the local SSA field office to schedule an appointment for the claimant to meet with an SSA claims representative to complete the SSI portion of the application, which is not available online. They said representatives typically do not accompany the claimant to the field office, nor do they refer claimants to doctors or medical specialists.

Representation during the disability determination process

The advocacy subcontractor reported that its representatives will provide additional information to SSA or the state Disability Determination Services (DDS) on the claimant’s disabilities or functioning, upon request. The representative may also check to ensure the claimant attends any examinations scheduled by the DDS.
If an initial application is denied, the representative may schedule another appointment with the claimant to review the case and determine whether to file a reconsideration or, later, an appeal.

Approach to SSI/DI advocacy

In 2014, Minnesota contracted with 55 organizations across the state, ranging from small law firms to large for-profit and nonprofit organizations. Some organizations served individuals statewide, while others served specific geographic areas or populations, such as tribal communities. Minnesota’s request for proposals for SSI/DI advocacy services had two components: one for its general SSI/DI advocacy program and another for its SSI/SSDI Outreach, Access, and Recovery (SOAR) program. Minnesota’s SOAR program is based on a national advocacy model that focuses on homeless individuals or individuals at risk of homelessness who have a mental illness and/or a co-occurring substance abuse disorder. Organizations could submit proposals to provide services under one or both components. Minnesota offered higher payments under the SOAR program because, according to state officials, the homeless population requires more intensive services. Specifically, the state provided a $2,500 payment for approved applications that included a complete medical summary report—a key component of the SOAR model.

Disability screening process

Officials at the for-profit contractor we selected—operating under the SSI/DI advocacy component of the contract—reported that they receive referrals from county or hospital caseworkers. Officials at the nonprofit contractor we selected—operating mainly under the SOAR component of the contract—reported that it receives informal referrals from staff at homeless shelters or mental health or urgent care clinics. The for-profit officials also reported having limited access to a state database, which allows them to verify that a referred individual is a recipient of one of the eligible state programs.
Both organizations conduct initial screenings to obtain information, such as the individual’s impairments and work history. The nonprofit organization also gathers information on the individual’s history of homelessness. If it appears that the individual will meet Social Security disability criteria, both organizations’ staff reported that they will meet with the claimant to fill out the application and sign key forms, including the form required to formally appoint the SSI/DI advocacy staff as their representative.

Assistance filing a claim

Representatives from both organizations reported filling out available portions of the application online, such as the DI portion, but they differed in how they completed the SSI portion of the application, which is not available online. Representatives from the for-profit organization fill out the SSI application on behalf of the claimant and either mail or hand-deliver it to the local SSA field office. Representatives from the nonprofit organization typically accompany the claimant to the field office to complete the application and often provide transportation to ensure the claimant attends the appointment. Representatives from both organizations said they typically gather available medical information but refer the claimant to medical providers or specialists, as needed, if the existing records are insufficient. The nonprofit organization also has a psychologist on staff to perform evaluations and psychological testing if existing records are insufficient.

Representation during the disability determination process

Representatives from both organizations work with the claimant to ensure he or she attends any examinations the DDS schedules and provide the DDS, upon request, with additional information on the claimant’s disabilities or functioning. If an initial application is denied, the representatives will review the case with the claimant and determine whether to file a reconsideration or, later, an appeal.
Approach to SSI/DI advocacy

Westchester County’s contractor, a national for-profit organization, performed its SSI/DI advocacy services from its office in another state. Officials from Westchester County and the organization told us that providing services by phone can be particularly beneficial for individuals with severe disabilities.

Contractor(s): for-profit company
Targeted populations: GA (known as Safety Net Assistance)
Approved claims: 136
Compensation structure: payment for each approved claim of $3,000 (adult disability claim), $2,000 (foster care SSI claim), or $1,500 (CDR)

Disability screening process

Westchester County’s SSI/DI advocacy contractor reported that it receives referrals on a monthly basis from the county’s three employment services contractors. According to county officials, these contractors identify people receiving GA or TANF who are unable to work for reasons such as a disability, and provide lists of these people to the SSI/DI advocacy contractor. The advocacy contractor mails a letter to each referred individual, introducing their services and inviting them to call a toll-free number to determine their potential eligibility for Social Security disability benefits. During this screening, a representative from the organization gathers information on the individual’s current medical condition, work history, and educational level. If it appears that the individual will meet Social Security disability criteria, the representative will fill out the application and have the claimant sign key forms, including the form required to formally appoint the SSI/DI advocacy worker as their representative.

Assistance filing a claim

Officials from the advocacy organization said that representatives fill out available portions of the disability applications online, such as the DI application. The representative also fills out the SSI application on behalf of the claimant and mails it to the appropriate SSA field office.
Representatives gather available medical information, but do not refer claimants to additional doctors or specialists. Instead, if claimants have a limited medical history, the representatives will refer them to the county for treatment or request that their physicians provide treatment notes or an assessment of their functioning.

Representation during the disability determination process

Representatives work with the claimant to ensure he or she attends any examinations the DDS schedules and provide the DDS with additional information on the claimant’s disabilities or functioning, upon request. If an initial application is denied, the representative will review the case and schedule a telephone appointment with the claimant to discuss options and determine whether to file a request for a hearing.

Daniel Bertoni, Director, (202) 512-7215 or bertonid@gao.gov.

In addition to the contact named above, Erin Godtland (Assistant Director), Rachael Chamberlin (Analyst-in-Charge), Julie DeVault, Alison Grantham, and Michelle Loutoo Wilson made key contributions to this report. Additional contributors include: James Ashley, James Bennett, David Chrisinger, Rachel Frisk, Alexander Galuten, Monika Gomez, Kimberly McGatlin, Daniel Meyer, Matthew Saradjian, Monica Savoy, Almeta Spencer, Nyree Ryder Tee, Shana Wallace, Margaret Weber, and Candice Wright.
For years, states and counties have helped individuals who receive state or county assistance apply for federal disability programs. Federal benefits can be more generous, and moving individuals to these programs can allow states and counties to reduce their benefit costs or reinvest savings into other services. Some states have hired private organizations to help individuals apply for federal benefits, but the extent and nature of this practice are not well known. GAO was asked to study this practice. This report examines (1) what is known about the extent to which states have SSI/DI advocacy contracts with private organizations, (2) how SSI/DI advocacy practices compare across selected sites, and (3) the key controls SSA has to ensure these organizations follow SSI/DI program rules and regulations. GAO reviewed relevant federal laws, regulations, and program rules; selected three sites to illustrate different contracting approaches; reviewed prior studies, including one by SSA's Office of the Inspector General with a generalizable sample of disability claim files; and interviewed SSA, state, and county officials and contractors. Little is known about the extent to which states are contracting with private organizations to help individuals who receive state or county assistance apply for federal disability programs. Representatives from these private organizations help individuals apply for Supplemental Security Income (SSI) and Disability Insurance (DI) from the Social Security Administration (SSA). Available evidence suggests that this practice—known as SSI/DI advocacy—accounts for a small proportion of federal disability claims. Using a variety of methods, including interviewing stakeholders, GAO identified 16 states with some type of SSI/DI advocacy contract in 2014.
In addition, GAO analyzed a sample of 2010 claims nationwide and estimated that such contracts accounted for about 5 percent of initial disability claims with nonattorney representatives, or about 1 percent of all initial disability claims. Representatives working under contract to other third parties, such as private insurers and hospitals, accounted for an estimated 30 percent of initial disability claims with nonattorney representatives. Three selected sites represented different approaches to SSI/DI advocacy, but were similar in many respects. For example, Minnesota contracted with 55 nonprofit and for-profit organizations, while Hawaii and Westchester County, New York, each had a single contractor: a legal aid organization, and a for-profit company, respectively. At the same time, all three sites targeted recipients of similar state and county programs, such as General Assistance, and generally paid contractors only for approved disability claims, among other similarities. SSA has controls to ensure representatives follow program rules and regulations, but these controls are not specific to those working under contract to states or other third parties and may not be sufficient to assess risks and prevent overpayments—known by SSA as fee violations. Specifically: Despite the growing involvement of different types of representatives in the initial disability determination process, SSA does not have readily available data on representatives, particularly those it does not pay directly. This hinders SSA's ability to identify trends and assess risks, a key internal control. SSA's existing data are limited and are not used to provide staff with routine information, such as the number of claims associated with a given representative. SSA has plans to combine data on representatives across systems, but these plans are still in development. 
SSA does not coordinate its direct payments to representatives with states or other third parties that might also pay representatives, a risk GAO identified in 2007. In cases involving SSI/DI advocacy contracts, a representative may be able to collect payments from both the state and from SSA, potentially resulting in an overpayment—a violation of SSA's regulations. GAO recommends that SSA (1) consider ways to improve data and identify and monitor trends related to representatives, and (2) enhance coordination with states, counties, and other third parties with the goal of improving oversight and preventing potential overpayments. SSA partially agreed with our recommendations and noted that it may consider additional actions related to representatives.
Airports are a linchpin in the nation’s air transportation system. Adequate and predictable funding is needed for airport development. The National Civil Aviation Review Commission—established by Congress to determine how to fund U.S. civil aviation—reported in December 1997 that more funding is needed to develop the national airport system’s capacity, preserve small airports’ infrastructure, and fund new safety and security initiatives. Funding is also needed to mitigate the noise and other negative environmental effects of airports on nearby communities. Airports provide important economic benefits to the nation and their communities. Air transportation accounted for $63.2 billion, or 0.8 percent, of U.S. Gross Domestic Product in 1996, according to the Department of Transportation’s statistics. About 1.6 million people were employed at airports in 1998, according to the Airports Council International-North America. In our own study of airport privatization in 1996, we found that the 69 largest U.S. airports had 766,500 employees (686,000 private and 80,500 public employees). In 1996, tax-exempt bonds, the Airport Improvement Program (AIP), and passenger facility charges (PFC) together provided about $6.6 billion of the $7 billion in airport funding. State grants and airport revenue contributed the remaining funding for airports. Table 1 lists these sources of funding and their amounts in 1996. The amount and type of funding vary with airports’ size. The nation’s 71 largest airports (classified by FAA as large hubs and medium hubs), which accounted for almost 90 percent of all passenger traffic, received more than $5.5 billion in funding in 1996, while the 3,233 other national system airports received about $1.5 billion. As shown in figure 1, large and medium hub airports rely most heavily on private airport bonds, which account for roughly 62 percent of their total funding.
By contrast, the 3,233 smaller national system airports obtained just 14 percent of their funding from bonds. For these smaller airports, AIP funding constitutes a much larger portion of their overall funding—about half. Airports’ planned capital development over the period 1997 through 2001 may cost as much as $10 billion per year, or $3 billion more per year than in 1996. Figure 2 compares airports’ total funding for capital development in 1996 with their annual planned spending for development. Funding for 1996, the bar on the left, is shown by source (AIP, PFCs, state grants, and operating revenues). Planned spending for future years, the bar on the right, is shown by the relative priority FAA has assigned to the projects, as follows:

Reconstruction and mandated projects, FAA’s highest priorities, total $1.4 billion per year and are for projects to maintain existing infrastructure (reconstruction) or to meet federal mandates, including safety, security, and environmental requirements, including noise mitigation requirements.

Other high-priority projects, primarily adding capacity, account for another $1.4 billion per year.

Other AIP-eligible projects, a lower priority for FAA, such as bringing airports up to FAA’s design standards, add another $3.3 billion per year for a total of $6.1 billion per year.

Finally, airports anticipate spending another $3.9 billion per year on projects that are not eligible for AIP funding, such as expanding commercial space in terminals and constructing parking garages.

Within this overall picture of funding and planned spending for development, it is difficult to develop accurate estimates of the extent to which AIP-eligible projects are deferred or canceled because some form of funding cannot be found for them.
FAA does not maintain information on whether eligible projects that do not receive AIP funding are funded from other sources, deferred, or canceled. We were not successful in developing an estimate from other information sources, mainly because comprehensive data are not kept on the uses to which airport and special facility bonds are put. But even if the entire bond financing available to smaller airports were spent on AIP-eligible projects, these airports would have, at a minimum, about $945 million a year in AIP-eligible projects that are not funded. Conversely, if none of the financing from bonds were applied to AIP-eligible projects, then the full $3 billion funding shortfall would apply to these projects. The difference between current and planned funding for development is bigger, in percentage terms, for smaller airports than for larger ones. Funding for the 3,233 smaller airports in 1996 was a little over half of the estimated cost of their planned development, producing a difference of about $1.4 billion (see fig. 3). This difference would be even greater if it were not for $250 million in special facility bonding for a single cargo/general aviation airport. For this group of airports, the $782 million in 1996 AIP funding exceeds the annual estimate of $750 million for FAA’s highest-priority projects—those involving reconstruction, noise mitigation, and compliance with federal mandates. However, there is no guarantee that the full amount of AIP funding will go only to the highest-priority projects, because one-third of AIP funds are awarded to airports on the basis of the number of passengers boarding commercial flights and not necessarily on the basis of projects’ priority. As a proportion of total funding, the potential funding difference between 1996 funding and planned development for the 71 large and medium hub airports is comparatively less than for their smaller counterparts (see fig. 3 and fig.
4). Larger airports’ potential shortfall of $1.5 billion represents 21 percent of their planned development costs, while smaller airports’ potential shortfall of $1.4 billion represents 48 percent of their development costs. Therefore, while larger and smaller airports’ respective shortfalls are similar in size, the greater scale of larger airports’ planned development causes them to differ considerably in proportion. Figure 4 also indicates that $590 million in AIP funding falls $74 million short of the estimated cost to meet FAA’s highest priorities for development—reconstruction, noise mitigation, and compliance with federal mandates. Proposals to increase airport funding or make better use of existing funding vary in the extent to which they would help different types of airports and close the gap between funding and the costs of planned development. For example, increasing AIP funding would help smaller airports more because current funding formulas would channel an increasing proportion of AIP to smaller airports. Conversely, any increase in PFC funding would help larger airports almost exclusively because they handle more passengers and are more likely to have a PFC in place. Changes to the current design of AIP or PFCs could, however, lessen the concentration of benefits to one group of airports. FAA has also used other mechanisms to better use and extend existing funding sources, such as letters of intent, state block grants, and pilot projects to test innovative financing. So far, these mechanisms have had mixed success. Under the existing distribution formula, increasing total AIP funding would proportionately help smaller airports more than large and medium hub airports. Appropriated AIP funding for fiscal year 1998 was $1.7 billion; large and medium hub airports received nearly 40 percent and all other airports about 60 percent of the total.
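The proportional comparison of the two shortfalls follows directly from the surrounding figures. A quick arithmetic check (amounts in billions of dollars; planned development is reconstructed here as 1996 funding plus the stated shortfall):

```python
# 1996 funding and the stated annual shortfalls, $ billions, from the
# surrounding text; planned development = funding + shortfall.
airports = {
    "large/medium hubs": {"funding": 5.5, "shortfall": 1.5},
    "smaller airports":  {"funding": 1.5, "shortfall": 1.4},
}

for name, a in airports.items():
    planned = a["funding"] + a["shortfall"]
    share = a["shortfall"] / planned
    print(f"{name}: ${a['shortfall']:.1f}B shortfall = "
          f"{share:.0%} of planned costs")
```

This reproduces the 21 percent and 48 percent figures cited in the text, showing that similar dollar shortfalls translate into very different proportions of each group's planned development.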
We calculated how much funding each group would receive under the existing formula, at funding levels of $2 billion and $2.347 billion. We chose these funding levels because the National Civil Aviation Review Commission and the Air Transport Association (ATA), the commercial airline trade association, have recommended that future AIP funding levels be stabilized at a minimum of $2 billion annually, while two airport trade groups—the American Association of Airport Executives and the Airports Council International-North America—have recommended a higher funding level, such as AIP’s authorized funding level of $2.347 billion for fiscal year 1998. Table 2 shows the results. As indicated, smaller airports’ share of AIP would increase under higher funding levels if the current distribution formula were used to apportion the additional funds. Increasing PFC-based funding, as proposed by the Department of Transportation and backed by airport groups, would mainly help larger airports, for several reasons. First, large and medium hub airports, which accounted for nearly 90 percent of all passengers in 1996, have the greatest opportunity to levy PFCs. Second, such airports are more likely than smaller airports to have an approved PFC in place. Finally, large and medium hub airports would forego little AIP funding if the PFC ceiling were raised or eliminated. Most of these airports already return the maximum amount that must be turned back for redistribution to smaller airports in exchange for the opportunity to levy PFCs. If the airports currently charging PFCs were permitted to increase them beyond the current $3 ceiling, total collections would increase from the $1.35 billion that FAA estimates was collected during 1998. Most of the additional collections would go to larger airports. 
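The ceiling-increase arithmetic in this section can be sketched with a simple linear model. This is an illustrative reconstruction, not GAO's estimation method: it assumes each $1 above the current $3 ceiling adds a constant increment, using the section's figures of $1.35 billion collected in 1998 and per-dollar increments of $432 million (large and medium hubs) and $46 million (smaller airports).

```python
BASE_1998 = 1.35          # est. 1998 collections at the $3 ceiling, $ billions
PER_DOLLAR_LARGE = 0.432  # added per $1 ceiling increase, large/medium hubs
PER_DOLLAR_SMALL = 0.046  # added per $1 ceiling increase, smaller airports

def projected_collections(ceiling):
    """Total estimated PFC collections ($ billions) at a given ceiling,
    assuming a constant per-dollar increment above the current $3."""
    increase = ceiling - 3
    return BASE_1998 + increase * (PER_DOLLAR_LARGE + PER_DOLLAR_SMALL)

for c in (4, 5, 6):
    print(f"${c} ceiling: about ${projected_collections(c):.1f} billion")
```

The constant-increment assumption lands within about $0.1 billion of the rounded totals GAO reports for the $4, $5, and $6 ceilings ($1.9, $2.4, and $2.8 billion), suggesting GAO's per-dollar figures are themselves rounded.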
For every $1 increase in the PFC ceiling, we estimate that large and medium hub airports would collect an additional $432 million, while smaller airports would collect an additional $46 million (see fig. 5). In total, a $4 PFC ceiling would yield $1.9 billion, a $5 PFC would yield $2.4 billion, and a $6 PFC would yield $2.8 billion in total estimated collections. Increased PFC funding is likely to be applied to different types of projects than would increased AIP funding. Most AIP funding is applied to “airside” projects like runways and taxiways. “Landside” projects, such as terminals and access roads, are lower on the AIP priority list. However, for some airports, congestion may be more severe at terminals and on access roads than on airfields, according to airport groups. The majority of PFCs are currently dedicated to terminal and airport access projects and interest payments on debt, and any additional revenue from an increase in PFCs may follow suit. In recent years, the Congress has directed FAA to undertake other steps designed to allow airports to make better use of existing AIP funds. Thus far, some of these efforts, such as letters of intent and state block grants, have been successful. Others, such as pilot projects to test innovative financing and privatization, have received less interest from airports and are still being tested. Finally, one idea, using AIP grants to capitalize state revolving loan funds, has not been attempted but could help small airports. Implementing this idea would require legislative changes. Letters of intent are an important source of long-term funding for airport capacity projects, especially for larger airports. These letters represent a nonbinding commitment from FAA to provide multiyear funding to airports beyond the current AIP authorization period. Thus, the letters allow airports to proceed with projects without waiting for future AIP grants and provide assurance that allowable costs will be reimbursed. 
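The per-dollar estimates above imply a roughly linear relationship between the PFC ceiling and total collections. The sketch below is an illustration of that arithmetic, not FAA's estimating method: it adds the per-dollar increments to the 1998 base. Its projections land within about $0.1 billion of the figure 5 totals, the difference reflecting independent rounding in the report's estimates.

```python
# Rough linear projection of PFC collections at higher ceilings
# (billions of dollars; an illustration, not FAA's estimating method).
BASE_CEILING = 3.0          # current ceiling, dollars
BASE_COLLECTIONS = 1.35     # FAA's estimated 1998 collections
PER_DOLLAR = 0.432 + 0.046  # large/medium hubs plus smaller airports

def projected_collections(ceiling):
    """Total annual collections assuming each $1 of ceiling adds PER_DOLLAR."""
    return BASE_COLLECTIONS + (ceiling - BASE_CEILING) * PER_DOLLAR

for ceiling in (4, 5, 6):
    print(f"${ceiling:.0f} ceiling: about "
          f"${projected_collections(ceiling):.1f} billion")
```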
Airports may also be able to receive more favorable interest rates on bonds that are sold to finance a project if the federal government has indicated its support for the project in a letter of intent. For a period, FAA stopped issuing letters of intent, but since January 1997, it has issued 10 letters with a total funding commitment of $717.5 million. Currently, FAA has 28 open letters committing a total of $1.180 billion through 2010. Letters of intent for large and medium airports account for $1.057 billion, or 90 percent, of that total. Airports’ demand for the letters continues—FAA expects at least 10 airports to apply for new letters of intent in fiscal year 1999. In 1996, we testified before this Subcommittee that FAA’s state block grant pilot program was a success. The program allows FAA to award AIP funds in the form of block grants to designated states, which, in turn, select and fund AIP projects at small airports and decide how to distribute these funds among them. In 1996, the program was expanded from seven to nine states and made permanent. Both FAA and the participating states believe that they are benefiting from the program. In recent years, FAA, with congressional urging and direction, has sought to expand airports’ available capital funding through more innovative methods, including the more flexible application of AIP funding and efforts to attract more private capital. The 1996 Federal Aviation Reauthorization Act gave FAA the authority to test three new uses for AIP funding—(1) projects with greater percentages of local matching funds, (2) interest costs on debt, and (3) bond insurance. In all, these three innovative uses could be tested on up to 10 projects. Another innovative financing mechanism that we have recommended—using AIP funding to help capitalize state airport revolving funds—while not currently permitted, may hold some promise.
FAA is testing 10 innovative uses of AIP funding totaling $24.16 million, all at smaller airports. Five projects tested the benefits of the first innovative use of AIP funding—allowing local contributions in excess of the standard matching amount, which for most airports and projects is otherwise fixed at 10 percent of the AIP grant. FAA and state aviation representatives generally support the concept of flexible matching because it allows projects to begin that otherwise might be postponed for lack of sufficient FAA funding; in addition, flexible matching may ultimately increase funding to airports. The other five projects test the remaining two mechanisms—interest costs on debt and bond insurance. Applicants have generally shown less interest in these two options, which, according to FAA officials, warrant further study. Some federal transportation, state aviation, and airport bond rating and underwriting officials believe using AIP funding to capitalize state revolving loan funds would help smaller airports obtain additional financing. Currently, FAA cannot use AIP funds for this purpose because AIP construction grants can go only to designated airports and projects. However, state revolving loan funds have been successfully employed to finance other types of infrastructure projects, such as wastewater projects and, more recently, drinking water and surface transportation projects. While loan funds can be structured in various ways, they use federal and state moneys to capitalize the funds from which loans are then made. Interest and principal payments are recycled to provide additional loans. Once established, a loan fund can be expanded through the issuance of bonds that use the fund’s capital and loan portfolio as collateral. These revolving funds would not create any contingent liability for the U.S. government because they would be under state control.
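The recycling effect that makes revolving funds attractive can be illustrated with a toy simulation; all figures here are hypothetical, chosen only to show the mechanism. Because repaid principal becomes new lending capacity, cumulative lending substantially exceeds the initial capitalization.

```python
# Toy simulation of a state revolving loan fund (hypothetical figures,
# in millions of dollars): repayments are recycled into new loans.
capital = 100.0          # initial federal/state capitalization
repayment_rate = 0.10    # share of outstanding principal repaid each year

available, outstanding, total_lent = capital, 0.0, 0.0
for year in range(10):
    total_lent += available                   # lend out all available capital
    outstanding += available
    available = outstanding * repayment_rate  # repayments fund next year's loans
    outstanding -= available

print(f"${capital:.0f}M in capital supports ${total_lent:.0f}M "
      f"of lending over 10 years")
```

Bond issuance against the fund's capital and loan portfolio, as described above, would leverage the capitalization further; that step is omitted from this sketch.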
Declining airport grants and broader government privatization efforts spurred interest in airport privatization as another innovative means of bringing more capital to airport development, but thus far efforts have shown only limited results. As we previously reported, the sale or lease of airports in the United States faces many hurdles, including legal and economic constraints. As a way to test privatization’s potential, the Congress directed FAA to establish a limited pilot program under which some of these constraints would be eased. Starting December 1, 1997, FAA began accepting applications from airports to participate in the pilot program on a first-come, first-served basis for up to five airports. Thus far, two airports have applied to be part of the program. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed airport funding issues, focusing on: (1) the amount airports are spending on capital development and the sources of those funds; (2) how airports' plans for development compare with current funding levels; and (3) the effect various proposals to increase or make better use of existing funding would have on airports' ability to fulfill their capital development plans. GAO noted that: (1) the 3,304 airports that make up the federally supported national airport system obtained about $7 billion from federal and private sources for capital development; (2) more than 90 percent of this funding came from three sources: tax-exempt bonds issued by states and local airport authorities, federal grants from the Federal Aviation Administration (FAA) Airport Improvement Program (AIP), and passenger facility charges paid on airline tickets; (3) the magnitude and type of funding varies with airports' size; (4) the nation's 71 largest airports accounted for nearly 80 percent of the total funding; (5) airports planned to spend as much as $10 billion per year for capital development for the years 1997 through 2001, or $3 billion per year more than they were able to fund in 1996; (6) the difference between funding and the costs of planned development is greater for smaller commercial and general aviation airports than for their larger counterparts; (7) smaller airports' funding would cover only about half the costs of their planned development, while larger airports' funding would cover about four-fifths of their planned development; (8) airports' planned development can be divided into four main categories based on the funding priorities of AIP; (9) about $1.4 billion per year was planned for safety, security, environmental, and reconstruction projects, FAA's highest priorities for AIP funding; (10) another $1.4 billion per year was planned for projects FAA regards as the next highest priority, primarily adding airport capacity; (11) other
projects FAA considers to be lower in priority, such as bringing airports up to FAA's design standards, add another $3.3 billion per year; (12) airports anticipated spending another $3.9 billion per year on projects that are not eligible for AIP funding, such as expanding commercial space in terminals and constructing parking garages; (13) several proposals to increase or make better use of existing funding have emerged in recent years, including increasing the amount of AIP funding and raising the maximum amount airports can levy in passenger facility charges; (14) under current formulas, increasing the amount of AIP funding would help small airports more than larger airports, while raising passenger facility charges would help larger airports more; and (15) other initiatives, such as AIP block grants to states, have had varied success, but none appears to offer a major breakthrough in reducing the shortfall between funding and airports' plans for development.
The FUDS program is carried out by 22 Corps districts located throughout the nation. DOD carries out its roles and responsibilities in cleaning up FUDS primarily under the Defense Environmental Restoration Program, which was established by section 211 of the Superfund Amendments and Reauthorization Act of 1986. Under the environmental restoration program, DOD is authorized to identify, investigate, and clean up environmental contamination at FUDS. The U.S. Army, through the Corps, is responsible for these activities and is carrying out the physical cleanup. DOD is required, under the Defense Environmental Restoration Program, to consult with the Environmental Protection Agency (EPA), which has its own authority to act at properties with hazardous substances. In general, EPA is the primary regulator for the 21 FUDS properties on EPA’s list of the most dangerous hazardous waste sites in the country—the National Priorities List. States are typically the primary regulators for FUDS properties that have hazardous and other wastes but have not been placed on the National Priorities List. To determine if a property is eligible for cleanup under the FUDS program, the Corps conducts a preliminary assessment of eligibility. This assessment determines if the property was ever owned or controlled by DOD and if hazards caused by DOD’s use may be present. If the Corps determines that the property was at one time owned or controlled by DOD but does not find evidence of any hazards caused by DOD, it designates the property as “no DOD action indicated” (NDAI). If, however, the Corps determines that a DOD-caused hazard that could require further study may exist on a former DOD-controlled property, the Corps begins a project to further study and/or clean up the hazard. FUDS cleanup projects fall into one of four categories, depending on the type of hazard to be addressed. 
Hazardous waste projects address hazardous, toxic, and radioactive substances, such as paints, solvents, and fuels. Containerized waste projects address containerized hazardous, toxic, and radioactive waste associated with underground and aboveground storage tanks, transformers, hydraulic systems, and abandoned or inactive monitoring wells. Ordnance and explosive waste projects involve munitions, chemical warfare agents, and related products. Unsafe buildings and debris projects involve demolition and removal of unsafe buildings and other structures. The type and extent of the work that the Corps may need to perform at a project depend on the project category. Hazardous waste and ordnance and explosive waste projects involve a site inspection to confirm the presence, extent, and source of hazards; a study of cleanup alternatives; the design and implementation of the actual cleanup; and long-term monitoring to ensure the success of the cleanup. Containerized waste and unsafe buildings and debris projects, on the other hand, may involve only the design and implementation of the cleanup. While federal law requires DOD and the Corps to consult with regulators, including states and EPA, during the FUDS cleanup program, it does not define consultation. Similarly, the two primary DOD and Corps guidance documents for implementing the FUDS program emphasize the need for Corps coordination with regulators but do not provide clear direction or specific steps for involving regulators in the FUDS program. Our survey results show a lack of consistent coordination between the Corps and regulators throughout the history of the program that could be caused by the lack of specific requirements that state explicitly what the Corps needs to do to involve regulators. According to DOD, ongoing development of regulations that will revise the Corps’ FUDS Program Manual will provide clear direction and specific steps for involving regulators in the FUDS program. 
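The eligibility outcomes and the category-dependent scope of work described above can be summarized in a short sketch; the function and category labels are our own shorthand, not Corps terminology.

```python
# Sketch of the FUDS eligibility outcomes and per-category work phases
# described in the text (names are illustrative shorthand).
FULL_PHASES = ["site inspection", "study of cleanup alternatives",
               "cleanup design and implementation", "long-term monitoring"]
CLEANUP_ONLY = ["cleanup design and implementation"]

PHASES_BY_CATEGORY = {
    "hazardous waste": FULL_PHASES,
    "ordnance and explosive waste": FULL_PHASES,
    "containerized waste": CLEANUP_ONLY,
    "unsafe buildings and debris": CLEANUP_ONLY,
}

def preliminary_assessment(dod_owned_or_controlled, dod_hazard_found):
    """Outcome of the Corps' preliminary assessment of eligibility."""
    if not dod_owned_or_controlled:
        return "not eligible for the FUDS program"
    if not dod_hazard_found:
        return "no DOD action indicated (NDAI)"
    return "begin FUDS project"

print(preliminary_assessment(True, False))
print(PHASES_BY_CATEGORY["ordnance and explosive waste"])
```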
Federal law requires DOD and the Corps to consult with regulatory entities in carrying out the FUDS program. Under 10 U.S.C. 2701, the Corps must carry out the FUDS program “in consultation with” EPA. However, this section does not define consultation, mention the state regulators, or prescribe specific steps for the Corps to follow. More specific language regarding consultation as it relates to the cleanup of hazardous substances is provided in 10 U.S.C. 2705. At projects involving hazardous substances, the Corps must notify EPA and appropriate state officials and provide them an opportunity to review and comment on activities associated with (1) discovering releases or threatened releases of hazardous substances at FUDS, (2) determining the extent of the threat to public health and the environment that may be associated with such releases, (3) evaluating proposed cleanup actions, and (4) initiating each distinct phase of cleanup. In addition, CERCLA has specific consultation requirements for properties on the National Priorities List, including the 21 FUDS on the list for which EPA is the primary regulator. For many of these FUDS, EPA and DOD have signed a cleanup agreement stating that the two agencies agree on the nature of the cleanup action and the schedule for its completion. DOD and the Corps have two major guidance documents for implementing the FUDS program: the DOD Management Guidance for the Defense Environmental Restoration Program and the FUDS Program Manual. The DOD Management Guidance pertains to all DOD environmental cleanup activities, including FUDS cleanup. It contains general guidance for the Corps’ coordination activities. According to the guidance, DOD is fully committed to the substantive involvement of state regulators and EPA throughout the FUDS cleanup program and encourages cooperative working relationships. The latest version of the guidance, published in September 2001, emphasizes a greater need for coordination with regulators. 
For example, the guidance states that the Corps shall establish communication channels with regulatory agencies; provide regulators access to information, including draft documents; establish procedures for obtaining pertinent information from regulators in a timely manner; and involve regulatory agencies in risk determination, project planning, completion of cleanup activities, and other tasks. Although the updated DOD Management Guidance articulates general steps that, if taken, would improve coordination between the Corps and regulatory agencies, the guidance does not specify procedures on how to take these steps. Further, some of the language is ambiguous and open to broad interpretation. For example, “establish communication channels” could mean anything from a telephone call once a year to weekly meetings. The second guidance document, the FUDS Program Manual, constitutes the Corps’ primary guidance for the program. Regarding coordination, the manual suggests, and sometimes requires, among other things, that the Corps notify states and EPA of discovery and cleanup activities related to FUDS; ensure that states and EPA have adequate opportunity to participate in selecting and planning cleanup actions and in defining cleanup standards for FUDS projects; coordinate all cleanup activities with appropriate state regulatory agencies and EPA; conduct cleanups of hazardous waste projects consistent with section 120 of CERCLA, which addresses cleanups of federal facilities; and try to meet state and EPA standards, requirements, and criteria for environmental cleanup where they are consistent with CERCLA. Beyond generally restating statutory requirements, however, the FUDS Program Manual provides no clear, specific guidance to its program managers on how to implement those steps and coordinate consistently with regulators.
For example, “coordinate all cleanup activities” needs to be defined, and how to carry out and maintain such coordination on a day-to-day basis should be described more clearly. According to DOD and Corps officials, the draft Engineer Regulation that is being developed to revise the FUDS Program Manual includes specific instructions for review of draft preliminary assessments of eligibility by regulators. Officials added that they are open to further suggestions to improve coordination and consultation with regulators. Although coordination is required during the cleanup phase for hazardous and containerized wastes, responses to our survey of FUDS properties covering FUDS work that took place during the period from 1986 through 2001 indicate that state project managers believe the Corps coordinated with them, on average, 34 percent of the time during cleanup, while the Corps believes it coordinated with states an average of 55 percent of the time during cleanup. Moreover, state and Corps respondents agree that coordination was better for projects in our sample that addressed hazardous substances than for projects that did not. For example, according to state respondents to our survey, coordination for hazardous waste projects was more than 25 percent higher than for ordnance and explosive waste projects. (See table 1.) For additional survey results, such as the percent of cases where respondents felt there wasn’t any coordination or gave “don’t know” responses, see appendix II. Despite the greater coordination for projects addressing hazardous substances, the Corps is not involving the states consistently. For example, for projects addressing hazardous substances, the Corps is required by law to inform states before starting each phase of any action and to provide states an opportunity to review and comment on proposed cleanup actions.
However, according to the states, the Corps informed them of upcoming work at these hazardous waste projects 53 percent of the time and requested states’ input and participation 50 percent of the time. As shown in table 1, while the Corps thought it had coordinated at a higher rate, it was still less than the required 100 percent. The fact that DOD and Corps guidance does not offer specific requirements that describe exactly how the Corps should involve regulators could be a factor behind the historical lack of consistency in Corps coordination with regulators. The DOD Management Guidance and FUDS Program Manual are silent on regulators’ roles in preliminary assessments of eligibility, during which decisions on property eligibility and the need for cleanup are made, in part because the law requiring consultation with regulators is broad and does not mention consultation with the states, only with EPA. The Corps has historically regarded preliminary assessments of eligibility as an internal matter that does not require coordination with regulators. However, according to DOD, the draft Engineer Regulation, which will revise the FUDS Program Manual, will require the Corps to share information with the states, EPA, and local authorities during the development of the preliminary assessment of eligibility and will solicit their input. According to the results of our survey, the state project managers believe the Corps coordinated with them about 6 percent of the time, and the Corps project managers believe the Corps coordinated with states about 27 percent of the time. (See table 2.) As a result, there is no consistent coordination at this stage of the FUDS program. For additional survey results, such as the percent of cases where respondents felt there wasn’t any coordination or gave “don’t know” responses, see appendix II. 
Also, according to state and Corps respondents to our current survey, the Corps provided final reports on its preliminary assessments of eligibility to state regulators in 48 and 56 percent of the cases, respectively. In the past, states were only notified after the fact about the results of preliminary assessments of eligibility; however, the Corps said that although not required by its current guidance, its current practice is to coordinate all new preliminary assessments of eligibility with states. Consistent with this change, FUDS program officials in 12 of the 27 states we contacted told us that there has been some improvement in overall Corps coordination during the preliminary assessment of eligibility over the last 3 years. In particular, those states told us that while the Corps is still not required to coordinate with them during its preliminary assessments of eligibility, it has been doing a better job of providing them with draft and final reports on the outcomes of preliminary assessments of eligibility. Over approximately the last 3 years, states have noted an overall improvement in the Corps’ coordination with them. For example, FUDS program officials in 20 of the 27 states we contacted reported that, overall, Corps coordination with them has improved during this time. The main factors state officials cited for the improvement include an increase in the number of meetings they were invited to attend with Corps project managers on specific project tasks, more information provided by the Corps to the states regarding project work, and better coordination in setting work priorities. DOD and the Corps started to take steps to address the coordination issue in response to the concerns that the states began to voice in the late 1990s about their lack of involvement in the FUDS program. Initially, DOD’s efforts consisted of steps such as sponsoring conferences to encourage greater coordination between the Corps and regulators.
Individual Corps districts also took steps to improve coordination. As part of the efforts to improve coordination, the Deputy Assistant Secretary of the Army for Environment, Safety and Occupational Health, along with members of the regulatory community, formed the FUDS Improvement Working Group in October 2000 to address FUDS program concerns and to improve communication among the Corps, the regulators, and other parties with an interest in FUDS cleanup. The working group, which consisted of DOD, Corps, state, EPA, and tribal representatives, compiled a list of issues to be addressed through better communication and consistent coordination, including the role of regulators in setting priorities and planning work at FUDS properties and in the final closeout of properties after cleanup. Two results of the working group’s efforts to improve coordination are new Army guidance and a pilot program. First, in April 2001, Army headquarters sent a memorandum to Corps divisions and districts responsible for FUDS work requiring them to follow specific steps when dealing with regulators during the FUDS cleanup program. For example, the memorandum required the Corps to inform states of FUDS that are likely to go through a preliminary assessment of eligibility; provide states with updated lists of all ongoing and future activities at FUDS properties; involve states in setting priorities for FUDS work; provide states a final list of FUDS that will undergo some type of work in the coming year; inform states of any Corps deviation from planned work and provide them with the rationale for any such changes; and involve states in developing the final report of the preliminary assessment of eligibility. The Corps considers this directive to be a first step in improving the states’ somewhat negative perceptions of the FUDS program and overall communication between the Corps and the states.
The directive addresses many state concerns, including lack of information about which FUDS properties the Corps is working on, involvement in and information about preliminary assessments of eligibility and their outcomes, and state regulatory involvement in setting priorities for Corps FUDS work. However, after almost 2 years, the memo’s conclusions have not been incorporated in either DOD’s Management Guidance or the Corps’ FUDS Program Manual. According to DOD, the Corps is now in the process of revising the FUDS Program Manual as an Engineer Regulation to include specific requirements for Corps district coordination with EPA and state regulators. The second result from the working group is a pilot program developed by the Army in March 2001 under which the Corps and regulatory agencies, including states and EPA, jointly prepare statewide Management Action Plans for FUDS properties. Specifically, for each state participating in the pilot, information provided by EPA, state regulators, and other relevant parties is consolidated on each FUDS property in the state to prepare a statewide Management Action Plan. Each state plan provides a coordinated strategy for investigating and cleaning up FUDS that identifies the key participants and their roles at FUDS cleanups, provides an inventory of all FUDS located in the state, sets priorities for cleaning up FUDS properties and projects, and develops statewide work plans. Overall state reaction to this pilot has been favorable. FUDS project managers in 19 of the 27 states that we contacted believe that this pilot will improve future communication between the Corps and the states. To date, the four states that participated in the initial phase of the pilot— Colorado, Kansas, Ohio, and South Dakota—have statewide plans. The plans’ approaches vary to address each state’s unique circumstances. 
For example, the Kansas plan was very detailed, covering the status of state and federal environmental programs and of the FUDS program and providing details about Kansas FUDS properties. Conversely, the South Dakota and Colorado plans focused only on regulator and budget issues. Corps officials stated that they receive input from state representatives of organizations in the working group regarding whether the pilot has been successful. Recognizing that variation in how states develop these Management Action Plans might be appropriate, DOD says that it plans to work with the FUDS Improvement Working Group to evaluate the success of the pilot and determine best practices that could be shared with the nine additional states that participated in the second round of the pilot during fiscal year 2002: Alaska, Arizona, Massachusetts, Missouri, North Carolina, South Carolina, Texas, Virginia, and Wyoming. DOD views the pilot as a success and plans to continue the development of statewide Management Action Plans for an additional six states during fiscal year 2003: Alabama, Hawaii, Michigan, New Mexico, New York, and Washington. As part of this effort, DOD plans to develop a format that meets the needs of each particular state. Corps officials stated that the Corps will highlight the minimum elements that must be in a statewide Management Action Plan but will not dictate the plan’s exact format. In addition to the DOD and Corps efforts taken to improve coordination, individual Corps districts also took steps to improve coordination with the states in which they operate, as follows: The Alaska district began sharing with state regulators backup documents related to its preliminary assessments of eligibility and inviting regulators to accompany district officials on site visits during the preliminary assessments of eligibility.
The Alaska district now also involves state regulators in developing work plans and is in the process of establishing formal procedures to achieve project and property closeouts that are jointly agreed upon by the Corps and the state. The Louisville district, in response to state concerns, began to reassess its previous NDAI determinations at Nike missile sites. Since 1998, the Kansas City district has been holding quarterly meetings with states and EPA to establish lines of communication between the Corps and regulators; the district has also entered into memorandums of agreement with states and EPA outlining roles and responsibilities for each. The Fort Worth district invited interested parties, including officials from another district and state regulators, to its June 2001 meeting to set priorities and plan FUDS work for the upcoming year. The Honolulu district and EPA Region 9 cochair meetings semiannually to foster communication on the FUDS program in the Pacific area. The Baltimore district provided electronic copies of all preliminary assessment of eligibility reports to Delaware, Maryland, Pennsylvania, and Washington, D.C., in 1999; similarly, the Norfolk district provided most, if not all, such reports to the state of Virginia. While these individual district efforts may yield positive results, the Corps has not assessed these efforts to determine if any might be candidates for Corps-wide implementation. The Corps believes it is a best practice to allow individual districts and regulators to work out mutually agreed-to levels of coordination. However, without adequate guidance, direction, and a menu of best practices for districts to choose from, inconsistent and inadequate coordination may result. To better promote greater and more consistent coordination with regulators, DOD and the Corps will need to assess the success of individual district efforts to determine which lessons learned from these activities should be included in program guidance.
Some state regulators, who are responsible for ensuring that applicable environmental standards are met at most FUDS properties, believe that inadequate Corps coordination has made it more difficult for them to carry out their regulatory responsibilities. Also, state regulatory officials told us that they have frequently questioned Corps cleanup decisions because they have often not been involved in or informed about Corps actions at FUDS. Conversely, they told us that when Corps coordination has occurred, states have been more likely to agree with Corps decisions. At the federal level, EPA and the Corps do not share the same view on EPA’s role in the FUDS program. EPA believes that it should play a greater role at the 9,000 FUDS that are not on the National Priorities List, while the Corps believes that EPA’s role should remain limited to those FUDS that are on the National Priorities List. Some state regulators we contacted believe that when the Corps does not inform them of its FUDS cleanup activities or involve them in the various stages of the FUDS program, they do not have the information necessary to ensure that applicable cleanup standards have been met and that the cleanup actions will protect human health and the environment. They were particularly concerned about the preliminary assessment of eligibility stage of the program and hazards such as ordnance and explosive waste, for which the statutory requirement of “consultation with EPA” (10 U.S.C. 2701) is broad and undefined. Further, the law does not mention consultation or coordination with state regulators. State regulators told us that coordination through all stages of the program was valuable and helped them develop confidence in Corps decisions. With regard to the preliminary assessment of eligibility, FUDS program officials in 15 of the 27 states we contacted expressed specific concerns regarding their limited involvement during this stage of the program.
One concern, which was raised by 12 of these officials, was that Corps activities are taking place without their knowledge or involvement. Our past work has shown the results of this lack of coordination. Our August 2002 report noted that because the Corps historically did not consult states during its preliminary assessment of eligibility, states did not discover until after the fact, in some cases years later, that the Corps had determined that more than 4,000 properties required no further DOD study or cleanup action. Moreover, in several cases in which DOD had made an NDAI determination without involving the states, DOD-caused hazards were later identified, and the Corps had to reassess the properties and conduct cleanup work. At Camp O’Reilly in Puerto Rico, for example, the Corps made an NDAI determination after it conducted a preliminary assessment of eligibility that did not include a review of state historical information on the use of the property. Several years later, the then-owner of the property identified DOD-caused hazards at the property. This led to a more comprehensive Corps assessment that found serious threats to drinking water sources and other hazards that required cleanup under the FUDS program. Another concern about the preliminary assessment of eligibility voiced by officials in 17 of the 27 states we contacted is that the Corps has not adequately supported and documented its NDAI decisions, and it has not involved states in developing them. Because of their lack of involvement and what states perceive as a lack of adequate support for such Corps decisions, these states believe they have little assurance that the Corps performed adequate work during its preliminary assessments of eligibility and that NDAI properties are, in fact, free of DOD-caused hazards. 
Our survey of 519 FUDS properties also showed that, historically, states approved of Corps NDAI determinations in only 10 percent of the cases; in 70 percent of the cases, state respondents could not say whether they agreed or disagreed with the determination. With regard to ordnance and explosive waste projects, one of the types of projects states told us were most important to them, interviews with the 27 state FUDS program officials indicated that they were satisfied with the Corps’ work on such projects in only 11 percent of the cases. This lack of satisfaction could be, at least partially, the result of the relatively low levels of state involvement in these projects. According to state survey respondents, the Corps involved them, on average, in 23 percent of ordnance and explosive waste projects. Corps guidance currently focuses coordination on hazardous waste and does not specifically address coordination of ordnance and explosive waste projects. However, according to DOD, the draft Engineer Regulation that revises the FUDS Program Manual includes specific requirements for district coordination with regulators on such projects. States also have various concerns about their limited involvement in the FUDS work that occurs after the preliminary assessment of eligibility. For example, FUDS program officials in 7 of the 27 states believe that being more involved in setting priorities for the Corps’ project work could help ensure that riskier sites were addressed in a timely manner. Further, officials in 9 of the states we contacted said that when they are not involved in project and property closeouts—the points at which the Corps concludes that all its cleanup work has been completed—state regulatory agencies have no assurance that Corps actions have met state cleanup requirements. Finally, when the Corps has coordinated with states, states have been less likely to doubt the validity of Corps decisions and the adequacy of Corps cleanup activities. 
According to our survey results, for example, when states received final reports from the Corps, they agreed with Corps decisions regarding the risk posed by a hazard, the characteristics of the site, and the cleanup standards selected in 53 percent of the cases and disagreed in only 13 percent. On the other hand, when states did not receive such documentation, they agreed with Corps decisions in only 11 percent of the cases, disagreed in 15 percent, and did not know enough to offer an opinion in 74 percent of the cases. Similarly, according to some state FUDS program officials, as Corps coordination with states has improved over the past 3 years, states’ acceptance of Corps decisions has increased. For example, only one of the 27 state FUDS program officials we contacted generally agreed with Corps NDAI decisions that were made before the last 3 years. On the other hand, eight of these officials told us that they agree with recent NDAI decisions that were made during the last 3 years. EPA has historically had little involvement in the cleanup of the approximately 9,000 FUDS that are not on its National Priorities List and for which EPA is usually not the primary regulator. In the late 1990s, at the request of some states, tribes, members of the general public, and others, EPA increased its focus on environmental investigations and cleanups of privately owned FUDS. In some cases, this has led to disagreements between EPA and the Corps and required added efforts on the parts of both agencies to reach agreement on how cleanup should be conducted. As EPA’s knowledge of the FUDS program and how it is carried out by the Corps grew, EPA focused its attention on various issues, including the following: EPA, the Corps, and state regulators all have differing views of EPA’s role at FUDS that are not on the National Priorities List. EPA believes that, in certain instances, it should have a greater role at FUDS that are not on the National Priorities List. 
DOD, citing its statutory responsibility to carry out the FUDS program and a delegation of CERCLA authority under an executive order, maintains that it is the sole administrator of the FUDS program. States, which are responsible for regulating cleanup at most FUDS, have varying opinions on what EPA’s role in FUDS cleanup should be. Several states would like to see EPA become more involved in the cleanup process, for example, by participating in preliminary assessments of eligibility or providing states with funds to review Corps work. Other states believe EPA’s role is about right or that EPA has no role in the process unless a state invites it to participate. The way the Corps is to administer the FUDS cleanup program has also been interpreted differently by the agencies. Specifically, 10 U.S.C. 2701 requires that the Corps perform work at FUDS projects involving hazardous substances “subject to and in a manner consistent with” section 120 of CERCLA, which addresses the cleanup of federal facilities. Section 2701 also requires the Corps to carry out response actions involving hazardous substances in accordance with the provisions of the Defense Environmental Restoration Program and CERCLA. However, EPA and the Corps disagree on the meaning of these requirements. EPA contends that the Corps should follow CERCLA regulations (the National Contingency Plan) and the EPA guidance used to clean up non-FUDS properties under CERCLA. DOD maintains its right to establish and follow its own procedures for determining project eligibility under the Defense Environmental Restoration Program, as long as it performs response actions in a manner consistent with its authorities under the Defense Environmental Restoration Program and CERCLA. EPA believes that DOD’s preliminary assessments of eligibility should be as comprehensive as the preliminary assessments that EPA conducts on non-FUDS properties. 
EPA’s CERCLA-based preliminary assessments investigate entire properties for hazards, identifying the source and the nature of hazards and the associated risks to human health and the environment—information EPA needs to determine whether properties qualify for placement on the National Priorities List. In contrast, DOD’s preliminary assessments of eligibility focus on determining whether the properties are eligible for cleanup under the FUDS program and whether DOD-caused hazards may exist. According to DOD, it collects information limited to DOD-related hazards in accordance with the limits of its authorities under the Defense Environmental Restoration Program. The FUDS Program Manual states that DOD’s preliminary assessment of eligibility is not intended to be equivalent to the CERCLA preliminary assessment. DOD officials said that the draft Engineer Regulation, which revises the FUDS Program Manual, addresses EPA concerns about coordination during the preliminary assessment of eligibility. DOD views preliminary assessments of eligibility as internal agency documents for which there is no coordination requirement and has generally not coordinated these assessments with EPA. As a result, according to EPA officials, EPA often does not have access to the information necessary for deciding whether a property should be included on the National Priorities List. Consequently, EPA cannot be assured that significant hazards to human health and the environment that could warrant listing do not exist at a property, and EPA may need to conduct its own, more comprehensive, preliminary assessment under CERCLA. Because of its focus on these issues, EPA re-evaluated its approach to addressing privately owned FUDS, and, in March 2002, issued a policy for addressing privately owned FUDS that are not on the National Priorities List. 
The policy, issued to EPA’s regional offices to clarify the agency’s role at these FUDS, outlines a framework for coordinating with the Corps and EPA’s expectations for Corps consultation with it under the Defense Environmental Restoration Program. For example, EPA would like the Corps to (1) involve it to a greater extent in FUDS work, such as preliminary assessments of eligibility; (2) provide EPA, state regulatory agencies, and other interested parties reasonable opportunities for meaningful review of and comment on major decision documents, as well as documents associated with carrying out specific FUDS activities, such as work plans and sampling and analysis plans; and (3) respond in writing to comments from EPA, the states, and others and show how it has addressed the comments or, if it has not, explain why not. Overall, EPA believes that a better-coordinated effort among all parties, as discussed in its policy, would improve the effectiveness of cleanup at FUDS and increase public confidence in the actions taken at these sites. EPA’s policy also emphasizes that EPA does not expect its involvement to be consistent across all phases of work; rather, it would increase its involvement at a site when conditions warranted—for example, if there were “imminent and substantial endangerment” or if EPA had concerns about the appropriateness of the cleanup. DOD disagrees with much of EPA’s new policy. For example, in commenting on EPA’s draft policy, DOD requested that EPA delete from it numerous references to EPA’s “oversight” and “review.” DOD, citing its statutory responsibility to carry out the FUDS program and referring to a delegation of CERCLA authority under an executive order, maintains that the FUDS program is solely its program to administer. DOD also maintains that 10 U.S.C. 2701, which provides for EPA’s consultation role under the FUDS program, does not provide authority for EPA concurrence or oversight of the program. 
According to DOD, EPA’s role should be limited to FUDS for which EPA is the lead regulator—that is, primarily FUDS that are on the National Priorities List. Without an agreement on roles and responsibilities, DOD and EPA have been unable to establish an effective working relationship on FUDS or have had to undertake extra efforts to come to an agreement on how a cleanup should be conducted. An example of this is the Spring Valley FUDS in Washington, D.C., where the U.S. Army operated a research facility to test chemical weapons and explosives during World War I. Because the site was a formerly used defense site, DOD has responsibility for cleaning up the site under the Defense Environmental Restoration Program. However, under CERCLA, EPA has its own authority to act at the site, including conducting investigations and removal actions. Further, under EPA’s FUDS policy, EPA can take a more active role at FUDS if conditions warrant. According to EPA officials, if a site is not on the National Priorities List and there is no imminent danger to the public or the environment, EPA may limit its role. Early in the 1980s, the specific role of the two federal agencies at the Spring Valley site led to some confusion and disagreement about the cleanup approach and the standards to be applied. Over time, the federal agencies and the District of Columbia government formed a partnership to reach agreements on cleanup at the site. While the partners have not agreed on all cleanup decisions, they acknowledged, as of June 2002, that the partnership was operating effectively. Further, officials acknowledged that forming the partnership has provided a means to foster communication and collaboration. While state regulators reported to us that the Corps has improved its coordination with them, more can be done in five areas to build on those successes. 
First, our work has shown that many states would like to be more involved in the preliminary assessment of eligibility stage of the program. The program guidance is silent on regulators’ roles in preliminary assessments of eligibility, in part because the law’s consultation requirement is broad and mentions only EPA, not the states. The Corps has regarded preliminary assessments of eligibility as an internal matter and has done little to coordinate with regulators during the assessment. As a result, regulators believe their ability to ensure that decisions about FUDS properties and projects meet environmental standards and protect the public from environmental contamination has been hindered. As we were completing our work, DOD and Corps officials told us that they are in the process of revising the FUDS Program Manual as an Engineer Regulation that would include requirements for coordination during preliminary assessments. Following through with this plan is critical to clearly establish that coordination is required and lay out what steps need to be taken to ensure that it occurs. Second, as the Corps updates its program guidance, incorporating the more specific requirements sent out in an April 2001 memorandum would help ensure that coordination requirements are clear. Better clarity could also result from a re-examination and clarification of existing DOD and Corps FUDS program guidance documents that are general in nature and contain ambiguous language. Third, DOD and Corps efforts have been directed at improving coordination on hazardous waste projects but could be enhanced by also requiring coordination for ordnance and explosive waste cleanup, which can pose significant safety and health risks and in which many of the states want to be more involved. 
However, DOD states that it addresses coordination requirements at ordnance and explosive waste projects in its draft Engineer Regulation, which replaces the FUDS Program Manual. Fourth, while the Corps has made various agencywide efforts to improve coordination with regulators, such as its state management plans pilot program, many beneficial coordination efforts have also occurred at Corps districts through the initiative of individual Corps personnel. Evaluating these district efforts and agencywide initiatives to incorporate successful ones into its operating procedures for the FUDS program as a whole would establish best practices and allow the entire program to benefit from individual efforts. Finally, at the federal level, EPA and the Corps disagree about EPA’s role in the cleanup of more than 9,000 FUDS that are not on the National Priorities List. Reaching agreement on these roles and expectations for coordination is essential for establishing an effective working relationship on FUDS. The lack of a good working relationship between two federal cleanup agencies may hamper efforts to properly assess properties for cleanup and may, in some cases, result in some duplication of effort—for example, when EPA has to reassess the properties to determine if they merit placement on the National Priorities List. In addition, while the partnership formed by the two agencies at the Spring Valley FUDS demonstrates that the agencies can work together, that is not the norm for the FUDS program, as evidenced by EPA’s March 2002 FUDS policy and DOD’s response to it. Further, even if the agencies were able to negotiate partnerships or memorandums of understanding for individual FUDS properties, that is neither an efficient nor a cost-effective approach, given that there are thousands of FUDS properties needing cleanup. 
To help ensure consistent coordination with regulators during all phases of FUDS investigation and cleanup, we recommend that the Secretary of the Department of Defense direct the Secretary of the Department of the Army to follow through on its plans to develop and incorporate clear and specific guidance in the Corps’ FUDS Program Manual as to how, when, and to what extent coordination with regulators should take place, including during preliminary assessments of eligibility. Moreover, in view of the states’ concerns and the hazards posed by ordnance and explosive waste, the coordination guidance should address these types of projects as well, not just those involving hazardous waste. In developing the guidance, the Army should work with regulators to develop a consensus on how, when, and to what extent coordination should take place. As a starting point, we recommend that the Secretary of the Department of Defense direct the Secretary of the Department of the Army to (1) assess the impact of the Corps’ recent efforts to improve coordination through actions such as directives and the Management Action Plan pilot program and incorporate the successful components as requirements into its FUDS Program Manual and (2) assess practices individual Corps districts have used to coordinate with regulators and develop a list of best practices for dissemination throughout the Corps that districts might use to improve their coordination. In addition, in view of the need for federal agencies to ensure that cleanup efforts are done properly and that scarce resources are best utilized, DOD and EPA should work together to clarify their respective roles in the FUDS cleanup program for properties that are not listed on the National Priorities List. The agencies should agree on a time frame to establish a memorandum of understanding that will lay out an overall framework for how they will work together, including their roles and responsibilities, during the assessment and cleanup of FUDS properties. 
We provided DOD and EPA with a draft of this report for their review and comment. DOD and EPA agreed with our findings and conclusions. In addition, DOD agreed with two of the report’s recommendations and partially agreed with the third, and indicated that it had begun or was planning to take actions to address all of them. In response to our recommendation that DOD follow through on its plans to develop and incorporate clear and specific guidance in the FUDS Program Manual as to how, when, and to what extent coordination with regulators should take place, including during the preliminary assessment of eligibility phase and for ordnance and explosive waste projects, DOD indicated that it is in the process of addressing this issue. Specifically, the Corps is revising the FUDS Program Manual as an Engineer Regulation that will include step-by-step procedures for regulatory coordination at each phase of FUDS cleanup, including the preliminary assessment of eligibility process, and for unexploded ordnance projects. DOD also indicated that it is taking actions that should address our recommendations that DOD assess the impact of recent Corps efforts to improve coordination through actions such as the Management Action Plan pilot program and incorporate the successful components as requirements into its FUDS guidance. DOD is also assessing practices that individual Corps districts have used to coordinate with regulators and developing a list of best practices for dissemination and use throughout the Corps. DOD stated that it is proposing to include best practices from the Management Action Plan pilot in its Engineer Regulation and will review individual district efforts aimed at improving coordination with regulators to see if additional best practices should be developed. 
In response to our recommendation that DOD and EPA work together to clarify their respective roles in the FUDS cleanup program by establishing a memorandum of understanding that will lay out an overall framework, DOD is proposing to incorporate coordination and consultation requirements in the appropriate procedural sections of the upcoming Engineer Regulation, rather than using a memorandum of understanding. Overall, the steps being taken or planned by DOD to improve coordination with regulators could, when completed, constitute a significant improvement over current processes and should go a long way toward addressing the problems identified in this report that were the subject of our recommendations. EPA did not comment specifically on the individual recommendations in the report but did state that the report did an excellent job of presenting substantive information relative to DOD’s efforts to consult with regulatory agencies. In addition to their written comments, DOD and EPA also provided a number of technical comments and clarifications, which we incorporated as appropriate. DOD’s comments appear in appendix III and EPA’s comments appear in appendix IV. We conducted our review from March 2001 to September 2002 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Secretary of Defense; the Director, Office of Management and Budget; appropriate congressional committees; and other interested parties. We will also provide copies to others upon request. In addition, the report will be available, at no charge, on the GAO Web site at http://www.gao.gov/. 
The objectives of our review were to (1) identify federal requirements for DOD and the Corps to coordinate with state and federal regulators during the FUDS cleanup program, (2) determine the extent to which the Corps has coordinated with state regulators since the start of the FUDS program and assess the recent steps it has taken to better coordinate, and (3) identify any concerns regulators may have about coordination with the Corps. To identify federal requirements that DOD and the Corps must meet in coordinating with regulators, we obtained and reviewed the Superfund Amendments and Reauthorization Act of 1986. To identify related DOD and Corps guidance, we interviewed FUDS program officials and Corps officials in various Corps districts and divisions. We then obtained and reviewed the guidance documents, including the Defense Management Guidance for the Defense Environmental Restoration Program, the Corps FUDS Program Manual, and other related documents. To determine how the Corps coordinates with state regulators during the assessment and cleanup of FUDS, we conducted a survey. First, we drew a stratified, random sample of 519 FUDS properties from the Corps’ FUDS database, as of February 2001. The survey results cover FUDS program activities that took place from 1986 through 2001. The sample consisted of 150 properties that did not have any projects associated with them and an additional 369 properties that had at least one project with at least one specific work phase completed. The following table summarizes our sample in terms of the number of properties represented, as well as the number and types of projects. We obtained information from the Corps’ FUDS database to customize the surveys depending on the properties’ cleanup phase and the types of projects, if any, included in the sample. 
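The two-stratum sample design described above can be sketched in a few lines of Python. This is an illustrative reconstruction only: the Corps' actual database fields and sampling procedure are not described in this report, and the property records below are invented stand-ins.

```python
import random

# Invented stand-ins for FUDS property records; the real database schema
# is not described in this report.
properties = [
    {"id": i, "has_completed_project_phase": i % 3 != 0}
    for i in range(1, 3001)
]

# Two strata, mirroring the report's sample design: properties with no
# associated projects, and properties with at least one project that has
# at least one completed work phase.
no_projects = [p for p in properties if not p["has_completed_project_phase"]]
with_projects = [p for p in properties if p["has_completed_project_phase"]]

random.seed(0)  # fixed seed so the illustration is reproducible
sample = random.sample(no_projects, 150) + random.sample(with_projects, 369)

assert len(sample) == 519  # the total sample size reported above
```

Drawing each stratum separately guarantees the fixed 150/369 split between the two groups; a simple random sample of 519 properties from the whole database would not.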
At the property level, questions varied depending on whether (1) the Corps had determined that no DOD action was indicated, (2) the database showed no projects associated with the property and DOD had not made a determination that no DOD action was indicated, or (3) the Corps had proceeded with at least some type of project work. Project-level questions varied depending on (1) the type of project (for example, hazardous waste projects received a more complex questionnaire than unsafe buildings and debris projects because hazardous waste projects must go through more investigation and cleanup phases) and (2) how many of the investigation and cleanup phases the Corps had completed at a project, as indicated by the Corps’ FUDS database. For example, not all hazardous waste projects in our sample have gone through all applicable phases. Based on information that the Corps provided to us, we determined which phases were completed in such projects and asked only questions related to the completed phases. We then sent similar questionnaires to the current Corps and state project managers of the properties in our sample to obtain the views of both regarding coordination. To obtain information on DOD efforts to improve coordination with regulators and address their concerns, we interviewed DOD and Corps headquarters officials and reviewed documents that they provided. In addition, we contacted FUDS program officials at several Corps divisions and districts, including the Great Lakes and Ohio River, North Atlantic, South Atlantic, and Southwestern divisions, and the Alaska, Louisville, Norfolk, Seattle, and Tulsa districts. To obtain information on state regulators’ concerns about Corps coordination with them on the FUDS program, we conducted structured interviews with FUDS program managers in the 27 states that account for most of the FUDS work. 
To determine which states to call, we used the Corps’ FUDS database to identify the 20 states that had the greatest number of FUDS properties. Because properties vary in terms of the amount of work they involve—for example, the number of projects at FUDS properties ranged from 1 to 43—we also identified the 20 states that had the most FUDS projects. There were 27 states that fell into at least one of these two categories, and they accounted for approximately 80 percent of all FUDS properties and all FUDS projects. To consistently document the information we obtained from the FUDS managers in the 27 states, we developed a data collection instrument to guide our interviews. To obtain information on the Corps’ coordination with EPA and EPA’s concerns regarding its role in the program, we interviewed officials at EPA headquarters, including those from the Office of Solid Waste and Emergency Response responsible for developing EPA’s guidance for FUDS, and we reviewed documentation they provided. In addition, we developed a data collection instrument to conduct structured interviews with federal facilities officials who deal with FUDS issues at all 10 EPA regions. Appendix II: State and Corps Project Managers’ Responses to Our Survey Regarding Coordination at FUDS. In addition to those named above, Gary L. Jones, Glenn C. Fischer, James Musial, and Pauline Seretakis made key contributions to this report. Also contributing to this report were Doreen S. Feldman, Art James, Nancy Crothers, and Laura Shumway.
The U.S. Army Corps of Engineers (Corps) is in charge of addressing cleanup at the more than 9,000 U.S. properties that were formerly owned or controlled by the Department of Defense (DOD) and have been identified as potentially eligible for environmental cleanup. The Corps has determined that more than 4,000 of these properties have no hazards that require further Corps study or cleanup action. However, in recent years, hazards have surfaced at some of these properties, leading state and federal regulators to question whether the Corps has properly assessed and cleaned up these properties. In this context, Congress asked us to (1) analyze federal coordination requirements that apply to the cleanup of these properties, (2) assess recent DOD and Corps efforts to improve coordination, and (3) identify any issues regulators may have about coordination with the Corps. Federal law requires DOD and the Corps of Engineers to consult with state regulatory agencies and EPA during the process of cleaning up formerly used defense sites (FUDS). However, the law only provides specifics for the cleanup phase for hazardous substances. DOD's Management Guidance and the FUDS Program Manual do not provide clear direction or specific steps for involving regulators in the FUDS program. In addition, both the law and the guidance are silent on the subject of consultation or coordination with regulators during the preliminary assessment phase, when the Corps makes decisions on whether a former defense site is eligible for DOD cleanup and whether further investigation and/or cleanup are needed. DOD and Corps officials told GAO that they would revise their guidance to include specific, but as yet undetermined, instructions for coordination with regulators during such decisions. DOD and the Corps have recently taken several steps to improve coordination. 
For example, they are working with the regulatory community to develop specific steps that Corps districts can take, such as providing states with updated lists of current and future FUDS program activities in their states and initiating a new pilot program in nine states that has the Corps working side by side with regulators in the cleanup of former defense sites. In addition, several Corps districts have independently taken steps to improve coordination with state regulators. DOD and the Corps will need to assess the effectiveness of these various initiatives to determine which are successful and should be included in program guidance to all districts. Despite the improvements in coordination, regulators still raised two major issues about Corps coordination on the FUDS program. First, some states believe that they lack the information necessary to properly oversee cleanup work at former defense sites and to judge the validity of Corps decisions. For example, 15 of the 27 states GAO contacted believe they need to be involved in knowing what the Corps is doing during the preliminary assessment phase. Also, 9 of the 27 states believe they need to be involved in project closeouts, so that they can ensure that the Corps has met state cleanup standards. Second, EPA believes it should have a larger role in the cleanup of former defense sites. Although states are the primary regulator at the majority of former defense sites and EPA is the primary regulator for only the 21 former defense sites that are on the list of the nation's worst hazardous sites, EPA believes that its role even on the unlisted sites should be greater. The agency believes that this would improve the effectiveness of the cleanups and increase public confidence overall. The Corps disagrees, and the two agencies have been unable to establish an effective working relationship on the cleanup for former defense sites. 
Commenting on a draft of this report, DOD stated that it generally agreed with the recommendations and was taking or planned to take steps that should, when completed, substantially correct the problems GAO cited.
VA employs approximately 10,000 physicians in its 158 medical centers. To help ensure that the care these physicians provide meets accepted professional standards, VA uses several systems to monitor and evaluate physician practice. These systems include surgical case review, external peer review, credentialing and privileging, malpractice claim analysis, and occurrence screens. An integral part of VA’s process is physician peer review—physicians evaluating the medical care provided by other physicians. Peer review in VA is used by medical centers to determine if practitioner care is less than optimal and is initiated when an occurrence screen identifies potential quality of care problems. Peer review is also used to establish the basis for the granting of privileges to physicians and to examine malpractice claims made against health care professionals in the medical center. No disciplinary action is taken against a physician’s privileges after a peer review following an occurrence screen. This is because quality assurance information, such as occurrence screen peer review data, is confidential and cannot be used in disciplinary proceedings. However, peer review findings can be used by medical center management to initiate a formal investigation of a physician’s performance or conduct after which disciplinary action can be taken. VA guidance, issued in April 1994, presents various methods for conducting peer review but does not mandate a specific peer review technique. Specifically, the guidance discusses the disadvantages of the single reviewer approach and presents three types of multiple reviewer techniques: (1) committee review, (2) multiple independent review, and (3) discussion to consensus. At the six medical centers we visited, two methods of peer review were being utilized: multiple independent review and committee review. (See app. II for a discussion of these approaches.) 
Regardless of the approach used, the result of any peer review is an evaluation of the care provided by a practitioner and a preliminary determination as to how, in the reviewer’s opinion, other physicians would have handled the case. Cases rated as a level 1 (most experienced, competent practitioners would handle case similarly) usually receive no further action. Cases rated as a level 2 (most experienced, competent practitioners might handle the case differently) or a level 3 (most experienced, competent practitioners would handle the case differently) receive a supervisory review by the responsible clinical service chief, such as the chief of surgery.

All physicians and dentists employed by VA are subject to privileging procedures. Privileging is the process by which a practitioner is granted permission by the institution to provide medical or other patient care services within defined limits on the basis of an individual’s clinical competence as determined by peer references. Privileging is done at the time of employment and every 2 years thereafter. However, a physician’s privileges can be examined at any time if a question about his or her performance or competence is raised.

The National Practitioner Data Bank was created under Title IV of Public Law 99-660, the Health Care Quality Improvement Act of 1986. The act calls for (1) insurance companies and certain self-insured health care entities to report malpractice payments made for the benefit of a physician, dentist, or other licensed health care practitioner to the Data Bank and (2) hospitals and other authorized health care entities, licensing boards, and professional societies to report professional review actions relating to possible incompetence or improper professional conduct adversely affecting the clinical privileges, licensure, or membership in a professional society of a practitioner for longer than 30 days to the Data Bank. 
The intent of the act is to improve the quality of medical care by encouraging physicians, dentists, and other health care practitioners to identify and discipline those who engage in unprofessional behavior and to restrict the ability of incompetent physicians, dentists, and other health care practitioners to move from state to state without disclosure or discovery of their previous damaging or incompetent performance. The Data Bank acts as a clearinghouse for information about licensed practitioners’ paid malpractice claims and adverse actions on licensure, clinical privileges, and professional society membership. It has two main functions: (1) responding to queries about practitioners from authorized health care entities and hospitals and (2) collecting and storing adverse actions and malpractice payment information.

Although the act does not require VA medical centers to participate in the Data Bank, it directs the Secretary of Health and Human Services (HHS) to enter into a memorandum of understanding with the Administrator of the Veterans Administration (now VA) to apply the reporting requirements of the act to health care facilities under VA’s jurisdiction. Accordingly, a memorandum of understanding was signed in November 1990, followed by interpretive rules effective October 1991.

VA’s physician peer review process is identifying cases needing management attention at the six medical centers that we visited. Specifically, in fiscal year 1993, peer reviewers at these locations reviewed a total of 563 cases referred from the occurrence screen process involving potential quality of care problems. 
In 373 of these cases, peer reviewers decided that most experienced, competent practitioners would have handled the case similarly; in 136 cases, the peer reviewers believed that most experienced, competent practitioners might have handled the case differently; and in 54 cases, the peer reviewers believed that most experienced, competent practitioners would have handled the case differently.

Each of the VA medical centers that we visited uses occurrence screens to identify potential physician performance problems that may warrant a peer review. Under this process, cases are screened against a predetermined list of criteria, usually by nurses. Those cases that involve one or more of the occurrences will be reviewed to identify possible problems in patient care. Occurrences that are reviewed include, but are not limited to, the following: readmittance within 10 days of an inpatient stay; readmittance within 3 days of an outpatient visit; return to special care unit, such as intensive care; return to operating room; and death. Any case for which the occurrence screen results show that a potential quality of care problem may exist is referred to the cognizant service chief for medical peer review. Table 1 shows, by medical center, how the peer reviewers rated the 563 cases.

VA guidance governing peer review of potential quality of care problems identified through occurrence screens states that when peer review indicates that practitioner care is less than optimal, the cases are sent to the service chief for a determination regarding corrective action. The actions chosen by the service chief will be communicated in writing to the chief of staff and the occurrence screen program coordinator. If no action is considered necessary, a notation to that effect should be made by the service chief. 
However, VA guidance does not explicitly state the extent to which (1) discussions with a practitioner should be documented or (2) the reasons for taking no action should be justified. As a result, the worksheets provided to the occurrence screen coordinator generally contained no elaboration on the action taken.

Of the 50 cases we reviewed where peer reviewers believed that most experienced, competent practitioners would have handled the case differently than the physician under review, 32 resulted in a discussion with the physician, 4 resulted in no action, 8 resulted in a policy change, and 6 resulted in counseling. Table 2 shows how the service chiefs at the medical centers we visited dealt with cases that their peer reviewers believed most experienced, competent practitioners would have handled differently. Service chiefs clearly favored a discussion of problems over any other type of action. But in the 32 level 3 cases in which a discussion took place, when we asked for documentation of what was actually discussed with the practitioner about the peer review findings or what, if any, corrective actions were agreed upon, staff told us that they could not find such information in either the occurrence screen worksheets or the minutes of the service meetings. Further, in the 4 cases we reviewed in which no action was taken by a service chief on peer reviewers’ findings, there was no indication in the occurrence screen worksheets as to why the decision to take no action was justified.

VA regulations require cases meeting the occurrence screen criteria to be entered into an ongoing occurrence screen database, which is reviewed and analyzed regularly to identify patterns that may be problematic. However, when actions taken by the service chiefs are not documented for future reference, corrective actions, if taken, cannot be identified and trends cannot be established to point the way for improvement. 
In 14 cases, evidence was present that action was taken on the peer reviewer’s findings. Specifically, in 8 cases, medical center management revised certain policies and procedures to ensure that the problems identified by peer reviewers would not recur. In 6 cases, physicians were provided counseling on the basis of the peer reviewer’s findings and a record of the incident was placed in the physician’s privileging file. The incidents triggering formal counseling included inappropriate medical management of a patient with diabetes; failure to diagnose, monitor, and treat patients; failure to communicate resuscitation plans for a terminally ill patient; failure to monitor patient response to medication and take appropriate action; and failure to assess a patient and order the correct dose of medication.

Experts believe that a significant impediment to effective peer review is the inherent subjectivity involved in determining whether a potential quality of care problem exists. The development of practice guidelines that peer reviewers can use to make performance judgments is one method suggested by experts to reduce the subjectivity. For example, practice guidelines could reduce the tendency on the part of some peer reviewers to focus on the effect of a bad patient outcome rather than whether the standard of care was met. In a 1992 Journal of the American Medical Association article, an official in VA’s Office of Quality Management stated that the development of practice guidelines would be a great aid to improve peer review. In a corroborating article, the physician writing about peer review states that peer judgments regarding appropriateness of care are strongly influenced by perceived outcomes. This suggests that the standard of care is often unclear to reviewers. Practice guidelines are being developed with increasing frequency in both VA and the medical community as a whole. 
However, at least one expert does not believe that it will be possible to design guidelines that will take into account every possible factor that might constitute an exception to the standard. As one expert has observed, “picking skilled physician-reviewers may be the central and critical step. Simply choosing a peer physician may not be the best strategy; rather, identifying an expert in both the condition under study and in quality assessment purposes and techniques may be required.”

At the six medical centers we visited, we found that classification of peer review findings is a highly subjective activity because no systemwide clinical criterion exists for peer reviewers to determine whether physicians would or would not have performed in the same manner as the physician under review. As indicated above, such a situation is not unique to VA and will be resolved only when a complete set of practice guidelines is used routinely. Until such criteria are generally available, a case that might be a level 1 in VA medical center A might be a level 3 in VA medical center F. Levels assigned to cases may also vary among the specialty services within the medical center.

The degree to which the concept of peer review is accepted or embraced by physicians depends to a great extent on how the results of peer review are utilized by medical center management. Although we found differences among services within medical centers, four of the six VA medical centers we visited are using peer review primarily to evaluate physician performance and identify physicians who may have contributed to adverse patient outcomes. This approach is resulting in negative perceptions of the peer review process and is impeding its acceptance among physicians. At these facilities, several physicians questioned the usefulness of the peer review process and did not view it as having an important role in identifying opportunities for improving care. 
These physicians contend that peer review duplicates other quality assurance monitors. For example, the medical service units at each of the VA medical centers we visited hold morbidity and mortality conferences to discuss all deaths and clinical complications that occurred during the week preceding the meeting. Some of these cases are later selected for peer review. But, according to physicians involved in peer review, the peer reviews do not identify any issues that are not identified and discussed in the morbidity and mortality conferences.

Physicians also told us that peer review committee findings have more credibility than the findings of a single peer reviewer because the subjectivity inherent in determining quality of care is reduced. Other benefits of the committee approach include identifying the underlying problem that led to an adverse outcome and greater physician acceptance of peer review. Physicians told us that by focusing on the identification of system issues, they are better able to identify the underlying cause of an adverse outcome and prevent it from occurring again. Physicians who are members of peer review committees also told us that the anonymity associated with peer review committees allows them to be open and honest in their evaluations. Officials from one VA medical center that switched from using a single reviewer to a peer review committee stated that the number of cases rated level 2 or 3 rose when they began using a peer review committee. Specifically, during the first 5 months of 1994, the committee assigned more level 3 designations to cases than did individual reviewers in all of 1993. At another medical center that began using peer review committees, the number of cases rated level 2 or 3 by a committee increased by more than 60 percent.

The Health Care Quality Improvement Act of 1986 requires that all malpractice claims paid on behalf of a practitioner be reported to the Data Bank. 
However, under rules setting forth VA’s policy for participation in the Data Bank, VA will file a report with the Data Bank regarding any malpractice payment for the benefit of a physician, dentist, or other licensed practitioner only when the director of the facility at which the act or omission occurred affirms the conclusion of a peer review panel that payment was related to substandard care, professional incompetence, or professional misconduct. Thus, before reporting a practitioner to the Data Bank after a malpractice payment is made, VA is in effect requiring the peer review panel to make a determination that either the standard of care was not met or that a practitioner was guilty of professional incompetence or misconduct. Adherence to these procedures results in VA medical centers’ not reporting to the Data Bank all malpractice payments made on behalf of their practitioners.

The process followed by VA medical centers to deal with malpractice claims is as follows: (1) Within 30 days of a claim being filed, the appropriate VA district counsel notifies the medical center involved in providing the medical care identified in the allegations that a claim has been filed. (2) Medical center personnel then conduct a peer review to determine if the appropriate standards of care were met. These standards can relate to any part of the system (for example, hospital, outpatient care, equipment, systems in place, and practitioners). (3) The medical center forwards the results of the peer review along with a copy of the Tort Claim Information System data and a copy of the patient’s medical record to both the Armed Forces Institute of Pathology and the appropriate VA district counsel. (4) Upon receipt of the results of the initial peer review, the district counsel can make a request for the medical opinion of an external expert. (5) Finally, the VA district counsel can settle or deny a claim. 
If a payment is made on the claim, the responsible medical center director will convene a second peer review panel to determine if an identifiable licensed health care practitioner is involved in the case. During this review, a determination is made as to whether the acts or omission of the practitioners in relation to the patient injury for which the settlement or judgment was made constituted care that did not meet generally accepted standards of professional competence or conduct. The recommendations of this panel should determine whether the practitioner involved in the incident is reported to the Data Bank. However, before approving the report, the director will notify the practitioner to be reported and provide him or her with an opportunity to discuss the situation with appropriate medical center officials, including the director.

At the six medical centers we visited, we reviewed 53 paid claim files in which the claim alleged that an adverse patient outcome was caused by a licensed practitioner(s). We found that it was possible to determine the practitioner(s) associated with the adverse patient outcome in each of the 53 claims. However, only four of these individuals were reported to the Data Bank. The remaining practitioners were not reported for a variety of reasons, including determination by the panel that the standard of care was met (13); inability to identify the practitioner responsible for the patient (3); problem was considered to be a system failure (4); belief that the resident rather than the attending physician was to blame for the incident (3); patient was at fault (2); no evidence of misconduct, negligence, or malpractice (6); panel split on the need to report (1); and practitioner behavior was not clearly outside the standards of practice (1). Further, from October 28, 1991, to September 30, 1994, only 73 practitioners from 1,047 paid claims for all VA medical centers were reported to the Data Bank. (See app. III.) 
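The reporting proportions implied by these figures can be checked with a short calculation (a sketch in Python; the variable names are ours, and the counts are those stated above):

```python
# Counts taken from the report: paid malpractice claims versus
# practitioners actually reported to the National Practitioner Data Bank.
sample_claims = 53      # paid claim files reviewed at the six medical centers
sample_reported = 4     # practitioners from those claims reported to the Data Bank

vawide_claims = 1047    # paid claims, all VA medical centers, 10/28/91-9/30/94
vawide_reported = 73    # practitioners reported to the Data Bank in that period

for label, reported, claims in [
    ("Six-center sample", sample_reported, sample_claims),
    ("All VA medical centers", vawide_reported, vawide_claims),
]:
    pct = 100 * reported / claims
    print(f"{label}: {reported} of {claims} paid claims reported ({pct:.1f}%)")
```

By either measure, fewer than 1 in 13 paid claims resulted in a Data Bank report.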
In his response to this report, VA’s Under Secretary for Health stated that there is not necessarily an identifiable practitioner associated with every malpractice claim because (1) malpractice claims involving VA are filed against the United States of America and typically do not name practitioners, (2) payments made are on behalf of care provided at a VA facility, and (3) the act or omission for which payment was made is not necessarily practitioner-related. The Under Secretary concluded that (1) the VA peer review process is necessary to determine if there is an identifiable licensed health care provider for whom it can be said that payment was made and (2) only if there is an identifiable practitioner can it be said that the payment was on his or her behalf.

We agree that malpractice claims are filed against the United States of America and not against individual practitioners. We found, however, that identifying practitioners involved in a malpractice claim and on whose behalf it can be said payment was made is not difficult. Our review of 558 malpractice claims involving VA that were paid during fiscal years 1992 and 1993 shows that 422, or 76 percent, involved claims in which it was alleged that an adverse patient outcome was caused by a licensed practitioner(s). Of these practitioners, 409 were physicians.

Under its memorandum of understanding with HHS, VA has agreed to report to the Data Bank through state licensing boards any action that for longer than 30 days reduces, restricts, suspends, or revokes the clinical privileges of a physician or dentist due to incompetence or improper professional conduct. However, regardless of the length of time an individual’s privileges have been affected, VA will not report adverse actions, including suspensions lasting longer than 30 days, to the Data Bank until all internal appeals have been satisfied. Such a policy is not required by the act and can delay reporting for a considerable time. 
For example, one VA medical center we visited suspended the privileges of two physicians in 1993 and terminated their employment in 1994. One of these physicians was reinstated in March 1995 with a formal reprimand. As of April 4, 1995, the other was still involved in the internal appeals process. Neither has been reported to the Data Bank.

VA’s privileging process includes, among other things, evaluation of a physician’s relevant experience and current competence. It also includes consideration of any information related to medical malpractice allegations or judgments, loss of medical staff membership, loss or reduction of clinical privileges, or challenges to licensure. In addition, the evaluation must be based on evidence of an individual’s current competence. Initial privileging is done at the time of employment and every 2 years thereafter. However, a physician’s privileges can be examined at any time if the situation requires it; for example, when there is a question of physician competency or professional conduct.

From October 28, 1991, through September 30, 1994, nine medical centers reported 11 adverse actions to the Data Bank. Our analysis shows that the adverse reporting rate for VA medical centers is lower than the adverse reporting rate of community hospitals. For example, in California, VA has 4,008 beds and reported 2 adverse actions for an average reporting rate of 0.50 reports per 1,000 beds. By comparison, community hospitals in California have 105,270 beds and reported 390 adverse actions for an average reporting rate of 3.7 reports per 1,000 beds. (See app. IV for a complete reporting comparison by state.)

The Under Secretary for Health, in responding to this report, stated that VA reporting rates are not comparable with community hospital rates because VA practitioners are employees of VA, not independent entrepreneurs. 
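The California rates cited above follow directly from the bed and report counts; the arithmetic can be sketched as follows (variable and function names are ours):

```python
# Adverse-action reporting rate per 1,000 hospital beds, using the
# California figures given in the report.
def rate_per_1000_beds(reports: int, beds: int) -> float:
    return 1000 * reports / beds

va_rate = rate_per_1000_beds(reports=2, beds=4008)             # VA hospitals
community_rate = rate_per_1000_beds(reports=390, beds=105270)  # community hospitals

print(f"VA (California):        {va_rate:.2f} reports per 1,000 beds")
print(f"Community (California): {community_rate:.1f} reports per 1,000 beds")
```

On these figures, California community hospitals reported adverse actions at more than seven times the VA rate.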
The Under Secretary believes that through appropriate supervision, service chiefs at the medical centers are identifying problems and, through supervision and progressive discipline if necessary, issues are handled before formal privileging actions occur. In contrast, in a community hospital, practitioners are not typically employees of the organization, and the formal privileging review process is the only legitimate process for review. The Under Secretary noted, however, that VA policy requires that licensed health care practitioners who leave VA employment while under investigation be reported to the Data Bank immediately.

Service chiefs at the medical centers we visited told us that they use formal and informal processes to deal with physicians who have performance problems. Formal procedures require due process hearings that (1) take time to administer, (2) require much documentation, and (3) involve extensive understanding of the regulations and guidelines governing such actions. For example, in fiscal years 1993 and 1994, action was taken to officially remove three physicians at the medical centers we visited. The time involved from the initiation of disciplinary action to ultimate removal ranged from 5-1/2 months to a little over 1 year. Reasons for the varying time frames include the complexity of the issues involved (such as professional misconduct versus quality of care), the need for multiple independent peer reviews in two of the cases but not the third, and the extent to which the physicians fought the disciplinary actions. In each case, the physician’s privileges were restricted for more than 30 days; however, only one of the three cases was reported to the Data Bank. VA policy requires that the appeals process be completed before any case is reported to the Data Bank, and these physicians had appealed the suspension and revocation of their privileges and the termination of their employment. 
Service chiefs at the medical centers we visited also used an informal process to remove physicians who had performance problems. The effect, however, is that physicians who may have performance problems are not reported to the Data Bank. Further, one service chief told us that he tends to hire part-time physicians to avoid having to adhere to the formal procedures for dealing with problem physicians.

The following is an example of a situation that resulted in the removal of a problem physician through informal means. A service chief reduced a physician’s privileges and personally supervised the physician for 6 months to determine the physician’s competence level. The service chief concluded that the physician’s medical skills did not improve during the time of observation and recommended to the physician that he resign. The physician took this advice and resigned from the medical center. But no documentation of restricted privileges or other problems appeared in the physician’s credentialing and privileging file.

Although physician peer review is performed at the VA medical centers that we visited and cases of questionable quality of care are identified, actions taken by service chiefs as the result of peer review findings are seldom made a matter of record in peer review files. Such information could allow management to track the performance of practitioners over time and help ensure that any pattern of less than optimal care is quickly identified. Documentation also establishes the degree to which management addressed the issues raised by peer reviewers. From an organizational perspective, this establishes accountability on the part of service chiefs, increases practitioner awareness of the importance that the medical center places on the delivery of quality care, and is a good risk-management tool because it requires managers to go on record as to how a potential problem was addressed. 
By establishing restrictive Data Bank reporting procedures, VA has shielded its physicians from the professional accountability that is required of private sector practitioners. In so doing, VA could be facilitating the delivery of substandard care outside the VA health care system by allowing practitioners with poor performance records to leave its employment with no record of having been involved in a malpractice claim or an adverse action. Conversely, failure to report also allows some physicians who provide patients with less than optimal care to remain in the VA system without any indication on their record that problems may exist with their performance.

We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to (1) require service chiefs to fully document all discussions held with practitioners involved in cases that peer reviewers conclude most experienced, competent practitioners might or would have handled differently and (2) revise the criteria now being used by medical centers to report VA practitioners to the National Practitioner Data Bank so that they are more consistent with the reporting practices now used in the private sector.

VA’s Under Secretary for Health concurred with our recommendation that service chiefs fully document all discussions held with practitioners and stated that VA will reinforce, on a systemwide basis, the requirement that service chiefs must fully document appropriate actions taken in response to peer review conclusions. The Under Secretary also concurred in principle with our recommendation relating to reporting to the National Practitioner Data Bank. While he does not believe that a change in policy is needed for the reporting of malpractice payments, he does agree that more timely reporting of initial summary suspensions of physician privileges lasting longer than 30 days is an option. 
In this regard, he said that a group of knowledgeable program staff will explore all policy options and report their recommendations to him by the end of September 1995. Under VA’s current procedures, the postpayment peer review is made to determine if there is an identifiable licensed health care practitioner responsible for a breach in care. The Under Secretary stated that effective May 19, 1995, these reviews will be completed outside of the medical center for which payment was made (for example, in another medical center). This is an interim measure, and VA is in the process of pursuing peer review options that are external to the VA system, such as utilization of the clinical reviewers participating in VA’s External Peer Review Program.

We disagree with the Under Secretary’s contention that no policy change is needed with respect to the reporting of malpractice payments. VA’s policy of reporting only those malpractice payments involving practitioners who have been determined to have breached the standard of care remains more restrictive than required under Public Law 99-660. The law requires only that all malpractice payments made on behalf of a physician or licensed health care practitioner be reported to the Data Bank. In addition, the law states that payment of a claim should not be construed as creating a presumption that medical malpractice has occurred. Thus, any postpayment peer review need only determine that the payment was for the benefit of a practitioner, not that it results from a breach in care.

We also believe that reporting initial summary suspensions rather than only final actions should be viewed as more than an option. VA’s memorandum of understanding with HHS clearly states that it will report to the Data Bank any action that for longer than 30 days reduces, restricts, suspends, or revokes the clinical privileges of a physician or dentist due to incompetence or improper professional conduct. 
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, copies will be sent to appropriate congressional committees; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request.

If you have questions on this report, please contact James Carlan, Assistant Director, Federal Health Care Delivery Issues, on (202) 512-7120. Other staff contributing to this report were team coordinators Patrick Gallagher and Patricia Jones and team members Deena M. El-Attar, Barbara Mulliken, and George Bogart.

To accomplish our review, we interviewed VA’s medical inspector and officials in VA’s Professional Affairs Office, Quality Management Planning and Evaluation Office, Office of Personnel and Labor Relations, and Office of General Counsel. The objective of these interviews was to obtain information on (1) the role of peer review in evaluating physicians and reporting to the National Practitioner Data Bank and state licensing boards and (2) how VA’s Tort Claim Information System (TCIS) was developed and is being utilized.

We also visited six VA medical centers selected on the basis of the number of paid malpractice claims made on behalf of these facilities. At each location, we (1) interviewed quality assurance personnel, physicians who served as peer reviewers, and service chiefs to obtain their perspectives on the peer review process and (2) reviewed policies and procedures for peer review quality assurance programs, minutes of any meetings that dealt with potential quality of care issues, and documentation pertaining to 191 peer reviews made as a result of an occurrence screen. We also reviewed peer review documentation for 80 tort claims paid and pending for practitioners in 1992 and 1993 at the six medical centers we visited. 
In addition, we obtained the Armed Forces Institute of Pathology analysis of VA tort claim information for fiscal year 1993 for all VA medical centers and reviewed HHS information on VA’s participation in reporting to the Data Bank.

Under the multiple independent reviewer approach, which is being used at the Cleveland, Hines, and Martinsburg medical centers, physicians selected by the service chief individually review the work of a colleague within the same service; for example, surgeons review the work of other surgeons. During this review, the medical records associated with a case are examined and any physicians or others involved in the case may be interviewed. Each peer reviewer independently evaluates the quality of care involved in the case and makes a preliminary determination as to how, in his or her opinion, other physicians would have handled the case. In those cases where the service chief and a peer reviewer disagree, the service chief’s opinion will prevail. The service chief also determines the extent to which follow-up action will be taken on the case.

The Fayetteville, Houston, and St. Louis medical centers use a committee approach to peer review. While each committee is multidisciplinary and comprised of elected or appointed representatives from the major medical services such as surgery and medicine, each committee conducts peer reviews somewhat differently. In Fayetteville, the peer review committee, which consists of all the service chiefs, performs the peer review as a group and determines what action to take. The Houston peer review committee selects individual members of the peer review committee to review cases and present their findings to the entire committee for discussion and level determination. While the committee makes the final peer review level determination, the service chiefs determine what action to take. In St. 
Louis, all service level peer reviews are submitted to a Quality Assurance/Quality Improvement Committee, which then performs another peer review to validate the original review. The committee has the final decision-making authority regarding the level assigned and will often recommend what action should be taken and then follow up to ensure that the recommended action occurs. This appendix presents a comparison of VA’s and community hospitals’ reported adverse actions per 1,000 hospital beds. This analysis shows that VA hospitals are not reporting at the same rate as other hospitals in the same state. The analysis used information from an HHS Inspector General’s report that concluded that most hospitals are underreporting to the Data Bank. VA’s adverse action reports are from its first 3 years’ participation in the Data Bank, October 28, 1991, through September 30, 1994. The community hospitals’ adverse action reports are from the first 3-1/2 years of the Data Bank’s operation, September 1, 1990, through December 31, 1993. Only nine VA medical centers in seven states reported adverse actions. Hospitals in all states reported adverse actions.
Pursuant to a congressional request, GAO examined the relationship between problem identification and problem resolution in the Department of Veterans Affairs' (VA) physician peer review process, focusing on: (1) how the results of VA peer review are being used in disciplining physicians with performance problems; (2) the impediments to effective peer review; and (3) whether VA is taking action against physicians who are not performing in accordance with professional standards. GAO found that: (1) actions taken by VA to address quality of care problems are often limited to undocumented discussions with the physicians involved; (2) there is generally no record of the extent to which quality of care problems are addressed or the actions taken to deal with the problems identified; (3) VA is developing practice guidelines and using peer review to help reduce heavy reliance on professional judgment in peer review; and (4) VA medical centers are not reporting many actions taken against physicians to the National Practitioner Data Bank because of their restrictive reporting procedures.
The use of information technology has created many benefits for agencies such as IRS in achieving their missions and providing information and services to the public, but extensive reliance on computerized information also creates challenges in securing that information from various threats. Information security is especially important for government agencies, where maintaining the public’s trust is essential. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intentions who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Concerns about the risk to these systems are well founded for a number of reasons, including the increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. The Federal Bureau of Investigation has identified multiple sources of threats, including foreign entities engaged in intelligence gathering and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees or contractors working within an organization. In addition, the U.S. Secret Service and the CERT® Coordination Center studied insider threats in the government sector and stated in a January 2008 report that “government sector insiders have the potential to pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases.” Insider threats include errors or mistakes and fraudulent or malevolent acts by insiders. Our previous reports, and those by federal inspectors general, describe persistent information security weaknesses that place federal agencies, including IRS, at risk of disruption, fraud, or inappropriate disclosure of sensitive information. 
Accordingly, we have designated information security as a governmentwide high-risk area since 1997, most recently in 2011. Information security is essential to creating and maintaining effective internal controls. The Federal Managers’ Financial Integrity Act of 1982 requires us to prescribe standards for internal control in federal agencies. The standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. The term “internal control” is synonymous with the term “management control,” which covers all aspects of an agency’s operations (programmatic, financial, and compliance). The attitude and philosophy of management toward information systems can have a profound effect on internal control. Information system controls consist of those internal controls that are dependent on information systems processing and include general controls (security management, access controls, configuration management, segregation of duties, and contingency planning) at the entitywide, system, and business process application levels; business process application controls (input, processing, output, master file, interface, and data management system controls); and user controls (controls performed by people interacting with information systems). Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program for the information and information systems that support the operations and assets of the agency, using a risk-based approach to information security management. 
Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations. The act also assigned to the National Institute of Standards and Technology (NIST) the responsibility for developing standards and guidelines that include minimum information security requirements. IRS has demanding responsibilities in collecting taxes, processing tax returns, and enforcing federal tax laws, and relies extensively on computerized systems to support its financial and mission-related operations. In fiscal years 2011 and 2010, IRS collected about $2.4 trillion and $2.3 trillion, respectively, in federal tax payments; processed hundreds of millions of tax and information returns; and paid about $416 billion and $467 billion, respectively, in refunds to taxpayers. Further, the size and complexity of IRS add unique operational challenges. IRS employs over 100,000 people in its Washington, D.C., headquarters and over 700 offices in all 50 states and U.S. territories and in some U.S. embassies and consulates. To manage its data and information, the agency operates three enterprise computing centers located in Detroit, Michigan; Martinsburg, West Virginia; and Memphis, Tennessee. IRS also collects and maintains a significant amount of personal and financial information on each U.S. taxpayer. Protecting the confidentiality of this sensitive information is paramount; otherwise, taxpayers could be exposed to loss of privacy and to financial loss and damages resulting from identity theft or other financial crimes. 
The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and information systems that support the agency and its operations. FISMA requires the chief information officer or comparable official at a federal agency to be responsible for developing and maintaining an information security program. IRS has delegated this responsibility to the Associate Chief Information Officer for Cybersecurity, who heads the Office of Cybersecurity. The Office of Cybersecurity’s mission is to protect taxpayer information and IRS’s electronic systems, services, and data from internal and external cybersecurity-related threats by implementing security practices in planning, implementation, risk management, and operations. IRS develops and publishes its information security policies, guidelines, standards, and procedures in the Internal Revenue Manual and other documents in order for IRS divisions and offices to carry out their respective responsibilities in information security. In October 2011, the Treasury Inspector General for Tax Administration (TIGTA) stated that security of taxpayer data, including securing computer systems, was the top priority in its list of top 10 management challenges for IRS in fiscal year 2012. Despite IRS’s efforts, weaknesses in controls over key financial and tax- processing systems continue to jeopardize the confidentiality, integrity, and availability of financial and taxpayer information. Specifically, IRS continues to face challenges in controlling access to its information resources. Although IRS has various initiatives under way to address control weaknesses, it has not consistently or fully implemented controls for identifying and authenticating users, authorizing access to resources, ensuring that sensitive data are encrypted, monitoring actions taken on its systems, or controlling physical access to its resources. 
In addition, outdated and unsupported software exposes IRS to known vulnerabilities, and shortcomings in performing system backup place the availability of data at risk. An underlying reason for these weaknesses is that IRS has not fully implemented key components of its information security program. These include completing corrective actions for identified weaknesses in its risk assessment process; establishing consistent and specific policies and procedures; ensuring that security plans reflect IRS’s current environment; ensuring that contractors receive security training; effectively testing and evaluating policies, procedures, and controls; and validating corrective action plans. During fiscal year 2011, IRS management devoted attention and resources to addressing the agency’s information security control weaknesses. However, until IRS takes further steps to correct these weaknesses, financial and taxpayer data are at increased risk of unauthorized disclosure, modification, or destruction, which could result in misstatement of financial data and management decisions that are based on unreliable information. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Access controls include those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. However, IRS did not fully implement effective controls in these areas. Without adequate access controls, unauthorized individuals may be able to log in, access sensitive information, and make undetected changes or deletions for malicious purposes or personal gain. 
In addition, authorized individuals may be able to intentionally or unintentionally view, add, modify, or delete data to which they should not have been given access. A computer system needs to be able to identify and authenticate each user so that activities on the system can be linked and traced to a specific individual. An organization does this by assigning a unique user account to each user, and in so doing, the system is able to distinguish one user from another—a process called identification (ID). The system also needs to establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user— a process known as authentication. The combination of identification and authentication—such as user account-password combinations—provides the basis for establishing individual accountability and for controlling access to the system. The Internal Revenue Manual requires the use of a strong password for authentication (defined as a minimum of eight characters, containing at least one numeric or special character, and a mixture of at least one uppercase and one lowercase letter). The manual also states that database account passwords are not to be reused within 10 password changes and that the password grace period for a database—the number of days an individual has to change his or her password after it expires—should be set to 10. IRS had implemented various password controls, but weaknesses existed. For the Oracle database supporting its authorization system, IRS enforced strong password policies on active user accounts. However, IRS did not set appropriate password reuse maximum time or ensure complex password verification checking for its procurement system. 
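The Internal Revenue Manual password rules described above lend themselves to mechanical checking. The following Python sketch is illustrative only: the function names and the history-list shape are assumptions for this example, not IRS code. It encodes the quoted requirements of a minimum of eight characters, at least one numeric or special character, a mixture of upper- and lowercase letters, and no reuse within the last ten password changes.

```python
import re

def meets_irs_password_policy(password: str) -> bool:
    """Check a password against the strong-password rules quoted from the
    Internal Revenue Manual above (illustrative sketch only)."""
    if len(password) < 8:
        return False  # minimum of eight characters
    if not re.search(r"[0-9]", password) and not re.search(r"[^A-Za-z0-9]", password):
        return False  # at least one numeric or special character
    if not re.search(r"[A-Z]", password) or not re.search(r"[a-z]", password):
        return False  # mixture of at least one uppercase and one lowercase letter
    return True

def reuse_allowed(candidate: str, history: list[str], window: int = 10) -> bool:
    """A password may not repeat any of the last `window` passwords
    (the manual's 10-change reuse rule)."""
    return candidate not in history[-window:]
```

A compliance scan built on checks like these is what the report's "strong password policies on active user accounts" finding implies, though the actual IRS tooling is not described in the source.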
As a result of these weaknesses, increased risk exists that an individual with malicious intentions could gain inappropriate access to sensitive IRS applications and data on these systems, and potentially use the access to attempt compromises of other IRS systems. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. According to NIST, access control policies and access enforcement mechanisms are employed by organizations to control access between users (or processes acting on behalf of users) and objects in the information system. Furthermore, it notes that access enforcement mechanisms are employed at the application level, when necessary, to provide increased information security for the organization. According to the Internal Revenue Manual, the agency should implement access control measures that provide protection from unauthorized alteration, loss, unavailability, or disclosure of information. The manual also requires that system access should be granted based on the principle of least privilege—allowing access at the minimum level necessary to support a user’s job duties. In addition, its policy states that a servicewide medium/process shall be used to register all users for access to any IRS information technology resource to which they require access. IRS policy also requires that all accounts be deactivated within 1 week of an individual’s departure on friendly terms and immediately on an individual’s departure on unfriendly terms. IRS has taken steps to address access authorization controls, but weaknesses exist. For example, it has appropriately restricted access to disaster recovery servers, and has implemented a capability to identify and correct potential anomalies in mainframe access definitions. Also, it has removed users with inappropriate access to a mainframe database supporting a financial system. 
However, additional authorization controls were not always functioning as intended, and access authorization policies were not effectively implemented. For example, systems used to process tax and financial information did not fully prevent access by unauthorized users or excessive levels of access for authorized users. More specifically, IRS has implemented an access authorization control for a system used to process electronic tax payment information; however, users had the capability to circumvent this control and gain access to this system’s server. Insecurely configured software used to support this system also exposed it to unauthorized users. In addition, IRS’s compliance checks revealed unauthorized access to another system. During its monthly compliance check in August 2011, the agency identified 16 users who had been granted access to the procurement system without receiving approval from the agency’s authorization system. Also, the data in a shared work area used to support accounting operations were fully accessible by network administration staff although they did not need such access. Further, IRS has not taken actions to appropriately restrict services and user access, and to remove active application accounts in a timely manner for employees who had separated or no longer needed access. IRS noted additional authorization controls to compensate for or mitigate known deficiencies; however, these controls were not always implemented. For example, although IRS cited the use of role-based access for a major system used to process taxpayer data, this control was not yet implemented. Until IRS appropriately controls users’ access to its systems and effectively implements its procedures for authorization, the agency has limited assurance that its information resources are protected from unauthorized access, alteration, and disclosure. 
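The account-deactivation policy cited above (within 1 week of a departure on friendly terms, immediately on unfriendly terms) can also be checked mechanically. This is a minimal sketch under assumed data shapes; the tuple layout and function names are hypothetical, not IRS's actual data model.

```python
from datetime import date, timedelta

def deactivation_deadline(departure: date, friendly: bool) -> date:
    """Deadline implied by the IRS policy quoted above: one week for
    departures on friendly terms, same day for unfriendly ones."""
    return departure + timedelta(days=7) if friendly else departure

def overdue_accounts(accounts, today: date):
    """Flag still-active accounts past their deactivation deadline.
    `accounts` is a list of (user, departure_date, friendly, active)
    tuples -- a hypothetical record shape for illustration."""
    return [user for user, dep, friendly, active in accounts
            if active and today > deactivation_deadline(dep, friendly)]
```

A periodic sweep of this kind is one way to catch the "active application accounts ... for employees who had separated" that the audit found.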
Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption, which is used to transform plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. According to IRS policy, the use of insecure protocols should be restricted because their widespread use can allow passwords and other sensitive data to be transmitted across its internal network unencrypted. IRS continued to expand its use of encryption to protect sensitive data, but shortcomings remain. IRS took action to encrypt data transfers for its administrative accounting system. However, as we reported in 2011, the agency configured a server that transfers tax and financial data between internal systems to use protocols that allowed unencrypted transmission of sensitive data. IRS also had not rectified its use of unencrypted protocols for a sensitive tax-processing application, potentially exposing user ID and password combinations. By not encrypting sensitive data, increased risk exists that an unauthorized individual could view and then use the data to gain unwarranted access to its system or sensitive information. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail—a log of system activity—that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. 
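The basic transformation described above, plain text to cipher text via a key and an algorithm, can be illustrated with a deliberately simple XOR cipher. This toy is for illustration of the concept only; XOR with a repeating key is not secure, and it is not what IRS uses. Real systems rely on vetted algorithms such as AES.

```python
from itertools import cycle

def xor_transform(data: bytes, key: bytes) -> bytes:
    """Toy 'algorithm': XOR each byte with the repeating key. Because XOR
    is its own inverse, the same function encrypts and decrypts.
    Illustrative only -- not a secure cipher."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"sensitive taxpayer data"   # example payload, not real data
key = b"example-key"                     # the secret value shared by both ends
ciphertext = xor_transform(plaintext, key)
recovered = xor_transform(ciphertext, key)
```

The point of the example is the one the report makes: data sent over an unencrypted protocol travels as `plaintext`, readable by anyone on the path, whereas an encrypted channel exposes only `ciphertext`.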
To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. The Internal Revenue Manual requires systems to implement operational and technical control guidance to monitor traffic on host intrusion detection systems, and also states that IRS should enable and configure audit logging on all systems to aid in the detection of security violations, performance problems, and flaws in applications. Additionally, IRS policy states that security controls in information systems shall be monitored on an ongoing basis. IRS had established several activities designed to support detection of questionable or unauthorized access to financial applications and data and to support its response; however, some of these activities were not fully in place or operating as intended. To assist in its audit and monitoring activities, IRS established the Enterprise Security Audit Trails (ESAT) Project Management Office, which is responsible for managing all enterprise audit initiatives and identifying and overseeing deployment and transition of various audit trail solutions. The program is currently in its early stages, but the agency is continually implementing new procedures building on the program’s initiatives. For fiscal year 2011, the agency had ESAT-related audit processes in place for four systems—only one of which was relevant to our financial statement audit efforts. However, the processes were not yet operating effectively. For example, ESAT had not delivered system audit reports covering a 4-month period for one financial application to the Office of the Chief Financial Officer in a timely manner, and appropriate management officials were not aware of this shortcoming. Other monitoring activities were also not always operating effectively. Although IRS had enabled audit logging for certain systems, it had not for others. 
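As a simple illustration of why the audit trails discussed above matter, the sketch below scans log records for repeated failed logins, the kind of security-relevant event an enabled and properly configured audit log makes visible. The log format and threshold here are hypothetical, not IRS's actual audit schema.

```python
from collections import Counter

def flag_repeated_failures(log_lines, threshold=3):
    """Count failed-login events per user in an audit trail and return
    users at or above `threshold`. Assumed line format (hypothetical):
    'timestamp user action'."""
    failures = Counter()
    for line in log_lines:
        _, user, action = line.split(maxsplit=2)
        if action == "LOGIN_FAILURE":
            failures[user] += 1
    return sorted(user for user, count in failures.items() if count >= threshold)
```

Without logging enabled, as the audit found for the authorization system and several Oracle databases, there are no records for a review like this to examine.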
For example, the agency had enabled and configured audit logging for UNIX operating systems on 31 servers reviewed. However, it had not enabled and configured monitoring activity for its authorization system. IRS officials recognized this shortcoming and indicated that they are working with cybersecurity staff to resolve this deficiency. Finally, IRS did not properly enable auditing features on its Oracle databases supporting three systems we reviewed. As a result of detection and response capabilities not being fully in place and certain deficiencies in configurations, IRS’s ability to establish individual accountability, monitor compliance with security policies, and investigate security violations was limited. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve procedures to authorize employees’ access to and control over unissued keys or other entry devices. At IRS, physical access control measures, such as physical access cards that are used to permit or deny access to certain areas of a facility, are vital to safeguarding its facilities, computing resources, and information from internal and external threats. The Internal Revenue Manual requires access controls that safeguard assets against possible theft and malicious actions. IRS policy also requires completion of appropriate access authorization documentation prior to issuance of physical access cards, and that such entry devices be inventoried once every 24 hours of each workday, including signing the inventory to verify that it has been completed. IRS implemented numerous physical security controls at its enterprise computing centers to safeguard assets against possible theft and malicious actions. For example, IRS had a dedicated guard force at each of its computing centers to, among other things, control physical access to restricted areas and secure entry devices such as physical access cards. 
In addition, the 30 individuals we selected for review had appropriate access to secure computing areas at the computing centers, and IRS had appropriately restricted access to master keys at the centers that used them. Further, IRS effectively screened visitors, and at one computing center, reviewed lists of employees authorized to enter restricted areas. Nevertheless, IRS did not always consistently authorize employees’ access to restricted areas or inventory physical access cards. At each of the computing centers, IRS had a process in place to authorize employees’ access to restricted areas. However, one of the centers did not document this authorization for 7 of 20 employees whose access authority we reviewed. In addition, although the guard force at each computing center performed an inventory to account for physical access cards, they did not consistently implement this control. For example, the guard forces at two of the three computing centers we visited did not always sign the inventory of physical access cards, thereby failing to document accountability for its completion. In addition, at least one of three guard shifts did not detect an anomaly in the inventory for 4 of the 5 days we reviewed at one computing center. Further, several physical security weaknesses identified during previous audits remain unresolved. These include issues concerning management validation of access to restricted areas, proximity cards allowing inappropriate access, and unlocked cabinets containing network devices. As a result, IRS has reduced assurance that its computing resources and sensitive information are adequately protected from unauthorized access. In addition, IRS has cited its physical security controls as compensating or mitigating controls for other noted deficiencies; however, because of the weaknesses noted in these controls, IRS may not be able to rely on physical security as a compensating control. 
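The inventory-signing control described above is simple to audit in software. Below is a minimal sketch that flags unsigned daily inventories of physical access cards; the record shape (date, shift, signed) is an assumption for illustration, not how IRS actually tracks its inventories.

```python
def unsigned_inventories(inventory_log):
    """IRS policy quoted above requires the entry-device inventory to be
    performed each workday and signed to verify completion. Return the
    (day, shift) pairs whose record lacks a signature.
    Record shape is hypothetical: (day, shift, signed)."""
    return [(day, shift) for day, shift, signed in inventory_log if not signed]
```

Run against a quarter's worth of records, a check like this would surface exactly the pattern the audit found: inventories performed but not consistently signed.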
In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information. These controls include policies, procedures, and techniques for securely configuring information systems; segregating incompatible duties; and planning for continuity of operations. Configuration management involves, among other things, (1) verifying the correctness of the security settings in the operating systems, applications, or computing and network devices and (2) obtaining reasonable assurance that systems are configured and operating securely and as intended. Patch management, a component of configuration management, is an important element in mitigating the risks associated with software vulnerabilities. When a software vulnerability is discovered, the software vendor may develop and distribute a patch or work-around to mitigate the vulnerability. Without the patch, an attacker can exploit a software vulnerability to read, modify, or delete sensitive information; disrupt operations; or launch attacks against systems at another organization. Outdated and unsupported software is more vulnerable to attack and exploitation because vendors no longer provide updates, including security updates. Accordingly, the Internal Revenue Manual states that IRS will manage systems to reduce vulnerabilities by promptly installing patches. Specifically, it states that security patches should be applied within 30 days, and hardware and software on network devices should be promptly maintained and updated in response to identified vulnerabilities. The manual also states that system administrators should ensure the version of the operating system being used is one for which the vendor still offers standardized technical support. IRS made progress in updating certain systems. For example, the agency had provided an effective patch management solution for its Windows servers. 
IRS also upgraded its domain name system servers at the three computing centers. However, the agency did not always apply critical patches or ensure that versions of its operating systems were still supported by the vendor. For example, for one system we reviewed, the agency had not applied a security-related patch release within 30 days of its issuance to the UNIX operating system for 10 of the 14 production servers reviewed; the vendor issued the patch release in April 2011, but IRS had not yet installed it at the time of our site visit in June 2011. In addition, IRS had never installed numerous patch releases for the UNIX operating system supporting another system we reviewed, although this operating system has existed since March 2009. The 10 uninstalled security-related patch releases were considered “critical” by the vendor. By not installing security patches in a timely fashion, IRS increases the risk that known vulnerabilities in its systems may be exploited. The agency also used outdated software on all three reviewed servers used for remote access. Further, as we reported in March 2011, IRS was using unsupported versions of software on most network devices reviewed. Using outdated and unsupported operating systems increases security exposure, as the vendor will not be supplying any security patches to the unsupported operating system. Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often, organizations achieve segregation of duties by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. 
Conversely, inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. The Internal Revenue Manual requires that IRS divide and separate duties and responsibilities of functions among different individuals so that no individual has all necessary authority and system access to disrupt or corrupt a critical security process. In addition, IRS policy states that the primary security role of any database administrator is to administer and maintain database repositories for proper use by authorized individuals and that database administrators should not have system administrator access rights. IRS implemented appropriate segregation of duties controls. Specifically, IRS implemented controls to prevent the assignment of incompatible database and system access privileges that allow for the compromise of separation-of-duties controls. The agency also segregated duties for database and system administration for its procurement system. As a result, IRS has increased assurance that errors or wrongful acts will be detected. According to NIST, contingency planning is a critical component of emergency management and organizational resilience. To ensure that mission-critical operations continue, organizations should be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. One facet of ensuring that mission-critical operations can be recovered is establishing an information system recovery and reconstitution capability so that the information system can be restored to its original state after a service disruption. Conducting a business impact analysis is a key step in the contingency planning process. 
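The segregation-of-duties rule discussed above, that database administrators should not also hold system administrator access rights, reduces to a set-membership check over role assignments. This sketch uses hypothetical role names and a hypothetical user-to-roles mapping; it illustrates the control, not IRS's implementation.

```python
# Role pairings barred by the policy described above (names are illustrative).
INCOMPATIBLE = {("database_admin", "system_admin")}

def sod_violations(role_assignments):
    """Return users holding both roles of any incompatible pair.
    `role_assignments` maps user -> set of role names (hypothetical)."""
    return sorted(user for user, roles in role_assignments.items()
                  if any(a in roles and b in roles for a, b in INCOMPATIBLE))
```

A preventive version of the same logic, rejecting the grant before it takes effect, is what the report means by controls that "prevent the assignment of incompatible database and system access privileges."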
A business impact analysis is an analysis of information technology system requirements, processes, and interdependencies used to characterize system contingency requirements and priorities in the event of a significant disruption. Moreover, it correlates the system with the critical mission/business processes and services provided, and based on that information, characterizes the consequences of a disruption. In addition, developing an information system contingency plan is a critical step in the process of implementing a comprehensive contingency planning program. Organizations should prepare plans that are clearly documented, communicated to staff who could be affected, and updated to reflect current operations. Further, testing contingency plans is essential in determining whether the plans will function as intended in an emergency situation. Another key aspect of contingency planning is the development of a disaster recovery plan. A disaster recovery plan is an information system-focused plan designed to restore operability of the target system, application, or computer facility infrastructure at an alternate site after an emergency. The information system contingency plan differs from a disaster recovery plan primarily in that the information system contingency plan procedures are developed for recovery of the system regardless of site or location. In contrast, a disaster recovery plan is primarily a site-specific plan. The Internal Revenue Manual requires business impact analyses for systems, and includes steps for completing this process. More specifically, the business impact analysis should (1) identify business requirements and the purpose of the application undergoing the business impact analysis, (2) identify outage tolerances and impacts, and (3) identify recovery priorities. The manual also requires that IRS develop, test, and maintain information system contingency plans for all systems, and review and update these plans. 
In addition, IRS policy calls for the development of disaster recovery plans for each information system to ensure that, after disruption, the system can be restored to its full operational status. Moreover, the policy notes that the disaster recovery plan should define the resources, roles, responsibilities, actions, tasks, and the detailed work steps (keystrokes) required to restore an information technology system to its full operational status at the current or alternate facility after a major disruption with long-term effects. Further, according to policy, IRS shall implement and enforce backup procedures for all systems and information. IRS had processes in place to ensure continuity of operations; however, one of the disaster recovery plans we reviewed lacked detail, and backup procedures were not always effectively implemented for a key tax-processing system. IRS generally developed the five business impact analyses that we reviewed by identifying business requirements and the purpose of the application, outage tolerances and impacts, and recovery priorities. IRS had developed, reviewed, and updated the five information system contingency plans that we reviewed. Further, these plans were tested within the past year. IRS had also generally developed the five disaster recovery plans that we reviewed, defining the resources, roles, and responsibilities required to restore the respective systems to their full operational status. However, the disaster recovery plan for IRS’s system used to authorize access to its information resources did not include the detailed work steps (keystrokes) required to restore the system. In addition, IRS did not effectively implement and enforce backup procedures for a key tax-processing system. As a result, during a fiscal year 2011 test, IRS was unable to demonstrate continuity of business processes for a key system used to process taxpayer data.
Specifically, although agency officials noted that the operating system component was able to be restored, the system was missing 1 week of critical data essential for business processing because the backup process was not executed as planned. With the exception of this system, all other systems reviewed, which had conducted a disaster recovery test, demonstrated that they were able to be successfully recovered. Until the agency develops a disaster recovery plan for its authorization system to include detailed work steps (keystrokes) required to restore the system, and effectively implements and enforces its backup procedures for its system used to process taxpayer data, IRS may be unable to restore its authorization system to its full operational status after a major disruption, and its ability to reconstitute key business processes critical to IRS’s mission may be limited. A key reason for the information security weaknesses in IRS’s financial and tax-processing systems is that it has not yet fully implemented critical components of its comprehensive information security program. 
FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; and a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices. IRS has made progress in developing and documenting certain elements of its information security program. During fiscal year 2011, IRS management devoted attention and resources to addressing the agency’s information security controls. For example, IRS formed cross-functional working groups with knowledge of its internal systems to address identified areas considered at risk. 
IRS also acknowledged that maintaining effective information security controls, at the individual system or component level in its large internal network, presents significant challenges. In addition, the agency cited actions taken to implement additional controls designed to partially compensate for and mitigate the risks associated with previously identified information security weaknesses, including weaknesses related to its internal network, database, and mainframe security; procurement and administrative accounting applications; and internal control monitoring. However, as we reported in our fiscal year 2011 financial audit report, these additional controls were not always operating as intended or were not effective in compensating for the associated weaknesses. To bolster the security of its networks and systems and to address its information security weaknesses, IRS has established a comprehensive framework for its information security program. The agency has initiatives under way to further enhance its security posture. For example, during fiscal year 2011, IRS continued to implement a Security Compliance and Posture Monitoring and Reporting program to measure, monitor, and report compliance with security controls. As long as these efforts remain flexible to address changing technology and evolving threats, include our findings and those of TIGTA in measuring success, and are fully and effectively implemented, they should improve the agency’s overall information security posture. However, despite establishing a comprehensive framework for its information security program, IRS has not fully implemented all components of its program. These include identifying risks; ensuring consistent and specific policies and procedures; updating all system security plans; providing security training to all personnel, including contractors; effectively testing and evaluating policies, procedures, and controls; and validating corrective actions.
According to NIST, risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization’s mission, including the effect on sensitive and critical systems and data. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. In conjunction with NIST guidance, IRS requires its risk assessment process to detail the residual risk assessed, as well as potential threats, and to recommend corrective actions for reducing or eliminating the vulnerabilities identified. The Internal Revenue Manual also requires system risk assessments to be reviewed annually and updated a minimum of every 3 years or whenever there is a significant change to the system, the facilities where the system resides, or other conditions that may affect the security or status of system accreditation. IRS had processes in place to identify and assess information security risks for the five systems that we reviewed. For example, the agency used a detailed methodology to conduct risk assessments with key steps that include threat and vulnerability identification, control analysis, impact analysis, and mitigation recommendations. The risk assessments that we reviewed included, among other things, risk and severity level determination, impact analyses, and recommendations to correct or mitigate threats and vulnerabilities that were identified. Further, IRS also addressed a previously identified weakness regarding ensuring the review of risk assessments for its systems on at least an annual basis. 
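The risk determination NIST describes, in which the likelihood that a threat exploits a vulnerability is combined with the impact on the mission, can be sketched as a simple scoring function. The qualitative levels and thresholds below are illustrative only, loosely modeled on NIST SP 800-30, and are not drawn from IRS's actual methodology:

```python
# Illustrative qualitative scale; real assessments often use finer gradations.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_level(likelihood, impact):
    """Combine the likelihood that a threat exploits a vulnerability with
    the impact on the mission into an overall risk rating."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# A finding pairs a potential threat with a vulnerability and assesses both factors.
finding = {
    "threat": "insider misuse",
    "vulnerability": "excess database privileges",
    "likelihood": "moderate",
    "impact": "high",
}
rating = risk_level(finding["likelihood"], finding["impact"])   # "high"
```

The rating, together with the finding's threat and vulnerability, is what drives the mitigation recommendations and severity determinations the risk assessments are required to contain.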
Although IRS had a risk assessment process in place, it had not fully implemented the process. For example, IRS’s general ledger system for tax-related activities was moved from one mainframe environment to another at a different facility, but the risk assessment was not updated. We previously recommended that IRS update the assessment, and the agency was in the process of addressing this issue at the time of our review. Until IRS fully implements its policies and procedures for risk assessments, potential risks to its systems and the adequacy of associated security controls to reduce these risks could be unknown. Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern the security of an agency’s computing environment. If properly developed and implemented, policies and procedures should help reduce the risk associated with unauthorized access or disruption of services. Technical security standards can provide consistent implementation guidance for each computing environment. Developing, documenting, and implementing security policies are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. In addition, agencies need to take the actions necessary to effectively implement or execute these procedures and controls. Otherwise, agency systems and information will not receive the protection that the security policies and controls should provide. With only a few exceptions, IRS had developed and documented its information security policies and procedures. These policies and procedures generally address multiple information security components, including risk assessment, security planning, security training, testing and evaluating security controls, and contingency planning. 
However, we noted instances where documentation had not been fully developed for the systems that we reviewed. For example, IRS had not documented a baseline configuration standard for tasks initiated on its mainframe; documented monitoring procedures that staff used to review audit logs for a key financial system; fully documented monitoring procedures for its procurement system, specifically supervisory review procedures for ensuring access privileges were appropriate for segregation of duties; or addressed prior recommendations associated with policies and procedures. These recommendations covered issues such as securely configuring routers to encrypt network traffic, configuring switches to defend against attacks that could crash the network, notifying the Computer Security Incident Response Center of network changes that could affect its ability to detect unauthorized access, and ensuring password controls are consistent. Without comprehensive and fully documented policies and procedures, IRS has limited assurance that staff will consistently implement effective controls over systems and that its information systems will be protected as intended. For example, we identified shortcomings in controls associated with the mainframe configuration and system monitoring. An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system’s security requirements and describes the controls that are in place or planned to meet those requirements. The Office of Management and Budget’s (OMB) Circular A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls.
In addition, the Internal Revenue Manual requires that security plans for information systems be developed, documented, implemented, reviewed annually, and updated a minimum of every 3 years or whenever there is a significant change to the system. These plans should also describe the security controls in place or planned for IRS systems. IRS generally had developed, documented, and updated its system security plans. IRS documented its management, operational, and technical controls in each of the five security plans that we reviewed. These plans were also reviewed within the 3-year time period as required by IRS policy and included information as required by OMB Circular A-130 for major applications and general support systems. However, in March 2011, we reported that the system security plan for one application still reflected controls from the previous environment even though IRS had moved this application from one mainframe to another. We recommended that IRS update the application security plan to describe controls in place in its current mainframe operating environment. IRS had initiated, but not completed, its efforts to update the plan. Without an updated system security plan for this major financial application, IRS cannot ensure that the most appropriate security controls are in place to protect the critical information this system houses. People are one of the weakest links in attempts to secure systems and networks. Therefore, an important component of an information security program is providing sufficient training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. The Internal Revenue Manual requires that all personnel performing information technology security duties meet minimum continuing professional education hours in accordance with their roles.
Individuals performing a security role are required by IRS to have 12, 8, or 4 hours of specialized training per year, depending on their specific role. IRS policy also requires that all new employees and contractors receive security awareness training within the first 10 working days. IRS had processes in place for providing employees with security awareness and specialized training. All employees with specific security-related roles and newly hired employees that we reviewed met or exceeded the required minimum security awareness and specialized training hours. However, IRS did not always ensure that contractors received security awareness training. In March 2010, we reported that contractors had not received security awareness training within the first 10 working days and recommended that IRS address this weakness. Nevertheless, IRS indicated that it had not yet implemented this recommendation. As a result, IRS has reduced assurance that its contractors are aware of information security risks associated with their roles and responsibilities. Another key element of an information security program is conducting tests and evaluations of policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness. Although tests and evaluations of policies, procedures, and controls may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program through implementation of compensating or mitigating controls if needed. Consistent with FISMA, the Internal Revenue Manual states that annual security assessments will be conducted to determine if security controls are operating effectively and correctly implemented.
In addition, the manual states that all IRS systems will be verified for configuration management compliance by using an approved compliance verification application. IRS has processes in place for performing tests and evaluations of policies, procedures, and controls. As part of its test and evaluations process, the agency uses NIST Special Publication 800-53A to select controls that are applicable to each system. To comply with IRS policy, all selected system controls were tested during the security assessment and authorization (SA&A) process, which occurs every 3 years or whenever there is a significant change to the system. Between authorization assessments, IRS conducts tests of a portion of the system’s controls. A third of the controls are selected for the first year after authorization, another third are selected in the second year, and all the controls are then tested again for the SA&A process in the third year. IRS refers to the annual testing process between authorization assessments as its enterprise continuous monitoring (eCM) program. Although IRS has these processes in place, they were not always effective in determining whether policies, procedures, and controls were effective and operating as intended. Controls for the systems we reviewed had been recently tested and evaluated; however, some of the tests IRS performed were limited. For example, the most recent eCM tests for the administrative accounting system did not include tests of access controls, and other tests relied heavily on reviews of plans and policies rather than actual system tests, such as testing the system’s configuration. In one case, testers concluded that encryption was in place by reviewing a diagram and interviewing key staff rather than performing system testing. Although such a methodology complies with NIST guidance for moderate risk systems, it does not provide comprehensive testing of controls for key financial and tax-related systems. 
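The annual rotation used between authorization assessments, a third of a system's controls in each intervening year and the full set at reauthorization, can be sketched as follows. The control identifiers are hypothetical:

```python
def ecm_controls_for_year(controls, years_since_authorization):
    """Select the controls to test in a given year: one third in year 1,
    the next third in year 2, and the full set in year 3 (the SA&A year)."""
    ordered = sorted(controls)                # deterministic partition
    third = -(-len(ordered) // 3)             # ceiling division
    if years_since_authorization % 3 == 0:    # reauthorization: test everything
        return ordered
    start = (years_since_authorization % 3 - 1) * third
    return ordered[start:start + third]

controls = ["AC-%d" % i for i in range(1, 10)]   # nine hypothetical control IDs
year1 = ecm_controls_for_year(controls, 1)       # first three controls
year3 = ecm_controls_for_year(controls, 3)       # all nine controls
```

A rotation of this kind is only as good as the depth of each year's tests; as noted above, a selection that omits access controls or substitutes document review for system testing leaves gaps regardless of the schedule.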
Further, vulnerabilities we identified during our review were not known to IRS despite those systems being in compliance with the agency’s policies on periodic control reviews and testing. We have previously made recommendations pertaining to the limited scope of tests, as well as issues related to IRS not clearly documenting and reviewing test results; at the time of our review, these recommendations had not been implemented. As a result, IRS has limited assurance that controls over its systems are being effectively implemented and maintained. IRS also has processes in place to verify configuration management compliance; however, tools used in implementing these processes have shortcomings. In addition to tests and evaluations conducted on a yearly basis, IRS uses automated compliance verification tools to periodically test compliance with IRS’s security policies for its three major computing environments—Windows, UNIX, and mainframe. IRS stated that these tools, among others, are used as an additional control designed to partially compensate for and mitigate previously identified risks associated with outdated software and missing patches for databases, as well as shortcomings in control testing of its mainframe system. However, the UNIX tool does not test whether appropriate security patches have been applied, and the mainframe tool only tests compliance with a limited subset of the agency’s policies. Therefore, the results from these tools do not provide management with the information necessary to allow it to arrive at appropriate conclusions about the security status of these systems. As a result, IRS may not be fully aware of vulnerabilities that could adversely affect critical applications and data. A remedial action plan is a key component of an agency’s information security program. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. 
In its annual FISMA guidance to agencies, OMB requires agency remedial action plans, also known as plans of action and milestones, to include the resources necessary to correct identified weaknesses. According to the Internal Revenue Manual, the agency should document weaknesses found during security assessments, as well as planned, implemented, and evaluated remedial actions to correct any deficiencies. IRS policy further requires that IRS track the status of resolution of all weaknesses and verify that each weakness is corrected before closing it. IRS had a process in place to evaluate and track remedial actions and had developed remedial action plans to address previously reported weaknesses, but it did not promptly correct known vulnerabilities, and its process was not always working as intended. For example, the agency indicated that 76 of the 105 previously reported weaknesses open at the end of our prior-year audit had not yet been corrected. In addition, it did not always validate that its actions to resolve known weaknesses were effectively implemented. More specifically, of the 29 weaknesses IRS indicated were corrected, we determined that 13 (about 45 percent) had not yet been fully addressed. For example, IRS stated that it had implemented a prior recommendation to improve the scope of testing and evaluating controls, but as noted in this report, limitations on the scope of testing continue to exist. To its credit, IRS partially implemented 6 of these 13 recommendations, but did not implement corrective actions on all systems where the weaknesses had been identified. We previously recommended that IRS implement a revised remedial action verification process to ensure that actions are fully implemented, but this weakness still persists.
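The closure discipline IRS policy requires, tracking each weakness and verifying the correction before closing it, can be sketched as a minimal tracker. The identifiers and descriptions are hypothetical:

```python
class Weakness:
    """A plan-of-action-and-milestones entry that cannot be closed until the
    fix has been both applied and independently verified."""

    def __init__(self, ident, description):
        self.ident = ident
        self.description = description
        self.remediated = False   # fix applied by the system owner
        self.verified = False     # fix confirmed by an independent retest
        self.status = "open"

    def apply_fix(self):
        self.remediated = True

    def verify(self, retest_passed):
        self.verified = retest_passed

    def close(self):
        """Close only when the remediation exists AND a retest confirmed it."""
        if self.remediated and self.verified:
            self.status = "closed"
        return self.status

w = Weakness("POAM-001", "outdated database patches")
w.apply_fix()
w.close()                        # remains "open": not yet verified
w.verify(retest_passed=True)
w.close()                        # now "closed"
```

The design choice is the gate in `close()`: a weakness the owner reports as fixed stays open until a retest confirms it, which is precisely the validation step whose absence let 13 of 29 "corrected" weaknesses remain unresolved.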
Without an effective process to verify that remedial actions are fully implemented, IRS cannot be assured that it has corrected vulnerabilities and, consequently, may unknowingly expose itself to additional risk. Although IRS implemented numerous controls and procedures intended to protect key financial and tax-processing systems, control weaknesses continue to jeopardize the confidentiality, integrity, and availability of financial and sensitive taxpayer information. IRS made strides during the fiscal year in initiating efforts to address the internal control deficiencies that collectively constitute this material weakness. Notable among these efforts was the formation of cross-functional working groups tasked with the identification and remediation of specific at-risk control areas. In addition, the agency continued to make limited progress in correcting or mitigating previously reported weaknesses, implementing controls over key financial systems, and developing and documenting a framework for its comprehensive information security program. However, information security weaknesses existed in access and other information system controls over IRS’s financial and tax-processing systems. 
The financial and taxpayer information on IRS systems will remain particularly vulnerable to internal threats until the agency (1) addresses weaknesses pertaining to identification and authentication, authorization, cryptography, audit and monitoring, physical security, and configuration management, and (2) fully implements key components of a comprehensive information security program that ensures risk assessments are conducted in the current operating environment; policies and procedures are appropriately specific and effectively implemented; security plans are written to reflect the current operating environment; processes intended to test, monitor, and evaluate internal controls are appropriately detecting vulnerabilities; processes intended to check configuration management are in place; and backup procedures are working effectively. The new and unresolved deficiencies from previous audits, along with a lack of fully effective compensating and mitigating controls, impair IRS’s ability to ensure that its financial and taxpayer information is secure from internal threats, reducing its assurance that its financial statements and other financial information are fairly presented or reliable and that sensitive IRS and taxpayer information is being sufficiently safeguarded from unauthorized disclosure and modification. These deficiencies are the basis of our determination that IRS had a material weakness in internal control over financial reporting related to information security in fiscal year 2011. 
In addition to implementing our previous recommendations, we are recommending that the Commissioner of Internal Revenue take the following six actions to fully implement key components of the IRS comprehensive information security program: document a baseline configuration standard for tasks initiated on the mainframe; document monitoring procedures that staff use to review audit logs for a key financial system; fully document monitoring procedures for the procurement system, specifically, supervisory review procedures to ensure access privileges are appropriate for segregation of duties; expand tests associated with the agency’s enterprise continuous monitoring process to include tests of access controls and system tests, such as testing the system’s configuration, where appropriate, to ensure comprehensive testing of key controls for financial and tax-related systems; implement a compliance verification application to ensure appropriate security patches have been applied in the UNIX environment; and implement a compliance verification application, or other appropriate process, to ensure configuration policies are comprehensively tested on the mainframe. We are also making 23 detailed recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct specific information security weaknesses related to identification and authentication, authorization, audit and monitoring, physical security, configuration management, and contingency planning. In providing written comments (reprinted in app. II) on a draft of this report, the Commissioner of Internal Revenue stated that the security and privacy of taxpayer and financial information is of the utmost importance to the agency and that IRS will provide a detailed corrective action plan addressing each of our recommendations.
Further, the Commissioner stated that the integrity of IRS’s financial systems continues to be sound and that the agency has fully implemented a comprehensive information security program within the spirit and intent of NIST guidelines. However, as we noted in this report, although IRS has provided a comprehensive framework for its information security program, an underlying reason for the information security weaknesses in IRS’s financial and tax-processing systems is that it has not yet fully implemented critical components of its comprehensive information security program. For example, although IRS had a process in place to evaluate and track remedial actions and had developed remedial action plans to address previously reported weaknesses, it did not always validate that its actions to resolve known weaknesses were effectively implemented. The effective implementation of our recommendations in this report and in our previous reports will assist IRS in protecting taxpayer and financial information. This report contains recommendations to you. As you know, 31 U.S.C. § 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, we request that the agency also provide us with a copy of the agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, and the Treasury Inspector General for Tax Administration. 
The report also is available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact Nancy R. Kingsbury at (202) 512-2700 or Gregory C. Wilshusen at (202) 512-6244. We can also be reached by e-mail at kingsburyn@gao.gov and wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objective of our review was to determine whether controls over key financial and tax-processing systems were effective in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer information at the Internal Revenue Service (IRS). To do this, we examined IRS information security policies, plans, and procedures; tested controls over key financial applications; and interviewed key agency officials in order to (1) assess the effectiveness of corrective actions taken by IRS to address weaknesses we previously reported, (2) determine the extent to which compensating and mitigating controls presented by IRS address previously noted areas of concern, and (3) determine whether any additional weaknesses existed. This work was performed in connection with our audit of IRS’s fiscal years 2011 and 2010 financial statements for the purpose of supporting our opinion on internal control over the preparation of those statements. To determine whether controls over key financial and tax-processing systems were effective, we considered the results of our evaluation of IRS’s actions to mitigate previously reported weaknesses, and evaluated a selection of controls that IRS asserted compensate for and mitigate known deficiencies. 
Additionally, we performed new audit work at the three enterprise computing centers located in Detroit, Michigan; Martinsburg, West Virginia; and Memphis, Tennessee, as well as IRS facilities in New Carrollton and Oxon Hill, Maryland; Beckley, West Virginia; and Washington, D.C. We concentrated our evaluation on threats emanating from sources internal to IRS’s computer networks. Considering systems that directly or indirectly support the processing of material transactions that are reflected in the agency’s financial statements, we focused our technical work on the general support systems that directly or indirectly support key financial and taxpayer information systems. Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Institute of Standards and Technology guidance; and IRS policies and procedures. 
We evaluated controls by testing the complexity, expiration, and policy for passwords on databases to determine if strong password management was enforced; testing the design of a key application to determine if the application’s access controls are effective; reviewing access configurations on key systems and database configurations; reviewing access control/privileges for network folders to determine if system access is assigned based on least privilege; examining IRS’s implementation of encryption to secure transmissions on its internal network; analyzing the effectiveness of IRS’s monitoring processes; observing and analyzing physical access controls at each of the enterprise computing centers to determine if computer facilities and resources had been protected; examining the status of patching for selected databases and system components to ensure that patches are up to date; testing Domain Name Servers to determine if unnecessary services were running and if operating systems and software were current; testing servers to determine if extended stored procedures exist; evaluating the mainframe operating system controls that support the operation of databases related to revenue accounting; evaluating the controls of mainframe Started Tasks; and examining documentation to determine the extent to which IRS is performing comprehensive testing of its key network components.
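As an illustration of the first check listed above, a password-policy test of the kind applied to database accounts might look like the following sketch. The thresholds and account names are hypothetical and are not IRS's actual policy values:

```python
import re
from datetime import date

# Illustrative policy thresholds; IRS's actual settings are not reproduced here.
MIN_LENGTH = 12
MAX_AGE_DAYS = 90

def password_findings(account, password, last_changed, today):
    """Return a list of policy violations for one database account."""
    findings = []
    if len(password) < MIN_LENGTH:
        findings.append("%s: shorter than %d characters" % (account, MIN_LENGTH))
    if not re.search(r"[A-Z]", password) or not re.search(r"\d", password):
        findings.append("%s: missing required character classes" % account)
    if (today - last_changed).days > MAX_AGE_DAYS:
        findings.append("%s: not changed within %d days" % (account, MAX_AGE_DAYS))
    return findings

# A compliant account produces no findings.
issues = password_findings("dba01", "Xy7longenough!", date(2012, 1, 2),
                           date(2012, 3, 1))
```

In practice such checks are run against the password settings exported from each database rather than against plaintext passwords, but the per-account findings list mirrors how control tests of this type report exceptions.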
Using the requirements in the Federal Information Security Management Act that establish elements for an effective agencywide information security program, we reviewed and evaluated IRS’s implementation of its security program by analyzing IRS’s process for reviewing risk assessments to determine whether the assessments are up to date, documented, and approved; reviewing IRS’s policies, procedures, practices, and standards to determine whether its security management program is documented, approved, and up to date; reviewing IRS’s system security plans for specified systems to determine the extent to which the plans were reviewed, and included information as required by Office of Management and Budget Circular A-130; verifying whether employees with security-related responsibilities had received specialized training within the year; analyzing documentation to determine if the effectiveness of security controls is periodically assessed; reviewing IRS’s actions to correct weaknesses to determine if they had effectively mitigated or resolved the vulnerability or control deficiency; reviewing continuity-of-operations planning documentation for five systems to determine if such plans were appropriately documented and tested; and reviewing documented system recovery activities to determine if the system could be successfully recovered and reconstituted to its original state after a disruption or failure. In addition, we discussed with management officials and key security representatives, such as those from IRS’s Computer Security Incident Response Center and Office of Cybersecurity, as well as the three computing centers, whether information security controls were in place, adequately designed, and operating effectively. We performed our audit from April 2011 to March 2012 in accordance with U.S. generally accepted government auditing standards. We believe our audit provides a reasonable basis for our opinions and other conclusions. 
In addition to the individuals named above, David Hayes (assistant director), Jeffrey Knott (assistant director), Mark Canter, Sharhonda Deloach, Jennifer Franks, Mickie Gray, Nicole Jarvis, Linda Kochersberger, Lee McCracken, Kevin Metcalfe, Bradley Roach, Eugene Stevens, and Michael Stevens made key contributions to this report.
The Internal Revenue Service (IRS) has a demanding responsibility in collecting taxes, processing tax returns, and enforcing the nation’s tax laws. It relies extensively on computerized systems to support its financial and mission-related operations and on information security controls to protect financial and sensitive taxpayer information that resides on those systems. As part of its audit of IRS’s fiscal years 2011 and 2010 financial statements, GAO assessed whether controls over key financial and tax-processing systems are effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. To do this, GAO examined IRS information security policies, plans, and procedures; tested controls over key financial applications; and interviewed key agency officials at seven sites. IRS implemented numerous controls and procedures intended to protect key financial and tax-processing systems; nevertheless, control weaknesses in these systems continue to jeopardize the confidentiality, integrity, and availability of the financial and sensitive taxpayer information processed by IRS’s systems. Specifically, the agency continues to face challenges in controlling access to its information resources. For example, it had not always (1) implemented controls for identifying and authenticating users, such as requiring users to set new passwords after a prescribed period of time; (2) appropriately restricted access to certain servers; (3) ensured that sensitive data were encrypted when transmitted; (4) audited and monitored systems to ensure that unauthorized activities would be detected; or (5) ensured management validation of access to restricted areas. In addition, unpatched and outdated software exposed IRS to known vulnerabilities, and the agency had not enforced backup procedures for a key system. An underlying reason for these weaknesses is that IRS has not fully implemented a comprehensive information security program. 
IRS has established a comprehensive framework for such a program, and has made strides to address control deficiencies, such as establishing working groups to identify and remediate specific at-risk control areas; however, it has not fully implemented all key components of its program. For example, IRS’s security testing and monitoring continued to miss many of the vulnerabilities GAO identified during this audit. IRS also did not promptly correct known vulnerabilities. For example, the agency indicated that 76 of the 105 previously reported weaknesses open at the end of GAO’s prior year audit had not yet been corrected. In addition, IRS did not always validate that its actions to resolve known weaknesses were effectively implemented. Although IRS had a process in place for verifying whether each weakness had been corrected, this process was not always working as intended. Of the 29 weaknesses IRS indicated were corrected, GAO determined that 13 (about 45 percent) had not yet been fully addressed. Considered collectively, these deficiencies, both new and unresolved from previous GAO audits, along with a lack of fully effective compensating and mitigating controls, impair IRS's ability to ensure that its financial and taxpayer information is secure from internal threats. This reduces IRS's assurance that its financial statements and other financial information are fairly presented or reliable and that sensitive IRS and taxpayer information is being sufficiently safeguarded from unauthorized disclosure or modification. These deficiencies are the basis of GAO’s determination that IRS had a material weakness in internal control over financial reporting related to information security in fiscal year 2011. GAO recommends that IRS take 6 actions to fully implement key components of its comprehensive information security program. 
In a separate report with limited distribution, GAO is recommending that IRS take 23 specific actions to correct newly identified control weaknesses. In commenting on a draft of this report, IRS agreed to develop a detailed corrective action plan to address each recommendation.
The tens of thousands of individuals who responded to the September 11, 2001, attack on the World Trade Center (WTC) experienced the emotional trauma of the disaster and were exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. A wide variety of health effects have been experienced by responders to the WTC attack, and several federally funded programs have been created to address the health needs of these individuals. Numerous studies have documented the physical and mental health effects of the WTC attacks. Physical health effects included injuries and respiratory conditions, such as sinusitis, asthma, and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Almost all firefighters who responded to the attack experienced respiratory effects, including WTC cough. One study suggested that exposed firefighters on average experienced a decline in lung function equivalent to that which would be produced by 12 years of aging. A recently published study found a significantly higher risk of newly diagnosed asthma among responders, a risk associated with increased exposure to the WTC disaster site. Commonly reported mental health effects among responders and other affected individuals included symptoms associated with post-traumatic stress disorder (PTSD), depression, and anxiety. Behavioral health effects such as alcohol and tobacco use have also been reported. Some health effects experienced by responders have persisted or worsened over time, leading many responders to begin seeking treatment years after September 11, 2001. Clinicians involved in screening, monitoring, and treating responders have found that many responders’ conditions—both physical and psychological—have not resolved and have developed into chronic disorders that require long-term monitoring. 
For example, findings from a study conducted by clinicians at the NY/NJ WTC Consortium show that at the time of examination, up to 2.5 years after the start of the rescue and recovery effort, 59 percent of responders enrolled in the program were still experiencing new or worsened respiratory symptoms. Experts studying the mental health of responders found that about 2 years after the WTC attack, responders had higher rates of PTSD and other psychological conditions compared to others in similar jobs who were not WTC responders and others in the general population. Clinicians also anticipate that other health effects, such as immunological disorders and cancers, may emerge over time. There are six key programs that currently receive federal funding to provide voluntary health screening, monitoring, or treatment at no cost to responders. The six WTC health programs, shown in table 1, are (1) the FDNY WTC Medical Monitoring and Treatment Program; (2) the NY/NJ WTC Consortium, which comprises five clinical centers in the NY/NJ area; (3) the WTC Federal Responder Screening Program; (4) the WTC Health Registry; (5) Project COPE; and (6) the Police Organization Providing Peer Assistance (POPPA) program. The programs vary in aspects such as the HHS administering agency or component responsible for administering the funding; the implementing agency, component, or organization responsible for providing program services; eligibility requirements; and services. The WTC health programs that are providing screening and monitoring are tracking thousands of individuals who were affected by the WTC disaster. As of June 2007, the FDNY WTC program had screened about 14,500 responders and had conducted follow-up examinations for about 13,500 of these responders, while the NY/NJ WTC Consortium had screened about 20,000 responders and had conducted follow-up examinations for about 8,000 of these responders. 
Some of these responders are nonfederal responders residing outside the NYC metropolitan area. As of June 2007, the WTC Federal Responder Screening Program had screened 1,305 federal responders and referred 281 responders for employee assistance program services or specialty diagnostic services. In addition, the WTC Health Registry, a monitoring program that consists of periodic surveys of self-reported health status and related studies but does not provide in-person screening or monitoring, collected baseline health data from over 71,000 people who enrolled in the Registry. In the winter of 2006, the Registry began its first adult follow-up survey, and as of June 2007 over 36,000 individuals had completed the follow-up survey. In addition to providing medical examinations, FDNY’s WTC program and the NY/NJ WTC Consortium have collected information for use in scientific research to better understand the health effects of the WTC attack and other disasters. The WTC Health Registry is also collecting information to assess the long-term public health consequences of the disaster. Beginning in October 2001 and continuing through 2003, FDNY’s WTC program, the NY/NJ WTC Consortium, the WTC Federal Responder Screening Program, and the WTC Health Registry received federal funding to provide services to responders. This funding primarily came from appropriations to the Department of Homeland Security’s Federal Emergency Management Agency (FEMA), as part of the approximately $8.8 billion that the Congress appropriated to FEMA for response and recovery activities after the WTC disaster. FEMA entered into interagency agreements with HHS agencies to distribute the funding to the programs. For example, FEMA entered into an agreement with NIOSH to distribute $90 million appropriated in 2003 that was available for monitoring. FEMA also entered into an agreement with ASPR for ASPR to administer the WTC Federal Responder Screening Program. 
A $75 million appropriation to CDC in fiscal year 2006 for purposes related to the WTC attack resulted in additional funding for the monitoring activities of the FDNY WTC program, NY/NJ WTC Consortium, and the Registry. The $75 million appropriation to CDC in fiscal year 2006 also provided funds that were awarded to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program for treatment services for responders. An emergency supplemental appropriation to CDC in May 2007 included an additional $50 million to carry out the same activities provided for in the $75 million appropriation made in fiscal year 2006. The President’s proposed fiscal year 2008 budget for HHS includes $25 million for treatment of WTC-related illnesses for responders. In February 2006, the Secretary of HHS designated the Director of NIOSH to take the lead in ensuring that the WTC health programs are well coordinated, and in September 2006 the Secretary established a WTC Task Force to advise him on federal policies and funding issues related to responders’ health conditions. The chair of the task force is HHS’s Assistant Secretary for Health, and the vice chair is the Director of NIOSH. The task force reported to the Secretary of HHS in early April 2007. HHS’s WTC Federal Responder Screening Program has had difficulties ensuring the uninterrupted availability of services for federal responders. First, the provision of screening examinations has been intermittent. (See fig. 1.) After resuming screening examinations in December 2005 and conducting them for about a year, HHS again placed the program on hold and suspended scheduling of screening examinations for responders from January 2007 to May 2007. This interruption in service occurred because there was a change in the administration of the WTC Federal Responder Screening Program, and certain interagency agreements were not established in time to keep the program fully operational. 
In late December 2006, ASPR and NIOSH signed an interagency agreement giving NIOSH $2.1 million to administer the WTC Federal Responder Screening Program. Subsequently, NIOSH and FOH needed to sign a new interagency agreement to allow FOH to continue to be reimbursed for providing screening examinations. It took several months for the agreement between NIOSH and FOH to be negotiated and approved, and scheduling of screening examinations did not resume until May 2007. Second, the program’s provision of specialty diagnostic services has also been intermittent. After initial screening examinations, responders often need further diagnostic services by ear, nose, and throat doctors; cardiologists; and pulmonologists; and FOH had been referring responders to these specialists and paying for the services. However, the program stopped scheduling and paying for these specialty diagnostic services in April 2006 because the program’s contract with a new provider network did not cover these services. In March 2007, FOH modified its contract with the provider network and resumed scheduling and paying for specialty diagnostic services for federal responders. In July 2007 we reported that NIOSH was considering expanding the WTC Federal Responder Screening Program to include monitoring examinations—follow-up physical and mental health examinations—and was assessing options for funding and delivering these services. If federal responders do not receive this type of monitoring, health conditions that arise later may not be diagnosed and treated, and knowledge of the health effects of the WTC disaster may be incomplete. In February 2007, NIOSH sent a letter to FEMA, which provides the funding for the program, asking whether the funding could be used to support monitoring in addition to the one-time screening currently offered. A NIOSH official told us that as of August 2007 the agency had not received a response from FEMA. 
NIOSH officials told us that if FEMA did not agree to pay for monitoring of federal responders, NIOSH would consider using other funding. According to a NIOSH official, if FEMA or NIOSH agrees to pay for monitoring of federal responders, this service would be provided by FOH or one of the other WTC health programs. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area, although it recently took steps toward expanding the availability of these services. Initially, NIOSH made two efforts to provide screening and monitoring services for these responders, whose exact number is unknown. The first effort began in late 2002 when NIOSH awarded a contract for about $306,000 to the Mount Sinai School of Medicine to provide screening services for nonfederal responders residing outside the NYC metropolitan area and directed it to establish a subcontract with AOEC. AOEC then subcontracted with 32 of its member clinics across the country to provide screening services. From February 2003 to July 2004, the 32 AOEC member clinics screened 588 nonfederal responders nationwide. AOEC experienced challenges in providing these screening services. For example, many nonfederal responders did not enroll in the program because they did not live near an AOEC clinic, and the administration of the program required substantial coordination among AOEC, AOEC member clinics, and Mount Sinai. Mount Sinai’s subcontract with AOEC ended in July 2004, and from August 2004 until June 2005 NIOSH did not fund any organization to provide services to nonfederal responders outside the NYC metropolitan area. During this period, NIOSH focused on providing screening and monitoring services for nonfederal responders in the NYC metropolitan area. 
In June 2005, NIOSH began its second effort by awarding $776,000 to the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide both screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. In June 2006, NIOSH awarded an additional $788,000 to DCC to provide screening and monitoring services for these responders. NIOSH officials told us that they assigned DCC the task of providing screening and monitoring services to nonfederal responders outside the NYC metropolitan area because the task was consistent with DCC’s responsibilities for the NY/NJ WTC Consortium, which include data monitoring and coordination. DCC, however, had difficulty establishing a network of providers that could serve nonfederal responders residing throughout the country—ultimately contracting with only 10 clinics in seven states to provide screening and monitoring services. DCC officials said that as of June 2007 the 10 clinics were monitoring 180 responders. In early 2006, NIOSH began exploring how to establish a national program that would expand the network of providers to provide screening and monitoring services, as well as treatment services, for nonfederal responders residing outside the NYC metropolitan area. According to NIOSH, there have been several challenges involved in expanding a network of providers to screen and monitor nonfederal responders nationwide. These include establishing contracts with clinics that have the occupational health expertise to provide services nationwide, establishing patient data transfer systems that comply with applicable privacy laws, navigating the institutional review board process for a large provider network, and establishing payment systems with clinics participating in a national network of providers. 
On March 15, 2007, NIOSH issued a formal request for information from organizations that have an interest in and the capability of developing a national program for responders residing outside the NYC metropolitan area. In this request, NIOSH described the scope of a national program as offering screening, monitoring, and treatment services to about 3,000 nonfederal responders through a national network of occupational health facilities. NIOSH also specified that the program’s facilities should be located within reasonable driving distance to responders and that participating facilities must provide copies of examination records to DCC. In May 2007, NIOSH approved a request from DCC to redirect about $125,000 from the June 2006 award to establish a contract with a company to provide screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. Subsequently, DCC contracted with QTC Management, Inc., one of the four organizations that had responded to NIOSH’s request for information. DCC’s contract with QTC does not include treatment services, and NIOSH officials are still exploring how to provide and pay for treatment services for nonfederal responders residing outside the NYC metropolitan area. QTC has a network of providers in all 50 states and the District of Columbia and can use internal medicine and occupational medicine doctors in its network to provide these services. In addition, DCC and QTC have agreed that QTC will identify and subcontract with providers outside of its network to screen and monitor nonfederal responders who do not reside within 25 miles of a QTC provider. In June 2007, NIOSH awarded $800,600 to DCC for coordinating the provision of screening and monitoring examinations, and QTC will receive a portion of this award from DCC to provide about 1,000 screening and monitoring examinations through May 2008. 
According to a NIOSH official, QTC’s providers have begun conducting screening examinations, and by the end of August 2007, 18 nonfederal responders had completed screening examinations, and 33 others had been scheduled. In fall 2006, NIOSH awarded and set aside funds totaling $51 million from its $75 million appropriation for four WTC health programs in the NYC metropolitan area to provide treatment services to responders enrolled in these programs. Of the $51 million, NIOSH awarded about $44 million for outpatient services to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program. NIOSH made the largest awards to the two programs from which almost all responders receive medical services, the FDNY WTC program and NY/NJ WTC Consortium (see table 2). In July 2007 we reported that officials from the FDNY WTC program and the NY/NJ WTC Consortium expected that their awards for outpatient treatment would be spent by the end of fiscal year 2007. In addition to the $44 million it awarded for outpatient services, NIOSH set aside about $7 million for the FDNY WTC program and NY/NJ WTC Consortium to pay for responders’ WTC-related inpatient hospital care as needed. The FDNY WTC program and NY/NJ WTC Consortium used their awards from NIOSH to continue providing treatment services to responders and to expand the scope of available treatment services. Before NIOSH made its awards for treatment services, the treatment services provided by the two programs were supported by funding from private philanthropies and other organizations. According to officials of the NY/NJ WTC Consortium, this funding was sufficient to provide only outpatient care and partial coverage for prescription medications. The two programs used NIOSH’s awards to continue to provide outpatient services to responders, such as treatment for gastroesophageal reflux disease, upper and lower respiratory disorders, and mental health conditions. 
They also expanded the scope of their programs by offering responders full coverage for their prescription medications for the first time. A NIOSH official told us that some of the commonly experienced WTC conditions, such as upper airway conditions, gastrointestinal disorders, and mental health disorders, are frequently treated with medications that can be costly and may be prescribed for an extended period of time. According to an FDNY WTC program official, prescription medications are now the largest component of the program’s treatment budget. The FDNY WTC program and NY/NJ WTC Consortium also expanded the scope of their programs by paying for inpatient hospital care for the first time, using funds from the $7 million that NIOSH had set aside for this purpose. According to a NIOSH official, NIOSH pays for hospitalizations that have been approved by the medical directors of the FDNY WTC program and NY/NJ WTC Consortium through awards to the programs from the funds NIOSH set aside for this purpose. By August 31, 2007, federal funds had been used to support 34 hospitalizations of responders, 28 of which were referred by the NY/NJ WTC Consortium’s Mount Sinai clinic, 5 by the FDNY WTC program, and 1 by the NY/NJ WTC Consortium’s CUNY Queens College program. Responders have received inpatient hospital care to treat, for example, asthma, pulmonary fibrosis, and severe cases of depression or PTSD. According to a NIOSH official, one responder is now a candidate for lung transplantation, and if this procedure is performed, it will be covered by federal funds. If funds set aside for hospital care are not completely used by the end of fiscal year 2007, he said they could be carried over into fiscal year 2008 for this purpose or used for outpatient services. After receiving NIOSH’s funding for treatment services in fall 2006, the NY/NJ WTC Consortium ended its efforts to obtain reimbursement from health insurance held by responders with coverage. 
Consortium officials told us that efforts to bill insurance companies involved a heavy administrative burden and were frequently unsuccessful, in part because the insurance carriers typically denied coverage for work-related health conditions on the grounds that such conditions should be covered by state workers’ compensation programs. However, according to officials from the NY/NJ WTC Consortium, responders trying to obtain workers’ compensation coverage routinely experienced administrative hurdles and significant delays, some lasting several years. Moreover, according to these program officials, the majority of responders enrolled in the program had either limited or no health insurance coverage. According to a labor official, responders who carried out cleanup services after the WTC attack often did not have health insurance, and responders who were construction workers often lost their health insurance when they became too ill to work the number of days each quarter or year required to maintain eligibility for insurance coverage. According to a NIOSH official, although the agency had not received authorization as of August 30, 2007, to use the $50 million emergency supplemental appropriation made to CDC in May 2007, NIOSH was formulating plans for use of these funds to support the WTC treatment programs in fiscal year 2008. Officials involved in the WTC health programs implemented by government agencies or private organizations—as well as officials from the federal administering agencies—derived lessons from their experiences that could help with the design of such programs in the future. Lessons include the need to quickly identify and contact responders and others affected by a disaster, the value of a centrally coordinated approach for assessing individuals’ health, and the importance of addressing both physical and mental health effects. 
Officials involved in WTC monitoring efforts discussed with us the importance of quickly identifying and contacting responders and others affected by a disaster. They said that potential monitoring program participants could become more difficult to locate as time passed. In addition, potential participants’ ability to recall the events of a disaster may decrease over time, making it more difficult to collect accurate information about their experiences and health. However, the time it takes to design, fund, approve, and implement monitoring programs can lead to delays in contacting the people who were affected. For example, the WTC Health Registry received funding in July 2002 but did not begin collecting data until September 2003—2 years after the disaster. From July 2002 through September 2003, the program’s activities included developing the Registry protocol, testing the questionnaire, and obtaining approval from institutional review boards. Our work on Hurricane Katrina found that no one was assigned responsibility for collecting data on the total number of response and recovery workers deployed to the Gulf, and that no agency collected such data. Furthermore, officials from the WTC health programs told us that health monitoring for future disasters could benefit from additional centrally coordinated planning. Such planning could facilitate the collection of compatible data among monitoring efforts, to the extent that this is appropriate. Collecting compatible data could allow information from different programs to be integrated and contribute to improved data analysis and more useful research. In addition, centrally coordinated planning could help officials determine agency roles so important aspects of disaster response efforts are not overlooked. For example, as we reported in March 2007, federal agencies involved in the response to the Hurricane Katrina disaster disagreed over which agency should fund the medical monitoring of responders. 
We recommended that the relevant federal agencies involved clearly define their roles and resolve this disagreement so that the need may be met in future disasters. In general, there has been no systematic monitoring of the health of responders to Hurricane Katrina. Officials also told us that efforts to address health effects should be comprehensive—encompassing responders’ physical and mental health. Officials from the NY/NJ WTC Consortium told us that the initial planning for their program had focused primarily on screening participants’ physical health and that they originally budgeted only for basic mental health screening. Subsequently, they recognized a need for more in-depth mental health screening, including greater participation of mental health professionals, but the program’s federal funding was not sufficient to cover such screening. By collaborating with the Mount Sinai School of Medicine Department of Psychiatry, program officials were able to obtain philanthropic funding to develop a more comprehensive mental health questionnaire, provide in-person psychiatric screening, and, when necessary, provide more extensive evaluations. Our work on Hurricane Katrina found problems with the provision of mental health services during the response to the disaster. Not all responders who needed mental health services received them. For example, it was difficult to get mental health counselors to go to the base camps where workers lived during the response and to get counselors to provide services during off-hours to workers who did not have standard work schedules. Screening and monitoring the health of the people who responded to the September 11, 2001, attack on the World Trade Center are critical for identifying health effects already experienced by responders or those that may emerge in the future. 
In addition, collecting and analyzing information produced by screening and monitoring responders can give health care providers information that could help them better diagnose and treat responders and others who experience similar health effects. While some groups of responders are eligible for screening and follow-up physical and mental health examinations through the federally funded WTC health programs, other groups of responders are not eligible for comparable services or may not always find these services available. Federal responders have been eligible only for the initial screening examination provided through the WTC Federal Responder Screening Program. NIOSH, the administrator of the program, has been considering expanding the program to include monitoring but has not done so. In addition, many responders who reside outside the NYC metropolitan area have not been able to obtain screening and monitoring services because available services are too distant. Moreover, HHS has repeatedly interrupted the programs it established for federal responders and nonfederal responders outside of NYC, resulting in periods when no services were available to them. HHS continues to fund and coordinate the WTC health programs and has key federal responsibility for ensuring the availability of services to responders. HHS and its agencies have recently taken steps to move toward providing screening and monitoring services to federal responders and to nonfederal responders living outside of the NYC area. However, these efforts are not complete, and the stop-and-start history of the department’s efforts to serve these groups does not provide assurance that the latest efforts to extend screening and monitoring services to these responders will be successful and will be sustained over time. 
Therefore we recommended in July 2007 that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the attack on the WTC, regardless of who their employer was or where they reside. As of September 2007 the department has not responded to this recommendation. Finally, important lessons have been learned from the WTC disaster. These include the need to quickly identify and contact responders and others affected by a disaster, the value of a centrally coordinated approach for assessing individuals’ health, and the importance of addressing both physical and mental health effects. Consideration of these lessons by federal agencies is important in planning for the response to future disasters. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the committee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Helene F. Toiv, Assistant Director; Hernan Bozzolo; Frederick Caison; Anne Dievler; and Roseanne Price made key contributions to this statement. September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders. GAO-07-892. Washington, D.C.: July 23, 2007. Disaster Preparedness: Better Planning Would Improve OSHA’s Efforts to Protect Workers’ Safety and Health in Disasters. GAO-07-193. Washington, D.C.: March 28, 2007. September 11: HHS Has Screened Additional Federal Responders for World Trade Center Health Effects, but Plans for Awarding Funds for Treatment Are Incomplete. GAO-06-1092T. Washington, D.C.: September 8, 2006. September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Program for Federal Responders Lags Behind. 
GAO-06-481T. Washington, D.C.: February 28, 2006. September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Not for Federal Responders. GAO-05-1020T. Washington, D.C.: September 10, 2005. September 11: Health Effects in the Aftermath of the World Trade Center Attack. GAO-04-1068T. Washington, D.C.: September 8, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Six years after the attack on the World Trade Center (WTC), concerns persist about health effects experienced by WTC responders and the availability of health care services for those affected. Several federally funded programs provide screening, monitoring, or treatment services to responders. GAO has previously reported on the progress made and implementation problems faced by these WTC health programs, as well as lessons learned from the WTC disaster. This testimony is based on previous GAO work, primarily September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders (GAO-07-892, July 23, 2007). This testimony discusses (1) the status of services provided by the Department of Health and Human Services' (HHS) WTC Federal Responder Screening Program, (2) efforts by the Centers for Disease Control and Prevention's National Institute for Occupational Safety and Health (NIOSH) to provide services for nonfederal responders residing outside the New York City (NYC) area, and (3) lessons learned from WTC health programs. For the July 2007 report, GAO reviewed program documents and interviewed HHS officials, grantees, and others. In August and September 2007, GAO updated selected information in preparing this testimony. In July 2007, following a reexamination of the status of the WTC health programs, GAO recommended that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the WTC attack, regardless of who their employer was or where they reside. As of September 2007 the department has not responded to this recommendation. As GAO reported in July 2007, HHS's WTC Federal Responder Screening Program has had difficulties ensuring the uninterrupted availability of screening services for federal responders. 
From January 2007 to May 2007, the program stopped scheduling screening examinations because there was a change in the program's administration and certain interagency agreements were not established in time to keep the program fully operational. From April 2006 to March 2007, the program stopped scheduling and paying for specialty diagnostic services associated with screening. NIOSH, the administrator of the program, has been considering expanding the program to include monitoring--that is, follow-up physical and mental health examinations--but has not done so. If federal responders do not receive monitoring, health conditions that arise later may not be diagnosed and treated, and knowledge of the health effects of the WTC disaster may be incomplete. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC area, although it recently took steps toward expanding the availability of these services. In late 2002, NIOSH arranged for a network of occupational health clinics to provide screening services. This effort ended in July 2004, and until June 2005 NIOSH did not fund screening or monitoring services for nonfederal responders outside the NYC area. In June 2005, NIOSH funded the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide screening and monitoring services; however, DCC had difficulty establishing a nationwide network of providers and contracted with only 10 clinics in seven states. In 2006, NIOSH began to explore other options for providing these services, and in May 2007 it took steps toward expanding the provider network. However, as of September 2007 these efforts are incomplete. Lessons have been learned from the WTC health programs that could assist in the event of a future disaster. 
Lessons include the need to quickly identify and contact responders and others affected by a disaster, the value of a centrally coordinated approach for assessing individuals' health, and the importance of addressing both physical and mental health effects. Consideration of these lessons by federal agencies is important in planning for the response to future disasters.
Hedge funds typically are organized as limited partnerships or limited liability companies, and are structured and operated in a manner that enables the fund and its advisers to qualify for exemptions from certain federal securities laws and regulations that apply to other investment pools, such as mutual funds. In addition, hedge funds operate to qualify for exemptions from certain registration and disclosure requirements of federal securities laws (including the Securities Act of 1933 and the Securities Exchange Act of 1934). For example, hedge funds must refrain from advertising to the general public and can solicit participation in the fund from only certain large institutions and wealthy individuals. Although certain advisers may be exempt from registration requirements, they remain subject to anti-fraud (including insider trading), anti-manipulation, and large trading position reporting rules. For example, upon acquiring a significant ownership position in a particular publicly traded security or holding a certain level of futures or options positions, a hedge fund adviser may be required to file a report disclosing the adviser’s or hedge fund’s holdings with SEC or positions with CFTC, as applicable. Hedge funds have significant business relationships with the largest regulated commercial and investment banks. Hedge funds act as trading counterparties for a wide range of over-the-counter (OTC) derivatives and other financing transactions. They also act as clients through their purchase of clearing and other services and as borrowers through their use of margin loans from prime brokers. Hedge funds generally are not restricted by regulation in their choice of investment strategies, as are mutual funds. They may invest in a wide variety of financial instruments, including stocks and bonds, currencies, OTC derivatives, futures contracts, and other assets. 
Most hedge fund trading strategies are dynamic, often changing rapidly to adjust to fluid market conditions. In seeking to generate “absolute returns” (performance that exceeds and has low correlation with stock and bond market returns), advisers may use leverage, short selling, and a variety of sophisticated investment strategies and techniques. However, while hedge funds frequently borrow or trade in products with leverage to magnify their returns, leverage also can increase their losses. Appendix III provides examples of investment strategies used by hedge funds. Advisers of hedge funds commonly receive compensation consisting of a fixed fee of 2 percent of assets under management plus 20 percent of the fund’s annual profits. Some fund advisers can command higher fees. Since this compensation scheme rewards hedge fund advisers for exceptional performance, but does not directly penalize them for inferior performance, advisers could be tempted to pursue excessively risky investment strategies that might produce exceptional returns. To discourage excessive risk taking, investors generally insist that the advisers and principals also personally invest in their funds to more closely align principals’ interests with those of fund investors. SEC’s ability to directly oversee hedge fund advisers is limited to those that are required to register or voluntarily register with SEC as investment advisers. Recent examinations of registered advisers raised concerns in areas such as disclosure, reporting and filing, personal trading, and asset valuation. Also, under a program established in 2004, SEC oversees, on a consolidated basis, some of the largest internationally active securities firms that engage in significant hedge fund-related activities. CFTC directly oversees registered CPOs and CTAs (some of which may be hedge fund advisers) through market surveillance, regulatory compliance surveillance, an examination program delegated to NFA, and enforcement actions. 
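The asymmetry of the compensation structure described above (a fixed fee on assets plus a share of profits, with no corresponding penalty for losses) can be illustrated with a simple calculation. The sketch below is a hypothetical illustration with invented figures, not data from this statement:

```python
def adviser_compensation(assets_under_management, annual_profit,
                         management_rate=0.02, performance_rate=0.20):
    """Illustrative "2 and 20" fee: a fixed fee on assets plus a share
    of profits. The performance fee applies only to gains, so losses
    carry no corresponding penalty (hypothetical figures only)."""
    management_fee = management_rate * assets_under_management
    performance_fee = performance_rate * max(annual_profit, 0.0)
    return management_fee + performance_fee

# A $1 billion fund earning $150 million: $20M fixed + $30M performance.
print(adviser_compensation(1_000_000_000, 150_000_000))   # 50000000.0
# The same fund losing $150 million still yields the $20M fixed fee.
print(adviser_compensation(1_000_000_000, -150_000_000))  # 20000000.0
```

Because the downside is capped at zero performance fee while the upside is unbounded, the adviser's payoff resembles an option on fund performance, which is the incentive concern the statement describes.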
The banking regulators also monitor hedge fund-related activities at the institutions under their jurisdiction. For instance, in recent years regulators conducted targeted examinations and horizontal reviews that have focused on areas such as stress testing, leverage, liquidity, due diligence, and margining practices as well as overall credit risk management. Registered hedge fund advisers are subject to the same disclosure requirements as all other registered investment advisers. These advisers must provide current information to both SEC and investors about their business practices and disciplinary history. Advisers also must maintain required books and records, and are subject to periodic examinations by SEC staff. Meanwhile, hedge funds, like other investors in publicly traded securities, are subject to various regulatory reporting requirements. For example, upon acquiring a 5 percent beneficial ownership position of a particular publicly traded security, a hedge fund may be required to file a report disclosing its holdings with SEC. Also, any institutional investment adviser with investment discretion over accounts holding certain publicly traded equity securities valued at $100 million or more must file a quarterly report with SEC. SEC also plans to propose new rulemaking that would require a registered adviser sponsoring a hedge fund to identify and provide some basic information to SEC about the hedge fund’s gatekeepers (i.e., its auditor, prime broker, custodian, and administrator). In December 2004, SEC adopted an amendment to Rule 203(b)(3)-1, which had the effect of requiring certain hedge fund advisers that previously enjoyed the private adviser exemption from registration to register with SEC as investment advisers. In June 2006, a federal court vacated the 2004 amendment to Rule 203(b)(3)-1. According to SEC, when the rule was in effect (from February 1, 2006, through August 21, 2006), SEC was better able to identify hedge fund advisers. 
In August 2006, SEC estimated that 2,534 advisers that sponsored at least one hedge fund were registered with the agency. Since August 2006, SEC’s ability to identify an adviser that manages a hedge fund has been further limited due to changes in filing requirements and to advisers that chose to retain registered status. As of April 2007, 488, or about 19 percent of the 2,534 advisers, had withdrawn their registrations. At the same time, 76 new registrants were added and some others changed their filing status, leaving an estimated 1,991 hedge fund advisers registered. While the list of registered hedge fund advisers is not all-inclusive, many of the largest hedge fund advisers—including 49 of the largest 78 U.S. hedge fund advisers—are registered. These 49 hedge fund advisers account for approximately $492 billion of assets under management, or about 33 percent of the estimated $1.5 trillion in hedge fund assets under management in the United States. In fiscal year 2006, SEC took additional steps to oversee hedge fund advisers by creating an examination module specifically for hedge fund advisers and providing training for examiners in hedge fund-related topics. The new examination module outlines how the examination of a hedge fund adviser generally begins with an analysis of the adviser’s compliance program and the work of its chief compliance officer and uses a control scorecard as a guide. As part of this review of compliance programs, examiners inspect the typical activities of advisers and are expected to obtain a clear understanding of all activities of affiliates and how these activities may affect or conflict with those of the hedge fund adviser being examined. 
Examiners are to focus primarily on the following activities during their examinations of hedge fund advisers: brokerage arrangements and trading; personal trading by access persons; valuation of positions and calculations of net asset value; safety of clients’ and funds’ assets; fund investors and capital introduction; violations of domestic or foreign laws that may directly harm fund investors or other market participants, or cause harm to prime brokers; books and records, fund financial statements, and investor reporting; chief compliance officer, compliance culture, and program; and boards of directors for offshore funds (fiduciary duties to shareholders of the hedge funds and consistent disclosure to their investors). In preparation for the registration of hedge fund advisers, and because SEC does not have a dedicated group of examiners that focus on hedge funds, SEC and hedge fund industry officials noted the need for more experience and ongoing training of examiners on hedge funds’ investment strategies and complex financial instruments. SEC developed a specialized training program to better familiarize its examiners with the operation of hedge funds and to improve the effectiveness of examinations of hedge fund advisers. In that regard, from October 2005 through October 2006, SEC held about 20 examiner training sessions on hedge fund-related topics. Industry participants were instructors in many of these sessions. These sessions covered topics such as hedge fund structure, hedge fund investment vehicles, identification and examination of conflicts of interest at hedge fund advisers, risk management, prime brokerage, valuation, current and future regulation, examination issues, and investment risk. SEC continues to offer hedge fund training to examiners and other staff on an ongoing voluntary basis. SEC uses a risk-based examination approach to select investment advisers for inspections. Under this approach, higher-risk investment advisers are examined every 3 years. 
One of the variables in determining risk level is the amount of assets under management. SEC officials told us that most hedge funds, even the larger ones, do not meet the dollar threshold to be automatically considered higher-risk. As part of the overall risk-based approach for conducting oversight of investment advisers, SEC uses a database application called Risk Assessment Database for Analysis and Reporting (RADAR) to identify the highest-risk areas designated by examiners and to develop and recommend regulatory responses to address these higher-risk areas. In fiscal year 2006, RADAR identified a number of hedge fund-related risk areas, which, although not exclusive to hedge funds, require additional regulatory attention, including the following: soft dollars (e.g., paying for a hedge fund’s office space without disclosing the arrangement); market manipulation (e.g., the dissemination of false information to inflate the price of a stock); hedge fund custody and misappropriation (e.g., theft of hedge fund assets by its advisers); complexity of hedge fund products and suitability (e.g., inadequacy of policies and procedures to assess the complexity of financial instruments and the suitability of products for investors); prime brokerage relationships (e.g., potential conflicts of interest where prime brokers give hedge fund clients—who often pay large dollar amounts of commissions—priority over non-hedge fund clients regarding access to information/research); performance fees (e.g., incorrect calculation of performance fees); hedge fund valuation (e.g., inadequate policies and procedures to ensure that asset valuations are accurate); fund of funds’ conflicts of interest (e.g., conflicts of interest in fund of funds advisers’ recommendations to invest in certain hedge funds); insider trading (e.g., trading on nonpublic information); and hedge fund suitability (e.g., inadequate policies and procedures to ensure the financial qualification of 
investors). According to SEC officials, they plan to address these risks by primarily focusing on these areas during subsequent examinations. As part of its fiscal year 2006 routine inspection program, SEC conducted examinations of 1,346 registered investment advisers, of which 321 were believed to involve hedge fund advisers. SEC used its new hedge fund module, along with other modules as appropriate, to conduct the 321 examinations, which included 5 of the largest 78 U.S. hedge fund advisers. According to SEC officials, the 321 examinations found that these hedge fund advisers had the greatest deficiencies in the following areas: (1) information disclosures, reporting, and filing—e.g., private placement memorandum was outdated; (2) personal trading—e.g., quarterly reports were not filed or filed late for personal trading accounts; and (3) compliance rule—e.g., policies and procedures were inadequate to address compliance risks. Examiners also cited concerns with performance advertising and marketing of portfolio management, brokerage arrangement and execution, information processing and protection, safety of clients’ funds and assets, pricing of clients’ portfolios, trade allocations, and anti-money laundering. In our review of 9 of the 321 examinations of hedge fund advisers, we found that examiners cited deficiencies in 8 of these examinations. Deficiencies found included all of the above-mentioned categories except for trade allocations. Examiners identified concerns regarding disclosures in 5 of the examinations; in one of these, for instance, the hedge fund adviser’s marketing package did not disclose any material conditions, objectives, or investment strategies used to obtain the performance result portrayed. 
In another examination, the hedge fund adviser failed to adequately disclose to investors that a conflict of interest may be present when the hedge fund adviser places transactions through broker-dealers who have invested in the hedge fund. According to SEC officials, 294 (or approximately 92 percent) of the 321 hedge fund advisers examined received deficiency letters. Some 292 of them provided satisfactory responses to SEC that they had taken or would take appropriate corrective actions. Such actions can include advisers implementing policies and procedures to address deficiencies. Those hedge fund advisers that do not take or propose to take corrective actions for a material deficiency may be referred to SEC’s Division of Enforcement (Enforcement) for enforcement actions. According to SEC, 23 of the 321 examinations resulted in enforcement referrals, and most of these referrals regarded situations in which the adviser appeared to have engaged in fraud that harmed its clients. As part of its oversight activities, SEC has brought a number of enforcement actions involving hedge fund advisers. Sources of information that led to SEC enforcement cases included examinations, self-regulatory organizations, referrals, and tips. From October 1, 2001, to June 12, 2007, SEC brought a total of 3,937 enforcement cases, of which 113, or 2.9 percent, were hedge fund-related. These cases involve hedge fund advisers who misappropriated fund assets, engaged in insider trading, misrepresented portfolio performance, falsified their experience and credentials, or lied about past returns. As an example, in 2006, SEC brought a case against a hedge fund adviser and its former portfolio manager and charged them with making investment decisions based on nonpublic insider information that certain public offerings were about to be publicly announced. 
The hedge fund adviser agreed to pay approximately $5.7 million in disgorgement, prejudgment interest, and civil money penalty, and the former portfolio manager agreed to pay a civil money penalty of $110,000 and be barred from associating with an investment adviser for 3 years. SEC also has brought cases for inaccurate disclosure of trading strategies, undisclosed preferential treatment of hedge fund clients at the expense of other clients, market manipulation, insider trading, illegal short selling, and improper valuation of assets. During the same period, nine insider trading cases were brought against hedge fund advisers, of which five have been settled and four remain in litigation. The five settled cases resulted in disgorgements ranging from $2,736 to $7.05 million, civil penalties ranging from $8,208 to $4.7 million, a suspension, and bars from the securities industry. According to an SEC enforcement official, SEC recognized that hedge funds were becoming a prominent force in the financial industry, and in anticipation that certain hedge fund advisers would be required to register with SEC as investment advisers when the now vacated amendment to Rule 203(b)(3)-1 was under consideration, SEC created a hedge fund working group composed primarily of Enforcement and Office of Compliance Inspections and Examinations staff and participants from other divisions. The goals of this group are to enhance SEC’s staff knowledge about the hedge fund industry to aid in its oversight role and coordinate and strengthen the agency’s efforts to combat insider trading at hedge funds. Currently, SEC is conducting investigations into potential insider trading by hedge fund advisers. SEC also conducts oversight over hedge fund activities through the supervision of the regulated securities firms that transact business with hedge funds as brokers, creditors, and counterparties. SEC staff oversees some large, internationally active U.S. 
securities firms with significant hedge fund activities through its Consolidated Supervised Entity program (CSE), which was established in June 2004. Between December 2004 and November 2005, five large securities firms elected to become CSEs. The CSE program consists of four components: (1) a review of the firm’s application to become a CSE; (2) a review of monthly, quarterly, and annual filings, such as consolidated financial statements and risk reports, substantially similar to those provided to the firm’s senior management; (3) monthly meetings with senior management (senior risk managers and financial controllers) at the holding company level to review financial and risk reports and share written results of these meetings among staff and commissioners; and (4) an examination of books and records of the ultimate holding company, the broker-dealer, and material affiliates. SEC relies on a number of regulatory tools, including margin, capital, and reporting requirements, to oversee CSEs. Margin rules within the broker-dealer help protect against losses resulting from defaults by requiring hedge fund clients to provide collateral in amounts that depend on the risk of the particular position, helping to maintain the safety and soundness of the firms. Capital requirements are minimum regulatory levels of capital that a firm must hold against its risk-taking activities. These requirements can help a firm withstand the failure of a counterparty or a period of market or systemic stress. One aspect of the CSE program involves how the securities firms manage various risk exposures, including those from hedge fund activities such as providing prime brokerage services and acting as creditors and counterparties through financing and OTC derivatives transactions. 
These large integrated financial institutions may be exposed to various risks from hedge fund activities such as providing prime brokerage services through a registered broker-dealer, acting as creditors and counterparties, or owning a hedge fund. For example, the recent problems at two hedge funds sponsored by Bear Stearns Asset Management that invested in financial instruments tied to subprime mortgages (where Bear Stearns ultimately provided some secured financing to the funds) highlight such risks. As part of the application process that took place from November 2004 through January 2006, SEC examined the five securities firms’ risk management systems (market, credit, liquidity, operational, and legal and compliance), internal controls, and capital adequacy calculations, and it continues to review them on an ongoing basis. SEC did not target hedge fund activities specifically within the scope of the five application examinations, because hedge funds were not products or activities judged to pose the greatest risks to the firms. Our review of the five CSEs’ application examinations found that examination findings generally were related to firms’ documentation of compliance with rules and requirements. SEC shared the findings with the firms and has monitored the firms’ implementation of its recommendations. An SEC official said that those issues have been resolved, but more recently, SEC’s examinations of three of the firms identified a number of issues related to capital computations, operational controls, and risk management. Examination staff are addressing these issues with the firms. SEC monitors CSEs continuously for financial and operational weakness that might place regulated entities within the group or the broader financial system at risk. 
According to an SEC official, the CSE program allows SEC to conduct reviews across the five firms (i.e., cross-firm reviews) to gain insights into business areas that are material by risk or balance sheet measures, are rapidly growing, pose particular challenges in implementing the Basel regulatory risk-based capital regime, or have some combination of these characteristics. For example, in fiscal year 2006, SEC conducted two cross-firm reviews related to leveraged lending and hedge fund derivatives, and in fiscal year 2007, SEC conducted two cross-firm reviews related to securitization and private equity and principal investments. According to the official, SEC generally found that the firms were in regulatory compliance, but there were areas where capital computation methodology and risk management practices could be improved. For example, four firms modified their capital computations as a result of feedback from the leveraged lending project. For each review, SEC produced a report that described the business model, related risk management, and capital treatment for each review area, and provided feedback to each firm on where it stood among the peer firms. Although CFTC does not specifically target hedge funds, through its general market and financial supervisory activities, it can provide oversight of persons registered as CPOs and CTAs that operate or advise hedge funds that trade in the futures markets. As part of its market surveillance program, CFTC collects information on market participants, regardless of their registration status, to monitor their activities and trading practices. In particular, traders are required to report their futures and options positions when a CFTC-specified level is reached in a certain contract market, and CFTC electronically collects these data through its Large Trader Reporting System (LTRS). 
CFTC also uses the futures and options positions information reported by traders through the LTRS as part of its monitoring of the potential financial exposure of traders to clearing firms, and of clearing firms to derivatives clearing organizations. CFTC collects position information from exchanges, clearing members, futures commission merchants (FCMs), and foreign brokers and other traders—including hedge funds—about firm and customer accounts in an attempt to detect and deter manipulation. Customers, including hedge funds, are required to maintain margin on deposit with their FCMs to cover losses that might be incurred due to price changes. FCMs also are required to maintain CFTC-imposed minimum capital requirements in order to meet their financial obligations. Such financial safeguards are put in place to mitigate the potential spillover effect to the broader market resulting from the failure of a customer or of an FCM. According to CFTC officials, the demise (due to trading losses related to natural gas derivatives) in the fall of 2006 of Amaranth Advisors, LLC (Amaranth), a $9 billion multistrategy hedge fund, had no impact on the integrity of the clearing system for CFTC-regulated futures and option contracts. The officials said that at all times Amaranth’s account at its clearing FCM was fully margined and the clearing FCM met all of its settlement obligations to its clearinghouse. They also said that the approximate $6 billion of losses suffered by Amaranth on regulated and unregulated exchanges did not affect its clearing FCM, the other customers of the clearing FCM, or the clearinghouse. CFTC investigates and, as necessary, prosecutes alleged violators of the Commodity Exchange Act (CEA) and CFTC regulations and may conduct such investigations in cooperation with federal, state, and foreign authorities. Enforcement referrals can come from several sources, including CFTC’s market surveillance group or tips. 
Remedies sought in enforcement actions generally include permanent injunctions, asset freezes, prohibitions on trading on CFTC-registered entities, disgorgement of ill-gotten gains, restitution to victims, revocation or suspension of registration, and civil monetary penalties. On the basis of CFTC enforcement data, from the beginning of fiscal year 2001 through May 1, 2007, CFTC brought 58 enforcement actions against CPOs and CTAs, including those affiliated with hedge funds, for various violations. A summary of the violations cited in the actions includes misrepresentation with respect to assets under management or profitability; failure to register with CFTC; failure to make required disclosures, statements, or reports; misappropriation of participants’ funds; and violation of prior prohibitions (i.e., prior civil injunction or CFTC cease and desist order). Pursuant to CFTC-delegated authority, NFA, a registered futures association under the CEA and a self-regulatory organization, oversees the activities, and conducts examinations, of registered CPOs and CTAs. As such, hedge fund advisers registered as CPOs or CTAs are subject to direct oversight in connection with their trading in futures markets. More specifically, to the extent that hedge fund operators or advisers trade futures or options on futures on behalf of hedge funds, the funds are commodity pools and the operators of, and advisers to, such funds are required to register as CPOs and CTAs, respectively, with CFTC and become members of NFA if they are not exempted from registration. Once registered, CPOs and CTAs become subject to detailed disclosure, periodic reporting and record-keeping requirements, and periodic on-site risk-based examinations. However, regardless of registration status, all CPOs and CTAs (including those affiliated with hedge funds) remain subject to CFTC’s anti-fraud and anti-manipulation authority. Our review of NFA documentation found that 29 advisers of the largest 78 U.S. 
hedge funds (previously mentioned) are registered with CFTC as CPOs or CTAs. In addition, 20 of the 29 also are registered with SEC as investment advisers or broker-dealers. According to NFA officials, because there is no legal definition of hedge funds, NFA does not require CPOs or CTAs to identify themselves as hedge fund operators or advisers. NFA, therefore, considers all CPOs and CTAs as potential hedge fund operators or advisers. According to NFA, in fiscal year 2006 NFA examined 212 CPOs, including 6 of the 29 largest hedge fund advisers registered with NFA. During the examinations, NFA staff performed tests of books and records and other auditing procedures to provide reasonable assurance that the firm was complying with NFA rules and that all account balances as of a certain date were properly stated and classified. Our review of four of the examinations found that three of the CPOs examined generally were in compliance with NFA regulations, and the remaining one was found to have certain employees who were not properly registered with CFTC. According to examination documentation, subsequent to the examination, the hedge fund provided a satisfactory written response to NFA noting that it would soon properly register the employees. According to an NFA official, since 2003 NFA has taken 23 enforcement actions against CPOs and CTAs, many of which involved hedge funds. Some of the violations found included filing fraudulent financial statements with NFA, not providing timely financial statements to investors, failure to register with CFTC as a CPO, failure to maintain required books and records, use of misleading promotional materials, and failure to supervise staff. The penalties included barring CPOs and CTAs from NFA membership temporarily or permanently or imposing monetary fines ranging from $5,000 to $45,000.
Bank regulators (the Federal Reserve, OCC, and FDIC) monitor the risk management practices of their regulated institutions’ interactions with hedge funds as creditors and counterparties. They are responsible for ensuring that the organizations under their jurisdiction are complying with supervisory guidance and industry sound practices regarding prudent risk management throughout their business, including the guidance and practices applicable to their activities with hedge funds. The 1999 PWG report recommended that bank regulators encourage improvements in the risk management systems of the regulated entities and promote the development of a more risk-based approach to capital adequacy. In overseeing banks’ hedge fund-related activities, the bank regulators examine the extent to which banks are following sound practices as part of their reviews of banks’ capital market activities. Bank regulators conduct routine supervisory examinations of risk management practices relating to hedge funds and other highly leveraged counterparties to ensure that the supervised entities (1) perform appropriate due diligence in assessing the business, risk exposures, and credit standing of their counterparties; (2) establish, monitor, and enforce appropriate quantitative risk exposure limits for each of their counterparties; (3) use appropriate systems to identify, measure, and manage counterparty credit risk; and (4) deploy appropriate internal controls to ensure the integrity of their processes for managing counterparty credit risk. The Federal Reserve’s supervision of banks’ hedge fund-related activities is part of a broader, more comprehensive set of supervisory initiatives to assess whether banks’ risk management practices and financial market infrastructures are sufficiently robust to cope with stresses that could accompany deteriorating market conditions. 
Specifically, the Federal Reserve has been focusing on five key supervisory initiatives: (1) comprehensive reviews of firms' corporate-level stress testing practices, (2) a multilateral supervisory assessment of the leading global banks' current practices for managing their exposures to hedge funds, (3) a review of the risks associated with the rapid growth of leveraged lending, (4) a new assessment of practices to manage liquidity risk, and (5) continued efforts to reduce risks associated with weaknesses in the clearing and settlement of credit derivatives and other OTC derivatives. The bank regulators also have performed targeted examinations of the credit risk management practices of regulated entities that are major hedge fund creditors or counterparties. From 2004 through 2007, FRBNY conducted various reviews that addressed aspects of certain banks' counterparty credit risk management practices that involved hedge fund activities. These reviews were motivated by the rapid growth of the hedge fund industry and were intended to gauge progress made in improving risk management practices pursuant to supervisory guidance and industry recommendations. Examiners met with management and reviewed policies and procedures, primarily by performing transactional testing, relying on internal audits, and studying other functional regulators' reviews. According to a Federal Reserve official, while global banks have significantly strengthened their risk management practices and procedures for managing risk exposures to hedge funds, further progress is needed. For example, in a 2006 firmwide examination of stress-testing practices at certain U.S.
banks, FRBNY indicated a need for the banks “to enhance their capacity to aggregate credit exposures at the firm wide level, including across counterparties; to assess the potential for counterparty credit losses to be compounded by losses on the banks’ proprietary trading positions; and to assess the potential effects of a rapid and possibly a protracted decline in asset market liquidity.” According to this official, the Federal Reserve has begun a review of liquidity risk management practices at the largest U.S. bank holding companies, focusing on the firms’ efforts to ensure adequate funding in more adverse market conditions. Federal Reserve examiners made a variety of other recommendations as a result of the various reviews. Many of their recommendations were developed as ways that banks could continue to enhance their risk management processes associated with hedge fund counterparties. The examiners found a range of practices for counterparty stress testing for hedge funds and noted that there was room for improvement even at the banks with the most advanced practices. Where examiners identified deficiencies, specific recommendations were made. Although credit officers often adjusted credit terms for degree of transparency, examiners recommended that banks’ policies explicitly link transparency to credit terms and that banks monitor evolving credit terms for hedge fund counterparties. Moreover, examiners found that the banks that were part of the reviews needed to enhance their policies to more specifically address due diligence requirements or standards to provide clearer standards and guidance for reviewing hedge fund valuation processes. In 2005 and 2006, OCC conducted an examination of hedge fund-related activities—mainly counterparty credit risk management practices (such as due diligence of their hedge fund customer’s business), and margining and collateral monitoring processes—at the three large U.S. banks. 
OCC generally found the overall risk management practices of these banks to be satisfactory. However, examiners identified concerns about a lack of transparency in the banks' hedge fund review processes and issued recommendations accordingly. For example, examiners found that certain banks lacked adequate credit review policies that clearly outline risk assessment criteria for levels of leverage, risk strategies and concentrations, and other key parameters, as well as documentation to support the accuracy of a bank's credit analysis and risk rating system. Examiners also found that financial information provided by some hedge fund borrowers was incomplete and that banks should document the lack of such information in their credit review process. OCC noted that the banks have taken satisfactory steps in response to examination issues raised. In addition, in 2005 and 2006, FDIC conducted an examination of hedge fund lending at one of its banks. FDIC noted that the bank was not in compliance with its lending policy to diversify its hedge fund loans and that certain policies should be updated, but generally found the risk management practices of the bank's hedge fund lending program to be satisfactory. Bank regulators largely rely on their oversight of hedge fund-related activities at those regulated entities that transact with hedge funds in their efforts to mitigate the potential for hedge funds to contribute to systemic risk. Since 2004, regulators have increased their attention to these activities. In particular, bank regulators are reviewing the entities' ability to identify and manage their counterparty credit risk exposures, including those that involve hedge funds. Regulated entities have the responsibility to practice prudent risk management standards, but prudent standards do not guarantee prudent practices. As such, it will be important for regulators to show continued vigilance in overseeing banks' hedge fund-related activities.
Investors, creditors, and counterparties impose market discipline—by rewarding well-managed hedge funds and reducing their exposure to risky, poorly managed hedge funds—during due diligence exercises and through ongoing monitoring. During due diligence, hedge funds should be asked to provide credible information about risks and prospective returns. Market participants told us that growing investments by institutional investors with fiduciary responsibilities and guidance from regulators and industry groups led hedge fund advisers to improve disclosure and transparency in recent years. Creditors and counterparties also can impose market discipline through ongoing management of credit terms (such as collateral requirements). However, some market participants and regulators identified limitations to market discipline or failures to exercise it properly. For instance, large hedge funds use multiple prime brokers, making it unlikely that any single broker would have all the data needed to assess a client’s total leverage. Others were concerned that some creditors and counterparties may lack the capacity to assess risk exposures because of the complex financial instruments and investment strategies that some hedge funds use, which could illustrate a failure to exercise market discipline properly if the creditor or counterparty continued to do business with the fund. Further, regulators have raised concerns that creditors may have relaxed credit standards to attract and retain hedge fund clients, another potential failure of market discipline. By evaluating hedge fund management, the fund’s business activities, and its internal controls, investors are imposing discipline on hedge fund advisers. 
Market participants who generally transact with large hedge funds and institutional investors told us that before investing in a hedge fund, potential investors usually conduct a due diligence exercise covering the business, management, legal, and operational aspects of the hedge fund under consideration for investment. Market participants further noted that the exercise moves from an initial screening, which quickly identifies the funds that meet the potential investor's investment criteria, to a detailed evaluation that involves addressing a series of questions about the business, management, legal, and operational aspects of the hedge fund. Among other things, investors may take into account the investment strategies hedge funds use to produce their returns, the types of investments traded, and the fund's risk management practices and risk profiles. Investors analyze this information to determine whether the investment's risks and rewards warrant further consideration. Typically, prospective investors receive written information from the hedge fund manager in the form of a private offering memorandum or private placement memorandum (PPM). We could not obtain hedge fund offering documents, but market participants who have reviewed PPMs told us that there are no standard disclosure requirements for PPMs and the information disclosed is often general in scope. Consequently, investors may seek information beyond that provided in PPMs and sometimes beyond what hedge funds are willing to provide. For instance, they may request from hedge fund managers a list of hedge fund securities positions and holdings (position transparency) or information about the risks associated with the hedge fund's market positions (risk transparency).
However, according to market participants we interviewed, although most hedge funds may be willing to provide information on aggregate position and holdings, many hedge funds decline to share specific position transparency, citing the need to keep such information confidential for fear that disclosure might permit other market participants to take advantage of their trading positions to the detriment of the fund and its investors. Additionally, some prospective investors also may obtain from hedge fund managers access to the hedge funds’ prime brokers and other service providers such as auditors, lawyers, fund administrators, and accountants for background checks. A representative of a group that represents institutional investors we met with told us that after making an investment, investors typically will monitor their investment on an ongoing basis to evaluate portfolio performance and track how well investments are moving toward investment goals and benchmarks. Recently, hedge fund advisers have increased their level of disclosure in response to demands from institutional investors. Institutional investments in hedge funds have grown substantially in recent years. Over the last 3 years, institutional investors in search of higher returns and risk diversification, such as pension funds, endowments, and funds of hedge funds, have accounted for a significant portion of the inflows to hedge funds assets under management. (See app. II for information on pension plan investments in hedge funds). According to market participants and industry literature, the increasing popularity of hedge funds among these institutional investors has led to changes in the industry. That is, hedge fund advisers have responded to the requirements of these clients by providing disclosure that allows them to meet fiduciary responsibilities. 
For example, one market participant we met with stated that a trustee to a pension plan subject to the Employee Retirement Income Security Act of 1974 (ERISA) is required to make investment decisions for the plan in accordance with a "prudent person" standard of care; consequently, plan trustees may demand greater quality oversight of their capital, including greater transparency, risk information, and valuation detail, than individual investors. Market participants with whom we met also told us that the trend toward permanent capital has been driving hedge fund transparency. Market participants further noted that as hedge funds reach a certain size, they tend to seek more permanent capital through the public markets to avoid the liquidity risks inherent in sudden investor redemptions. The ability of market discipline to control hedge funds' risk taking is limited by some investors' inability to fully understand and evaluate the information they receive on hedge fund activities or their unwillingness to hire others to evaluate that information for them. An example can be found in the Amaranth case. According to market participants we interviewed and industry coverage that documented the event, Amaranth noted in its periodic letters to investors that it had a large concentration in the natural gas sector. The market participants and the documents noted that some investors became concerned about the potential risks associated with concentrated positions and withdrew their money from Amaranth several months before Amaranth failed. They also said that other investors did not heed potential warning signs included in the investor letters and kept their money in Amaranth, either in pursuit of higher investment returns or because they did not fully comprehend the changing risk profile of the hedge fund.
Regulators, market participants, and academics generally agree that hedge funds have improved disclosure and risk management practices since the LTCM crisis and have largely adopted the guidance from various industry groups and the PWG. Regulators told us that from their examinations of regulated entities that transact business with hedge funds as creditors and counterparties, they have observed that hedge fund disclosure and risk management practices have improved since LTCM. For example, in response to the 1999 PWG report recommendation that hedge funds establish a set of sound practices for risk management and internal controls, private sector entities such as the Managed Funds Association (MFA) and the Counterparty Risk Management Policy Group (CRMPG), as well as the public sector International Organization of Securities Commissions (IOSCO), published guidance for hedge funds and their advisers. Market participants told us that many hedge fund advisers with which they conduct business have adopted these best practices, including risk management models that go beyond measuring "value at risk," and now regularly stress-test portfolios under a wide range of adverse conditions. Representatives from a risk management firm told us that in the past, hedge fund advisers viewed risk management practices as proprietary. However, as the trading environment evolved, advisers realized they needed to provide the results of risk assessments to investors to attract investments. By evaluating hedge fund management, the fund's business activities, and its internal and risk management controls, creditors and counterparties exert discipline on hedge fund advisers.
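The risk measures mentioned above, value at risk and stress testing of portfolios under adverse conditions, can be illustrated with a toy calculation. The sketch below is not drawn from any firm's actual models; the portfolio, profit-and-loss history, and shock size are all hypothetical, and real implementations are far more elaborate.

```python
# Hypothetical illustration of two risk measures discussed above:
# historical value at risk (VaR) and a simple stress test. All data
# below are invented for the example.

def historical_var(daily_pnl, confidence=0.95):
    """Loss threshold exceeded on roughly (1 - confidence) of past days."""
    losses = sorted(-p for p in daily_pnl)        # express losses as positives
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

def stress_loss(positions, shocks):
    """Portfolio P&L under an instantaneous percentage shock per asset."""
    return sum(qty * price * shocks.get(asset, 0.0)
               for asset, (qty, price) in positions.items())

daily_pnl = [120, -80, 45, -200, 60, -15, 90, -130, 30, -60]   # $000s
positions = {"natgas_futures": (1000, 7.5), "equities": (500, 40.0)}
scenario = {"natgas_futures": -0.30}   # e.g., a 30 percent natural gas decline

var_95 = historical_var(daily_pnl)               # worst loss in this sample: 200
scenario_pnl = stress_loss(positions, scenario)  # -2250, far beyond the VaR figure
```

The point of stress-testing "beyond value at risk" is visible even in this toy: the scenario loss dwarfs anything in the historical sample, which is why guidance urged advisers to supplement VaR with adverse scenarios.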
According to market participants, entering into contracts with hedge funds as creditors or counterparties is the primary mechanism by which financial institutions' credit exposures to hedge funds arise, and exercising counterparty risk management is the primary mechanism by which financial institutions impose market discipline on hedge funds. According to the staff of the member agencies of the PWG, the credit risk exposures between hedge funds and their creditors and counterparties arise primarily from trading and lending relationships, including various types of derivatives and securities transactions. As part of the credit extension process, creditors and counterparties typically require hedge funds to post collateral that can be sold in the event of default. According to market participants we interviewed, collateral most often takes the form of cash or high-quality, highly liquid securities (e.g., government securities), but it can also include lower-rated securities (e.g., BBB rated bonds) and less liquid assets (e.g., CDOs). They told us they take steps to ensure that they have clear control over pledged collateral, which, according to some creditors and counterparties we interviewed, was not the case with LTCM. Creditors and counterparties generally require hedge funds to post collateral to cover current credit exposures (this generally occurs daily) and, with some exceptions, require additional collateral, or initial margin, to cover potential exposures that could arise if markets moved sharply. Creditors to hedge funds said that they measure a fund's current and potential risk exposure on a daily basis to evaluate counterparty positions and collateral. To control their risk exposures, creditors and counterparties to large hedge funds generally told us that, unlike in the late 1990s, they now conduct more extensive due diligence and ongoing monitoring of a hedge fund client.
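The collateral mechanics described above (daily coverage of current exposure plus an initial-margin buffer against potential exposure) can be sketched in simplified form. The margin rate and account figures below are invented for illustration, not any actual creditor's terms.

```python
# Simplified sketch, with invented numbers, of how a creditor might size a
# daily collateral requirement: current mark-to-market exposure plus an
# initial-margin buffer scaled to the position's notional value.

def collateral_call(current_exposure, notional, initial_margin_rate,
                    posted_collateral):
    """Additional collateral owed today (zero if already overcollateralized)."""
    required = current_exposure + initial_margin_rate * notional
    return max(0.0, required - posted_collateral)

# Hypothetical account: $2M of current exposure on a $50M notional
# position, a 5 percent initial margin rate, and $3M already on deposit.
call_amount = collateral_call(current_exposure=2_000_000,
                              notional=50_000_000,
                              initial_margin_rate=0.05,
                              posted_collateral=3_000_000)
# Required collateral is $4.5M, so the fund must post an additional $1.5M.
```

In this framing, the regulators' later concern about relaxed credit standards amounts to competitive pressure to shrink `initial_margin_rate`, which thins the buffer against potential exposure.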
According to OCC, banks also conduct "abbreviated" underwriting procedures for small hedge funds, in which they do not conduct much due diligence. OCC officials also told us that losses due to the extension of credit to hedge funds were rare. Creditors and counterparties of large hedge funds use their own internal rating and credit or counterparty risk management processes and may require additional collateral from hedge funds as a buffer against increased risk exposure. They said that as part of their due diligence, they typically request information that includes hedge fund managers' background and track record; risk measures; periodic net asset valuation calculations; side pockets and side letters; fees and redemption policy; liquidity, valuations, capital measures, and net changes to capital; and annual audited statements. According to industry and regulatory officials familiar with the LTCM episode, this was not necessarily the case in the 1990s. At that time, creditors and counterparties had not asked enough questions about the risks that were being taken to generate the high returns. Creditors and counterparties told us they currently establish credit terms partly based on the scope and depth of information that hedge funds are willing to provide, the willingness of the fund managers to answer questions during on-site visits, and the assessment of the hedge fund's risk exposure and capacity to manage risk. If approved, the hedge fund receives a credit rating and a line of credit. Several prime brokers told us that losses from hedge fund clients were extremely rare due to the asset-based lending they provided such funds. Also, one prime broker noted that in the course of monitoring the risk profile of a hedge fund client, it noticed that the hedge fund manager was taking what the broker considered to be excessive risk, and requested additional information on the fund's activity.
The client did not comply with the prime broker’s request for additional information, and the prime broker terminated the relationship with the client. Through continuous monitoring of counterparty credit exposure to hedge funds, creditors and counterparties can further impose market discipline on hedge fund advisers. Some creditors and counterparties also told us that they measure counterparty credit exposure on an ongoing basis through a credit system that is updated each day to determine current and potential exposures. Credit officers at one bank said that they receive monthly investor summaries from many of their hedge fund clients. The summaries provide information for monitoring the activities and performance of hedge funds. Officials at another bank told us that they generally monitor their hedge fund clients on a quarterly basis and may alter credit terms or terminate a relationship if it is determined that the fund is not dealing with risk adequately or if it does not disclose requested information. Some creditors also said that they may provide better credit terms to hedge funds that consolidate all trade executions and settlements at their firm than to hedge funds that use several prime brokers because they would know more about the fund’s exposure. However, large hedge funds may limit the information they provide to banks and prime brokers for various reasons. Unlike small hedge funds that generally depend on a single prime broker for a large number of services ranging from capital introductions to the generation of customized accounting reports, many large hedge funds are less dependent on the services of any single prime broker and, according to several market participants, use multiple prime brokers as a means to protect proprietary trading positions and strategies, and to diversify their credit and operational risks. 
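The limitation noted above, that a fund splitting its business across several prime brokers leaves each broker with only a partial view, can be made concrete with a toy example. All figures below are invented.

```python
# Hypothetical illustration: each prime broker can compute leverage only
# from the exposure it finances, so every broker understates the fund's
# true gross leverage. All figures are invented.

fund_equity = 100.0           # fund capital, $ millions
gross_exposure_financed = {   # $ millions of exposure financed at each broker
    "broker_a": 150.0,
    "broker_b": 200.0,
    "broker_c": 250.0,
}

# What each broker can see on its own books: between 1.5x and 2.5x.
leverage_seen_by = {broker: exposure / fund_equity
                    for broker, exposure in gross_exposure_financed.items()}

# The fund's actual gross leverage, visible only in the aggregate: 6.0x.
true_gross_leverage = sum(gross_exposure_financed.values()) / fund_equity
```

Each broker's stress tests and exposure limits operate on its partial view (at most 2.5x here), while the condition that matters for systemic risk is the 6.0x aggregate that no single counterparty observes.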
Despite improvements in disclosure and counterparty credit risk management, regulators noted that the effectiveness of market discipline may be limited or market discipline may not be exercised properly for several reasons. First, because large hedge funds use several prime brokers as creditors and counterparties, no single prime broker may be able to assess the total amount of leverage used by a large hedge fund client. The stress tests and other tools that prime brokers use to monitor a given counterparty’s risk profile can incorporate only those positions known to a trading partner. Second, the increasing complexity of structured financial instruments has raised concerns that counterparties lack the capacity (in terms of risk models and resources) to keep pace with and assess actual risk, illustrating a possible failure to exercise market discipline properly. More specifically, despite improvements in risk modeling and risk management, the Federal Reserve believes that further progress is needed in the procedures global banks use to manage exposures to highly leveraged counterparties such as hedge funds, in part because of the increasing complexity of products such as structured credit products and CDOs in which hedge funds are active participants. The complexity of structured credit products can add to the already complex task of measuring and managing counterparty credit risk. For example, another Federal Reserve official has noted that the measurement of counterparty credit risk requires complex computer simulations and that “the management of counterparty risk is also complicated further by hedge funds’ complicated organizational structures, legal rights, collateral arrangements, and frequent trading. 
It is important that banks develop the systems capability to regularly gather and analyze data across diverse internal systems to manage their counterparty credit risk to hedge funds." One regulatory official further noted the challenges institutions face in finding, developing, and retaining individuals with the expertise required to analyze the adequacy of these increasingly complex models. The lack of talented staff can affect counterparty credit risk monitoring and the ability to impose market discipline on hedge funds' risk-taking activities. Third, some regulators have expressed concerns that some creditors and counterparties may have relaxed their counterparty credit risk management practices for hedge funds, which could weaken the effectiveness of market discipline as a tool to limit the exposure of hedge fund managers. They noted that competition for hedge fund clients may have led some to reduce the initial margin in collateral agreements, reducing the amount of collateral available to cover potential credit exposure. Financial regulators and industry observers remain concerned about the adequacy of counterparty credit risk management at major financial institutions because it is a key factor in controlling the potential for hedge funds to become a source of systemic risk. While hedge funds generally add liquidity to many markets, including distressed asset markets, in some circumstances hedge funds' activities can strain liquidity and contribute to financial distress. In response to their concerns regarding the adequacy of counterparty credit risk management, a group of regulators has, over the past year, been collaborating to examine particular hedge fund-related activities across the entities they regulate, mainly through international multilateral efforts and the domestic PWG.
The PWG also has established two private sector committees to identify best practices to address systemic risk and investor protection issues and has formalized protocols to respond to financial shocks. Financial regulators believe that the market discipline imposed by investors, creditors, and counterparties is the most effective mechanism for limiting the systemic risk from the activities of hedge funds (and other private pools of capital). The most important providers of market discipline are the large, global commercial and investment banks that are hedge funds’ principal creditors and counterparties. While regulators and others recognize that counterparty credit risk management has improved since LTCM, the ability of financial institutions to maintain the adequacy of these management processes in light of the dramatic growth in hedge fund activities remains a particular focus of concern. In its July 2005 report, CRMPG noted that “credit risk and, in particular counterparty credit risk, is probably the single most important variable in determining whether and with what speed financial disturbances become financial shocks with potential systemic traits.” CRMPG further noted that no single hedge fund today is leveraged on a scale comparable to that of LTCM in 1998 and that the risk management capabilities of hedge funds had improved. Although CRMPG concluded that the chance of systemic financial shocks had declined, Treasury officials noted that regulators continually review whether the failure of one or more large market participants, including hedge funds, could destabilize regulated financial institutions or financial markets in a way that generates broader macroeconomic consequences. 
Effective market discipline requires that the creditors and counterparties to hedge funds obtain sufficient information to reliably assess clients’ risk profiles and that they have systems to monitor and limit exposures to levels commensurate with each client’s risk and creditworthiness. A number of large commercial banks and prime brokers bear and manage the credit and counterparty risks that hedge fund leverage creates. According to a Federal Reserve official, the recent growth of hedge funds poses formidable challenges, including significant risk management challenges to these market participants. If market participants prove unwilling or unable to meet these challenges, losses in the hedge fund sector could pose significant risk to financial stability. Concerns remain that creditors and counterparties face constant challenges in measuring and managing counterparty credit risk exposures to hedge funds, and in maintaining qualified staff to implement the various elements of counterparty credit risk management, including stress testing. In addition to counterparty credit risk, Treasury officials noted that regulators continually review the liquidity of markets to determine whether the trading behavior of market participants, including hedge funds, could serve as a source of systemic risk. While hedge funds often provide liquidity to stressed markets by buying securities that are temporarily distressed, herding behavior by market participants, including hedge funds, could strain available market liquidity. According to a Treasury official, “If numerous market participants establish large positions on the same side of a trade, especially in combination with a high degree of leverage, this concentration can contribute to a liquidity crisis if market conditions compel traders to simultaneously unwind their positions.” Some market participants noted that the consequences of these “crowded” trades were difficult to anticipate. 
Some Federal Reserve officials noted in a journal article that "in a crisis, interlocking credit exposures would be the key mechanism by which risks would be transmitted from one institution to another, potentially transforming a run-of-the-mill disturbance into a systemic situation." The forced sale of assets is recognized by regulators as a potential transmission mechanism for systemic risk. According to these officials, regulators in general share concerns that "in illiquid markets, hedge funds may be forced to sell positions to meet margin requirements, driving down market prices. In severe cases, the hedge fund may drive down the value of existing positions by more than they receive from the original sale, forcing further sales." However, this transmission mechanism is not unique to hedge funds but is a characteristic of leverage. Even when the failure of a hedge fund does not result in a large-scale liquidation of assets, the concerns raised by the failure can disrupt credit markets. For instance, concerns regarding the valuation of illiquid subprime mortgages, such as those held by Bear Stearns Asset Management's hedge funds, have contributed to questions about credit quality in this and other markets, and this broader questioning of credit quality may have contributed to the subsequent tightening of credit. To enhance market discipline and help mitigate the potential systemic risks that hedge fund activities could pose, financial regulators recently have increased collaboration with each other, foreign financial regulators, and industry participants. They have been conducting these efforts primarily through an international review of large financial institutions and actions initiated by the PWG.
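The transmission mechanism the officials describe, forced sales that depress prices by more than the sales raise, can be shown with a stylized one-round example. The balance sheet and price impact below are invented, not estimates of any actual fund.

```python
# Stylized toy example (invented numbers) of the forced-sale mechanism:
# selling assets pays down debt dollar-for-dollar, but the price decline
# on the remaining position is a pure loss of equity.

def forced_sale_round(shares, price, debt, sell_fraction, price_drop):
    """One round: sell a slice, retire debt with the proceeds, mark down the price."""
    sold = shares * sell_fraction
    shares -= sold
    debt -= sold * price      # sale proceeds reduce borrowing
    price -= price_drop       # forced selling depresses the market price
    equity = shares * price - debt
    return shares, price, debt, equity

# A fund holding 100 shares at $10 (assets of $1,000) financed by $800 of
# debt, i.e., $200 of equity, sells 10 percent of its position into a
# market where the sale knocks $0.50 off the price.
shares, price, debt, equity = forced_sale_round(
    shares=100.0, price=10.0, debt=800.0, sell_fraction=0.1, price_drop=0.5)
# Equity falls from $200 to $155: the $45 markdown on the remaining 90
# shares is a pure loss, which in an illiquid market can force further sales.
```

Iterating this round with a price impact that grows as liquidity dries up reproduces the spiral the officials describe, where each sale worsens the margin deficit it was meant to cure.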
As discussed earlier, hedge funds are a potential source of systemic risk if the capacity of their creditors and counterparties to value positions and manage risk does not keep pace with developments such as the increasing complexity of financial instruments and of investment strategies. Because the use of these instruments and strategies is not exclusive to hedge funds, a regulator said that collecting data on hedge fund activities to monitor buildup of this risk would be difficult and not meaningful. Instead, regulators have taken a risk-focused and principles-based approach by monitoring counterparty risk management practices across regulated entities and issuing guidance to help strengthen market discipline. Currently, regulators are reviewing issues related to the valuation of complex, illiquid, and stressed instruments by all types of entities. The PWG has also formalized protocols for coordination among the financial regulators in the event of a financial market crisis. In late 2006, FRBNY, SEC, OCC, FSA, and bank regulators of Germany and Switzerland—collectively, the “multilateral effort”—jointly conducted a review of the largest commercial and investment banks that transacted business with hedge funds as counterparties and creditors. The agencies met with nine major U.S. and European bank and securities firms to discuss risk management policies and procedures related to interactions with hedge funds through prime brokerage, direct lending, and over-the-counter derivative transactions. According to one U.S. regulator, the reviewers found that the current and potential credit exposures of these banks to hedge funds were small relative to the banks’ capital because of their extensive use of collateral agreements. However, the reviewers identified a number of issues related to the management of exposures to hedge funds and the measurement of potential exposures in adverse market conditions.
The regulators participating in this effort have been addressing these issues by gathering additional data or information to help regulators learn more about the condition and quality of the firms’ risk management practices. The regulators are conducting an ongoing follow-up review, which entails more detailed work by the principal regulator of each firm. In February 2007, the PWG issued principles-based guidance for approaching issues related to private pools of capital, including hedge funds. The principles are intended to guide market participants (for example, hedge fund advisers, creditors, counterparties, and investors), as well as U.S. financial regulators as they address investor protection and systemic risk issues associated with the rapid growth of private pools of capital and the complexity of financial instruments and investment strategies they employ. The efforts for each group of stakeholders enumerated in the principles and guidelines that the PWG issued entitled “Agreement Among PWG and U.S. Agency Principals on Principles and Guidelines Regarding Private Pools of Capital” are briefly summarized below:

“Private Pools of Capital: maintain and enhance information, valuation, and risk management systems to provide market participants with accurate, sufficient, and timely information.

Investors: consider the suitability of investments in a private pool in light of investment objectives, risk tolerances, and the principle of portfolio diversification.

Counterparties and Creditors: commit sufficient resources to maintain and enhance risk management practices.
Regulators and Supervisors: work together to communicate and use authority to ensure that supervisory expectations regarding counterparty risk management practices and market integrity are met.”

The PWG’s principles and guidelines are intended to enhance market discipline, which the PWG stated most effectively addresses systemic risk posed by private pools of capital, without deterring the benefits such pools of capital provide to the U.S. economy. According to a Treasury official involved in developing the PWG guidance, the PWG believes that self-interested, more sophisticated, informed investors, creditors, and counterparties have their own economic incentives to take actions to reduce and manage their own risks, which will reduce systemic risk overall and enhance investor protection. Also, the PWG continues to believe that regulators have an important role to play in addressing these issues. Further, in September 2007, the PWG established two private sector committees. One committee comprised asset managers, and the other comprised investors, including labor organizations, endowments, foundations, corporate and public pension funds, investment consultants, and other U.S. and non-U.S. investors. The first task of these committees will be to develop best practices using the PWG’s principles-based guidance released in February 2007 as a foundation to enhance investor protection and systemic risk safeguards. According to the mission statement of the asset managers’ committee, best practices will cover asset advisers having information, valuation, and risk management systems that meet sound industry practices. In turn, these systems would enable them to provide accurate information to creditors, counterparties, and investors with appropriate frequency, breadth, and detail.
According to the mission statement of the investors’ committee, best practices would cover information, due diligence, risk management, and reporting and build on the PWG guidelines related to disclosure, due diligence, risk management capabilities, the suitability of the strategies of private pools given an investor’s risk tolerance, and fiduciary duties. According to staff of the PWG member agencies, the PWG expects both committees to have drafts of the best practices available for public comment early in 2008 and to issue final products in the spring. Finally, recognizing that financial shocks are inevitable, the PWG told us that it adopted more formalized protocols in fall 2006 to coordinate communications among the appropriate regulatory bodies in the event of market turmoil, including a liquidity crisis. The protocols include a detailed list of contact information for domestic and international regulatory bodies, financial institutions, risk managers, and traders, and procedures for communications. According to staff of the PWG member agencies, the protocols were used to handle recent events such as the fallout from the Amaranth losses in 2006 and the losses from subprime mortgage investments by two Bear Stearns hedge funds in summer 2007. Addressing potential systemic risk posed by hedge fund activities involves actions by investors, creditors and counterparties, hedge fund advisers, and regulators. The regulators and the PWG’s recent initiatives are intended to bring together these various groups to improve current practices related to hedge fund-related activities and to better prepare for a potential financial crisis. We view these initiatives as positive steps taken to address systemic risk. However, it is too soon to evaluate their effectiveness. We provided a draft of this report to CFTC, DOL, Federal Reserve, FDIC, OCC, OTS, SEC, and Treasury for their review and comment. None of the agencies provided written comments. 
All except for FDIC and OTS provided technical comments, which we have incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to the Ranking Member of the Committee on Financial Services, House of Representatives; the Chairman and Ranking Member of the Committee on Banking, Housing, and Urban Affairs, U.S. Senate; the Ranking Member of the Subcommittee on Capital Markets, Insurance and Government Sponsored Enterprises, House of Representatives; and other interested congressional committees. We are also sending copies to the Chairman, Board of Governors of the Federal Reserve System; Chairman, Commodity Futures Trading Commission; Chairman, Federal Deposit Insurance Corporation; Secretary of Labor; Comptroller of the Currency, Office of the Comptroller of the Currency; Director, Office of Thrift Supervision; Chairman, Securities and Exchange Commission; Secretary of the Treasury; and other interested parties. We will make copies available to others upon request. The report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

To address the first objective (regulatory oversight of hedge fund-related activities), we reviewed regulatory examination documents (for example, examination modules, scoping, examination reports and findings, corrective actions taken or proposed by firms, and regulatory follow-ups).
We selected for review some of the recent examinations—conducted by the Office of the Comptroller of the Currency (OCC), Federal Reserve Bank of New York (FRBNY), Federal Deposit Insurance Corporation (FDIC), Securities and Exchange Commission (SEC), and National Futures Association (NFA)—of regulated entities engaged in transactions with hedge funds as creditors or counterparties. We reviewed examinations of financial institutions that are creditors or counterparties to hedge funds conducted from fiscal years 2004 through 2006 and other supervisory materials. We reviewed 3 OCC examinations, 7 FRBNY examinations, 1 FDIC examination, 14 (9 for hedge fund advisers and 5 for Consolidated Supervised Entities) SEC examinations, and 4 NFA examinations. We reviewed information that the federal financial regulators provided on enforcement cases brought for hedge fund-related activities. In addition, we interviewed U.S. federal financial regulatory officials to gain an understanding of how they oversee hedge fund-related activities at the financial institutions over which they have regulatory authority. More specifically, we spoke with officials from the banking regulators—OCC, Board of Governors of the Federal Reserve System, FRBNY, FDIC, and Office of Thrift Supervision; a securities regulator—SEC; and commodities regulators—Commodity Futures Trading Commission and NFA. We interviewed officials representing the Department of the Treasury (Treasury), the United Kingdom’s Financial Services Authority, and the President’s Working Group (PWG) as well. To determine which of the Institutional Investor’s Alpha Magazine 2007 Annual Hedge Fund 100 listing of global hedge fund advisers were U.S.-based and registered with SEC as a hedge fund investment adviser or with CFTC as a commodity pool operator (CPO) or commodity trading advisor (CTA), we asked the compliance staff at SEC and NFA to compare their registrants’ listing with the largest 100 listing.
Representatives from both organizations said that they made their best attempt to match the names in the largest 100 listing with the registrants’ listings, which was difficult because the names were not always identical in both listings. SEC estimates that of the 78 of the largest 100 hedge fund advisers identified by Alpha Magazine as U.S.-based, 49 were registered with SEC as investment advisers. NFA estimates that 29 of the 78 U.S.-based hedge fund advisers were registered with CFTC as CPOs or CTAs. We also reviewed prior GAO reports. To address the second objective (market discipline), we interviewed relevant market participants (such as investors, creditors, and counterparties) and regulatory officials to get their opinions on (1) how market participants impose market discipline on hedge funds’ risk taking and leveraging (and whether they have improved since 1998); (2) the type and frequency of information such participants would need from hedge fund advisers to gauge funds’ risk profiles and internal controls to make informed initial and ongoing investment decisions; and (3) the extent to which hedge fund disclosures to market participants have improved since the 1998 near failure of the large hedge fund, Long-Term Capital Management. We also interviewed large hedge funds and the Managed Funds Association—a membership organization representing the hedge fund industry. In addition, we conducted a literature search to identify research on hedge funds and reviewed a selection of relevant regulatory and industry studies, speeches, and testimonies on the matter. To address the third objective (systemic risk), we reviewed relevant speeches, testimonies, studies, principles and guidelines that the PWG issued about private pools of capital in 2007 entitled “Agreement Among PWG and U.S.
Agency Principals on Principles and Guidelines Regarding Private Pools of Capital,” regulatory examination documents and relevant industry best practices for investors, hedge fund advisers, creditors, and counterparties. We also reviewed PWG protocols (“PWG Crisis Management Protocols”) for dealing with a financial market crisis. In addition, we interviewed officials representing U.S. federal financial regulators, Treasury, and the PWG to get their views on systemic risk issues. To address pension plan investments in hedge funds discussed in appendix II, we reviewed and analyzed annual survey data from 2001 through 2006 from Pensions & Investments. Also, we reviewed Greenwich Associates data from 2004 through 2006 that focused on pensions’ hedge fund investments. We conducted data reliability assessments on the data from Pensions & Investments and Greenwich Associates that we used, and determined that the data were sufficiently reliable for our purposes. We also reviewed provisions of the Pension Protection Act of 2006 (PPA) that changed requirements for how hedge funds hold pension plan assets. We interviewed pension industry officials (such as pension plan sponsors of public and private funds, trade groups, pension consultants, pension plan and hedge fund database providers, a hedge fund law firm, and hedge funds), an academic, and regulatory officials from the Department of Labor, SEC, and Treasury to get their opinions on the matter, including trends in such investments over the last few years and the impact of PPA on pension plan hedge fund investments. We also reviewed other relevant documents. We conducted this performance audit from September 2006 to January 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents summary information about the potential impact that pension law reform may have on the ability of hedge funds to attract pension plan investments and statistics on the extent of pension plan investments in hedge funds in recent years. Section 611(f) of the Pension Protection Act of 2006 (PPA) amended the Employee Retirement Income Security Act (ERISA) to, among other things, provide a statutory definition for “plan assets,” which essentially codified, with some modification, the Department of Labor’s (DOL)—the primary regulator of pension plans—existing plan asset regulation (sometimes referred to as the 25 percent benefit plan investor test). By modifying the 25 percent benefit plan investor test, the PPA amendment has the effect of permitting hedge funds to accept unlimited investments from certain “non-ERISA benefit plans” (governmental plans, foreign plans, and most church plans) while still accepting investments from plans that are subject to ERISA (ERISA benefit plans) without becoming subject to ERISA’s fiduciary duty requirements. What constitutes “plan assets” is significant because a person who exercises discretionary authority or control over the assets of an ERISA benefit plan or who provides investment advice for a fee with respect to plan assets is a “fiduciary” subject to the fiduciary responsibility provisions of ERISA. As ERISA did not provide a definition for “plan assets” prior to the enactment of PPA, DOL, in 1986, adopted Rule 2510.3-101 to describe the circumstances under which the assets of an entity in which an ERISA benefit plan invests (for example, a hedge fund) would be deemed to include “plan assets” so that the manager of the entity (for example, a hedge fund manager) would be subject to the fiduciary responsibility rules of ERISA. 
Rule 2510.3-101 excludes from the definition of plan assets the assets of an entity in which there is no significant aggregate investment by “benefit plan investors,” which is defined to include both ERISA and non-ERISA benefit plans. Participation in an entity would be significant if 25 percent or more of the value of any class of equity securities of the entity were held by the benefit plan investors collectively (i.e., the 25 percent benefit plan investor rule). By now excluding from the 25 percent calculation those equity securities held by non-ERISA benefit plans, the allowable proportionate share of investments by ERISA benefit plans has increased. We asked several large hedge funds as well as some regulators whether hedge fund advisers were actively soliciting investments from pension plans due to the reform. They were unable to comment on whether hedge fund advisers were taking steps to attract these institutional investments. However, according to one regulator and two large hedge funds, some hedge fund advisers do not seek pension investments, and others do seek out pension investments but are careful not to reach the 25 percent threshold that would require hedge fund advisers to assume fiduciary responsibilities. According to one regulator and an industry source, pension plans are attracted to various hedge fund investment strategies, depending on their portfolio composition. They also suggested that pension plans tend to invest in hedge funds through funds of hedge funds. From 2001 through 2006, investments by defined benefit (DB) plans in hedge funds increased, but the share of total pension plan assets invested in hedge funds remained small. Two key reasons pension plans invest in hedge funds are to diversify their investment risks and increase investment returns.
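The arithmetic behind the 25 percent benefit plan investor test can be sketched briefly. The fund size and investor mix below are invented for illustration, and the sketch deliberately ignores refinements in the actual rule (for example, the exclusion of equity interests held by the fund’s manager from the calculation):

```python
# Minimal sketch of the 25 percent benefit plan investor test.
# All dollar figures are hypothetical; the real regulation contains
# additional refinements that this illustration omits.

def is_plan_assets(erisa_plan, non_erisa_plan, total, count_non_erisa):
    """True if 'benefit plan investors' hold 25 percent or more of the fund."""
    benefit_plan_holdings = erisa_plan + (non_erisa_plan if count_non_erisa else 0.0)
    return benefit_plan_holdings / total >= 0.25

# A hypothetical $100 million fund: $20 million from ERISA plans and
# $15 million from governmental (non-ERISA) plans.
erisa, non_erisa, total = 20.0, 15.0, 100.0

# Pre-PPA: ERISA and non-ERISA benefit plans both count toward the
# threshold (35% >= 25%), so the fund would hold ERISA "plan assets."
before = is_plan_assets(erisa, non_erisa, total, count_non_erisa=True)

# Post-PPA: only ERISA plans count (20% < 25%), so the fund stays below
# the threshold even though total pension money is unchanged.
after = is_plan_assets(erisa, non_erisa, total, count_non_erisa=False)

print(before, after)  # True False
```

In this hypothetical, removing the governmental plan from the numerator lets the fund accept the same ERISA money, plus unlimited non-ERISA plan money, without triggering ERISA fiduciary status.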
Much of the recent growth (and expected continued growth) in hedge fund investments is attributable to investments by institutions such as pension funds, endowments, insurance companies, and foundations. Two recent surveys of DB plan sponsors describe the prevalence of hedge fund investments. According to a Greenwich Associates survey of pension plans with $250 million or more in assets, the share of private and public DB plans (not including union plans) invested in hedge funds was 27 percent and 24 percent, respectively, in 2006. Among DB plans with $250 million to $500 million in assets, 16 percent were invested in hedge funds. About 29 percent of DB plans with $1 billion or more in assets were invested in hedge funds. The number of DB plans investing in hedge funds has increased over time. According to a survey of the largest pension plans by Pensions & Investments, the share of DB plans reporting investments in hedge funds increased from 11 percent in 2001 to 36 percent in 2006. Evidence from surveys of DB plans shows that between about 1 and 2 percent of total assets were invested in hedge funds. Among only those plans that invested in hedge funds, average allocations to hedge funds ranged from about 3 percent to 7 percent of a plan’s portfolio. A very small number of pension plans reported substantially larger allocations to hedge funds. Two of the 48 largest pension plans that reported investments in hedge funds in the Pensions & Investments survey had allocations of about 30 percent (Missouri State Employees’ Retirement System and Pennsylvania State Employees’ Retirement System—both of these plans primarily invest in hedge funds through funds of funds). See table 1. Survey data indicate that most pension plans invested in hedge funds do so, at least partially, through funds of hedge funds.
According to the Pensions & Investments’ survey, 35 of the largest 48 DB plans that reported investments in hedge funds used funds of hedge funds for at least some of their hedge fund investments. Overall, funds of hedge funds represented 54 percent of total hedge fund investments for this group. Compared with pension plans, endowments and foundations were much more likely to invest in hedge funds. Greenwich Associates’ survey found that 75 percent of endowments and foundations (with at least $250 million in assets) were invested in hedge funds in 2006. These investments amounted to slightly more than 12 percent of total assets for all endowments and foundations in their sample. According to Pensions & Investments, hedge fund investments reported among the largest pension plans increased from about $3.2 billion in 2001 to about $50.5 billion in 2006, approximately a 1,500 percent increase (see fig. 1). Furthermore, for those DB plans that reported hedge fund investments in the 2006 Pensions & Investments survey, the investments represented about 3 percent of their total DB assets under management. Hedge funds seek absolute rather than relative return—that is, they look to make a positive return in a variety of market environments, whether the overall (stock or bond) market is up or down. They use various investment styles and strategies and invest in a wide variety of financial instruments, some of which follow:

Convertible arbitrage: Typically attempt to extract value by purchasing convertible securities while hedging the equity, credit, and interest rate exposures with short positions of the equity of the issuing firm and other appropriate fixed-income related derivatives.

Dedicated shorts: Specialize in short-selling securities that are perceived to be overpriced—typically equities.

Emerging market: Specialize in trading the securities of developing economies.
Equity market neutral: Typically trade long-short portfolios of equities with little directional exposure to the stock market.

Event driven: Specialize in trading corporate events, such as merger transactions or corporate restructuring.

Fixed income arbitrage: Typically trade long-short portfolios of bonds.

Macro: Take bets on directional movements in stocks, bonds, foreign exchange rates, and commodity prices.

Long/short equity: Typically exposed to a long-short portfolio of equities with a long bias.

Managed futures: Specialize in futures trading—typically employing trend following strategies.

In addition to the contacts named above, Karen Tremba (Assistant Director), M’Baye Diagne, Sharon Hermes, Joe Hunter, Marc Molino, Akiko Ohnuma, Robert Pollard, Carl Ramirez, Omyra Ramsingh, Barbara Roesmann, and Ryan Siegel made major contributions to this report.
Since the 1998 near collapse of Long-Term Capital Management (LTCM), a large hedge fund--a pooled investment vehicle that is privately managed and often engages in active trading of various types of securities and commodity futures and options--the number of hedge funds has grown, and they have attracted investments from institutional investors such as pension plans. Hedge funds generally are recognized as important sources of liquidity and as holders and managers of risks in the capital markets. Although the market impacts of recent hedge fund near collapses were less severe than that of LTCM, they revived concerns about the risks associated with hedge funds and highlighted the continuing relevance of questions raised by the LTCM episode. This report (1) describes how federal financial regulators oversee hedge fund-related activities under their existing authorities; (2) examines what measures investors, creditors, and counterparties have taken to impose market discipline on hedge funds; and (3) explores the potential for systemic risk from hedge fund-related activities and describes actions regulators have taken to address this risk. In conducting this study, GAO reviewed regulators' policy documents and examinations and industry reports and interviewed regulatory and industry officials and academics. Regulators provided only technical comments on a draft of this report, which GAO has incorporated into the report as appropriate. Under the existing regulatory structure, the Securities and Exchange Commission and Commodity Futures Trading Commission can provide direct oversight of registered hedge fund advisers, and along with federal bank regulators, they monitor hedge fund-related activities conducted at their regulated entities. Since LTCM's near collapse, regulators generally have increased reviews--by such means as targeted examinations--of systems and policies of their regulated entities to mitigate counterparty credit risks, including those involving hedge funds.
Although some examinations found that banks generally have strengthened practices for managing risk exposures to hedge funds, regulators recommended that they enhance firmwide risk management systems and practices, including expanded stress testing. Regulated entities have the responsibility to practice prudent risk management standards, but prudent standards do not guarantee prudent practices. As such, it will be important for regulators to show continued vigilance in overseeing hedge fund-related activities. According to market participants, hedge fund advisers have improved disclosures and transparency about their operations since LTCM as a result of industry guidance issued and pressure from investors and creditors and counterparties (such as prime brokers). But market participants also suggested that not all investors have the capacity to analyze the information they receive from hedge funds. Regulators and market participants said that creditors and counterparties have generally conducted more due diligence and tightened their credit standards for hedge funds. However, several factors may limit the effectiveness of market discipline or illustrate failures to properly exercise it. For example, because most large hedge funds use multiple prime brokers as service providers, no one broker may have all the data necessary to assess the total leverage of a hedge fund client. Further, if the risk controls of creditors and counterparties are inadequate, their actions may not prevent hedge funds from taking excessive risk. These factors can contribute to conditions that create systemic risk if breakdowns in market discipline and risk controls are sufficiently severe that losses by hedge funds in turn cause significant losses at key intermediaries or in financial markets. 
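The point about fragmented visibility across prime brokers can be made concrete with a small sketch. The fund size, broker names, and position values below are invented solely for illustration:

```python
# Illustrative sketch (invented numbers): a hedge fund with $1 billion of
# capital finances gross positions through three prime brokers, so no
# single broker observes the fund's total leverage.
fund_capital = 1.0  # $ billions, hypothetical
positions_by_broker = {"broker_a": 2.0, "broker_b": 1.5, "broker_c": 1.5}

# Each broker can compute leverage only from the positions it can see.
partial_leverage = {b: p / fund_capital for b, p in positions_by_broker.items()}

# The fund's true gross leverage requires aggregating across all brokers,
# which none of them can do individually.
true_leverage = sum(positions_by_broker.values()) / fund_capital

print(partial_leverage)  # each broker sees only 1.5x to 2.0x
print(true_leverage)     # 5.0x gross leverage in aggregate
```

In this hypothetical, every broker's view of the fund (at most 2.0x) understates the aggregate 5.0x leverage, which is the gap in market discipline the report describes.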
Financial regulators and industry participants remain concerned about the adequacy of counterparty credit risk management at major financial institutions because it is a key factor in controlling the potential for hedge funds to become a source of systemic risk. Regulators have used risk-focused and principles-based approaches to better understand the potential for systemic risk and respond more effectively to financial shocks that threaten to affect the financial system. For instance, regulators have collaborated to examine some hedge fund activities across regulated entities. The President's Working Group has taken steps such as issuing guidance and forming two private sector groups to develop best practices to enhance market discipline. GAO views these as positive steps, but it is too soon to evaluate their effectiveness.
Medicare is the nation’s largest health insurance program, covering about 39 million elderly and disabled beneficiaries at a cost of more than $193 billion. Between 1990 and 1997, Medicare experienced spending increases averaging 9.8 percent per year, making it one of the fastest growing parts of the federal budget. This growth has slowed somewhat in the past 2 years. The Congressional Budget Office projects that Medicare’s share of gross domestic product will rise by almost one-third by 2009. This substantial growth in Medicare spending will continue to be fueled by demographic and technological changes. Medicare’s rolls are expanding and are projected to increase rapidly with the retirement of the baby boom generation. For example, today’s elderly make up about 13 percent of the total population; by 2030, they will comprise 20 percent as the baby boom generation ages. Individuals aged 85 and older make up the fastest growing group of beneficiaries. So, in addition to the increased demand for health care services due to sheer numbers, the greater prevalence of chronic health conditions associated with aging will further boost utilization. Medicare benefits are provided under two parts: “hospital insurance,” or part A, which covers inpatient hospital, skilled nursing facility, hospice, and some home health services, and “supplementary medical insurance,” or part B, which covers physician and outpatient hospital services, diagnostic tests, ambulance services, and other services and supplies. A BBA provision that shifted the financing of some home health services from part A to part B helped extend the HI trust fund’s solvency. Other BBA reforms, designed to slow program spending, address both Medicare’s managed care and fee-for-service components. Medicare’s managed care program covers the growing number of beneficiaries who have chosen to enroll in prepaid health plans, where a single monthly payment is made for all necessary covered services. About 6.8 million people—about 17 percent of all Medicare beneficiaries—were enrolled in more than 450 managed care plans as of December 1, 1998.
Most of Medicare’s beneficiaries, however, receive health care on a fee-for-service basis, whereby providers are reimbursed for each covered service they deliver to beneficiaries. One way in which the BBA seeks to restructure Medicare is by encouraging greater managed care participation. Under the Medicare+Choice program, a broader range of health plans, such as preferred provider organizations and provider-sponsored organizations, are permitted to participate in Medicare. BBA’s emphasis on Medicare+Choice reflects the perspective that increased managed care enrollment will help slow Medicare spending while expanding beneficiaries’ health plan options. Our recent work has examined two aspects of the Medicare+Choice program—payments and consumer information initiatives. BBA provisions dealing with payments to Medicare+Choice plans acknowledge that Medicare’s prior managed care payment method for health maintenance organizations (HMO) and other risk plans failed to save the government money and created wide disparities in payment rates across counties. The BBA establishes a new rate-setting methodology for 1998 and future years, incorporating adjustments for the health status and expected service use of managed care enrollees to avoid overpayment. It also guarantees health plans a minimum payment level to encourage them to locate in areas that previously had lower rates and few, if any, Medicare participating health plans. Other provisions addressing consumer information needs are designed to raise beneficiary participation in Medicare+Choice and promote more effective quality-based competition among plans. Context for BBA’s rate-setting provisions: BBA modifications to Medicare’s health plan payment method acknowledge the problem of flawed capitation rates that, historically, have been paid to HMOs. Our work has demonstrated that these rates have produced billions of dollars in aggregate excess payments and inappropriate payment disparities across counties.
The fundamental problem we found was that HMO payment rates were based on health care spending for the average nonenrolled beneficiary, while the plans’ enrollees tended to be healthier than average nonenrollees, a phenomenon known as favorable selection. Some analysts expected excess payments to diminish with increased enrollment. Instead, the excess continued to grow, since rates were based on the rising concentrations of higher-cost beneficiaries remaining in fee-for-service. Risk adjustment is a tool for setting capitation rates so that they reflect enrollees’ expected health costs as accurately as possible. This tool is particularly important given Medicare’s growing use of managed care and the potential for favorable selection, which, if not taken into account, generates excess payments. Medicare’s current risk adjuster—based only on demographic factors such as age and sex—cannot sufficiently lower rates to be consistent with the expected costs of managed care’s healthier population. For example, a senior who was relatively healthy and another who suffered from a chronic condition—even if they were of the same age and sex—would have very different expected health care needs; but the current risk adjuster does not take those differences into account. To correct this problem, the BBA requires HCFA to devise a new risk adjuster that incorporates patient health status factors. HCFA had to develop and report on the new risk adjuster by March 1 of this year and is required to put the method in place by January 2000. Design, implementation, and impact issues: HCFA’s proposed interim risk adjuster—to be implemented in 2000—relies exclusively on hospital inpatient data to measure health status. While not perfect, the proposed risk adjuster does link the rates paid more closely to projections of Medicare enrollees’ medical costs. 
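As a rough illustration of why a demographic-only adjuster overpays for healthier-than-average enrollees, consider a sketch with invented numbers; the actual HCFA rates and risk factors are far more detailed than this:

```python
# Hypothetical numbers, for illustration only: a county base capitation
# rate and relative risk factors. Real HCFA methodology uses many more
# demographic cells and diagnosis-based factors.
base_rate = 500.0  # hypothetical monthly rate for the average fee-for-service beneficiary

# Demographic-only adjustment: the same factor applies to any enrollee in
# a given age/sex cell, whether healthy or chronically ill.
demographic_factor = 1.00

# Health-status adjustment: prior inpatient diagnoses differentiate a
# healthy enrollee from one with a costly chronic condition.
health_status_factor = {"healthy": 0.70, "chronic_condition": 1.60}

demographic_payment = base_rate * demographic_factor
risk_adjusted = {k: base_rate * f for k, f in health_status_factor.items()}

print(demographic_payment)                  # 500.0 for both enrollees
print(risk_adjusted["healthy"])             # 350.0
print(risk_adjusted["chronic_condition"])   # 800.0
```

Under the demographic-only method, a plan that enrolls mostly healthy beneficiaries collects 500.0 per member while expecting costs closer to 350.0; tying the factor to health status narrows that gap in both directions.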
Ideally, the risk adjuster would measure health status with complete and reliable data from other settings, such as physicians’ offices, but these data are not currently available. Given the reliance on only hospital data, HCFA has taken steps to avoid rewarding plans that hospitalize patients unnecessarily or, conversely, penalizing efficient plans that provide care in less costly settings. A “next generation” of risk adjustment based on the services beneficiaries receive in all settings is scheduled for 2004. HCFA plans to phase in the use of the interim risk adjuster and, in so doing, will avoid sharp payment changes that could adversely affect beneficiaries and plans. Such changes could be detrimental to beneficiaries if plans, in response, substantially scaled back their benefit packages or reconsidered their commitment to the Medicare+Choice program. Currently, there is concern about a recent surge in plan drop-outs from Medicare+Choice. As of January 1999, 99 of the capitated plans in operation during 1998 had withdrawn or reduced their Medicare service areas. Industry representatives have stated that plans may have dropped out partially in anticipation of reduced payments, which could result when the interim risk adjuster is implemented. Plans have also cited the administrative burden associated with some of the new Medicare+Choice regulations as a significant reason for their withdrawal decisions. While some plans are dropping out of the program, others are interested in signing new contracts. In fact, 16 applications for new or expanded service areas have recently been approved and 44 more are pending. Context for BBA’s information campaign provisions: Capitalizing on changes in the delivery of health care, BBA’s introduction of new health plan options is intended to create a market in which different types of health plans compete to enroll and serve Medicare beneficiaries.
The BBA reflects the idea that consumer information is an essential component of a competitive market. From the beneficiary’s viewpoint, information on available plans needs to be accurate, comparable, accessible, and user-friendly. Informed choices are particularly important as the BBA phases out the beneficiary’s opportunity to disenroll from a plan on a monthly basis and moves toward the private sector practice of annual reconsideration of plan choice. The BBA mandated that, as part of a national information campaign, HCFA undertake several activities that could help beneficiaries make enrollment decisions regarding Medicare+Choice. Each October, prior to a mandated annual, coordinated enrollment period, HCFA must distribute to beneficiaries an array of general information on, among other things, enrollment procedures, rights, and the potential for Medicare+Choice contract termination by a participating plan. The BBA also required HCFA to provide beneficiaries with a list of available participating plans and a comparison of these plans’ benefits. The agency must also maintain a toll-free telephone number and an Internet site as general sources of information about plan options, including traditional fee-for-service Medicare. Design, implementation, and impact issues: The BBA-mandated information campaign is a first-time and massive undertaking for HCFA. The effort is well under way, but relative to the ideal—a market in which informed consumers prod competitors to offer the best value—many challenges lie ahead. One such challenge is that plans describe their benefits and coverage in varying, nonstandard terms, which makes comparisons difficult for beneficiaries. Standardized language on benefit and coverage definitions would facilitate (1) HCFA’s oversight functions to ensure accurate information, (2) plans’ compliance with reporting requirements, and (3) beneficiary decisionmaking. HCFA intends to require plans to begin using a standardized format for some information in anticipation of the November 1999 enrollment period.
HCFA is also in the process of making summary data available through several sources. In 1998, as part of a five-state pilot project, HCFA provided beneficiaries with a handbook containing comparative information on the Medicare+Choice plans available in their area and access to a toll-free telephone line. It also established an Internet site with similar information about plans available nationwide. These efforts made important strides, but because of plan pull-outs late in the year, some of the information beneficiaries received was inaccurate. Critical now is a thorough evaluation of these efforts to ensure that the information provided is clear, sufficient, and helpful to beneficiaries’ decisionmaking. Assessing how to make these efforts cost-effective—that is, targeting the right amounts and types of information to different groups of beneficiaries—is also of vital importance. The BBA also makes fundamental changes to Medicare’s fee-for-service component, which represents about 87 percent of program outlays and covers about 33 million beneficiaries. Mandated PPSs will alter how reimbursements are made to SNFs, HHAs, hospital outpatient departments, and rehabilitation facilities. Instead of generally paying whatever costs providers incur, HCFA’s mandate is to establish rates that give providers incentives to deliver care and services more efficiently. Our work on SNF and home health benefits shows the importance of the design and implementation details of PPSs to achieving expected BBA savings and ensuring that Medicare beneficiaries have access to appropriate services. Context for SNF provisions: Before the BBA, Medicare limited SNF payments for routine services, such as room, board, and administrative overhead. Payments for ancillary services, such as physical, occupational, or speech therapy, however, were virtually unlimited. These unchecked ancillary service payments have been a major contributor to significant increases in daily reimbursements to SNFs.
Because providing more of these services generally triggered higher payments, facilities had no incentive to deliver services efficiently or only when necessary. The BBA called for phasing in a PPS for SNF care beginning after July 1, 1998, to bring program spending under control. Design, implementation, and impact issues: Under the PPS, SNFs receive a payment for each day of care provided to a Medicare beneficiary. The payment, called a per diem rate, is based on the average daily cost of providing all Medicare-covered SNF services, as reflected in facilities’ 1995 costs. Since not all patients require the same amount of care, the per diem rate is “case-mix” adjusted to take into account the nature of each patient’s condition and expected care needs. Facilities that can care for beneficiaries for less than this case-mix-adjusted per diem amount will benefit financially, whereas SNFs with costs higher than the adjusted per diem rate will be at risk for the difference between their costs and the payments. The SNF PPS is expected to control Medicare spending because the per diem rate covers all services, so SNFs have an incentive to provide services efficiently and judiciously. Moreover, since payments vary with patient needs, the PPS is intended to ensure access to these services. We are concerned, however, that the design of the case-mix adjuster preserves the opportunity for providers to increase their compensation by supplying potentially unnecessary services. As stated, the SNF PPS divides beneficiaries into case-mix groups to reflect differences in patient needs that affect the cost of care. Each group is intended to define clinically similar patients who are expected to incur similar costs. An adjustment is associated with each group to account for these cost differences. A facility then receives a daily payment that is the same for each patient within a group. 
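The incentive structure of the case-mix-adjusted per diem can be sketched as follows; the base rate, group names, and weights are hypothetical illustrations rather than the actual SNF PPS figures.

```python
# Sketch of the SNF per diem payment logic: one daily rate per case-mix
# group, regardless of the costs the facility actually incurs.
# The base rate and group weights here are hypothetical.

BASE_PER_DIEM = 200.0

CASE_MIX_WEIGHT = {
    "low-care": 0.8,    # patients needing mostly routine care
    "rehab-high": 1.6,  # patients needing intensive therapy
}

def daily_payment(group):
    """Medicare pays the same case-mix-adjusted per diem for every
    patient in a group, whatever the facility's actual daily cost."""
    return BASE_PER_DIEM * CASE_MIX_WEIGHT[group]

payment = daily_payment("rehab-high")   # 320.0 per day
cost_efficient = 280.0                  # one facility's actual daily cost
cost_inefficient = 360.0                # another facility's actual daily cost

# The facility keeps the difference if it delivers care for less than
# the adjusted rate...
print(payment - cost_efficient)     # 40.0 gain per day
# ...and absorbs the loss if its costs exceed the rate.
print(payment - cost_inefficient)   # -40.0 loss per day
```

Because the payment is fixed for the group, any cost the facility shaves off, including by furnishing only the minimum services that qualify a patient for the group, accrues to the facility rather than to Medicare.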
Since the payments do not vary with the actual costs incurred, a SNF has an incentive to reduce the costs of caring for the patients in each case-mix group. One way to do so is to furnish some patients with only the minimum level of services required for placement in a particular group. This reduces the average cost for the SNF’s patients in that case-mix group but does not reduce the Medicare payments for these patients. Thus, expected Medicare savings may not be achieved. We are also concerned that the data underlying the SNF rates overstate the reasonable costs of providing services and may not appropriately reflect costs for patients with different care needs. Most of the cost data used to set the SNF rates were not audited. Of particular concern are therapy costs, which are likely inflated because there have been few limits on these payments. Even if additional audits were to uncover significant inappropriate costs, HCFA maintains that it has no authority to adjust the base rates after the implementation of the new system. Furthermore, the case-mix adjusters are based on cost information on about 4,000 patients. This sample may simply be too small to reliably estimate these adjusters, particularly given the substantial variation in treatment patterns among SNFs. As a result, the case-mix-adjusted rates may not vary appropriately to account for the services facilities are expected to provide—rates will be too high for some types of patients and too low for others. Under the SNF PPS, whether a SNF patient is deemed eligible for Medicare coverage and how much will be paid are based on a facility’s assessment of its patients and its judgment. Monitoring these assessments and determinations is key to realizing expected savings from the system. Texas, which implemented a similar reimbursement system for Medicaid, conducts on-site reviews to monitor the accuracy of patient assessments and finds a continuing error rate of about 20 percent. HCFA has no plans to undertake as extensive a monitoring effort.
However, without adequate vigilance, inaccurate, inappropriate, and even fraudulent assessments could compromise the benefits of the PPS. Context for home health provisions: Medicare spending for home health care rose even more rapidly than spending for SNF services—at an average annual rate of 27.9 percent between 1990 and 1996. Several factors accounted for this spending growth, particularly relaxed coverage requirements that, over time, have made home health care available to more beneficiaries, for less acute conditions, and for longer periods of time. Essentially, Medicare’s home health benefit gradually has been transformed from one that focused on patients needing short-term care after hospitalization to one that serves chronic, long-term-care patients as well. To control spending while ensuring the appropriate provision of services, the BBA mandated important changes in the payment method and provider requirements for home health services. HCFA is required to establish a PPS for HHAs by fiscal year 2001. Designing an appropriate system for HHAs will be particularly challenging because of certain characteristics of the benefit. Home health care is a broad benefit that covers a wide variety of patients, many of whom have multiple health conditions; and the standards for care are not well defined. Consequently, the case-mix adjuster and payment rates must account for substantial variation in the number, type, and duration of visits. Further, the wide geographic variation in the use of home health care makes it difficult to determine appropriate treatment patterns that must be accounted for in the overall level of payment. A final concern has to do with the quality and adequacy of services. Since the services are delivered in beneficiaries’ homes, oversight is particularly critical when payment changes are implemented to constrain program outlays. 
Recognizing the difficulty of developing and implementing a PPS, the BBA required HCFA to pay HHAs under an interim system. The interim system builds on payment limits already in place by making them more stringent and by providing incentives for HHAs to control the number and mix of visits to each beneficiary. Design, implementation, and impact issues: Under the interim payment system, which took effect October 1, 1997, HHAs are paid their costs subject to the lower of two limits. The first limit builds on the existing aggregate per-visit cost limits but makes them more stringent. The second limit caps total annual Medicare payments on the basis of the number of beneficiaries served and an annual per-beneficiary amount. The annual per-beneficiary amount is based on agency-specific and regional average, per-beneficiary payments, and the limit aims to control the number of services provided to users. The blending of agency-specific and regional amounts is intended to account for the significant differences in service use across agencies and geographic areas. Although many HHAs have closed since the interim payment system took effect, historic growth in the home health industry has been such that there were still over 9,000 HHAs—more than there were in October 1995—to provide services to Medicare beneficiaries. Further, half of the closures were in just four states—California, Louisiana, Oklahoma, and Texas—three of which had experienced agency growth well above the national average. The closures could be a market correction for overexpansion in light of the BBA’s signal that Medicare would not support the double-digit increases in spending of the previous few years. The closures alone are not a measure of any impact on access for Medicare beneficiaries to home health services—which is the predominant concern. Since home health agencies require little physical capital, other agencies may be able to quickly absorb the staff and patients of closing agencies.
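The "lower of two limits" mechanics of the interim system can be sketched as follows; the dollar amounts and the blend weight are illustrative assumptions, not the statutory figures.

```python
# Sketch of the home health interim payment system: an HHA is paid its
# costs, capped at the lower of an aggregate per-visit limit and an
# annual per-beneficiary limit. All figures and the blend weight are
# illustrative assumptions.

def per_beneficiary_limit(agency_avg, regional_avg, beneficiaries,
                          agency_weight=0.75):
    """Annual cap built from a blend of agency-specific and regional
    average per-beneficiary payments (the weight is assumed here)."""
    blended = agency_weight * agency_avg + (1 - agency_weight) * regional_avg
    return blended * beneficiaries

def interim_payment(actual_costs, per_visit_limit_total, per_bene_limit_total):
    """The agency receives its costs, subject to the lower of the two limits."""
    return min(actual_costs, per_visit_limit_total, per_bene_limit_total)

# An agency whose historical per-beneficiary spending ran well above its
# region's average sees its cap pulled down by the regional component.
cap = per_beneficiary_limit(agency_avg=4000.0, regional_avg=3000.0,
                            beneficiaries=100)        # 375000.0
payment = interim_payment(actual_costs=420_000.0,
                          per_visit_limit_total=400_000.0,
                          per_bene_limit_total=cap)
print(payment)  # 375000.0 (the per-beneficiary cap binds)
```

The blending shown in `per_beneficiary_limit` is what tempers the cap for agencies whose own service use differs sharply from the regional norm, while the `min` in `interim_payment` is what constrains aggregate outlays.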
We have attempted to monitor the impact of the interim payment system on access for this Committee as well as for the House Committees on Commerce and Ways and Means. Last fall, we reported that interviews with hospital discharge planners and local aging organization representatives in seven states with high numbers of closures had not indicated a change over the past year in the willingness or ability of HHAs in their areas to serve Medicare beneficiaries. We are continuing this work, expanding the number of areas examined. Recently available claims information will allow us to extend this monitoring further—pinpointing areas where there has been a decline or leveling off of home health utilization. We will provide the Committee a report next month and another this summer on our ongoing work to assess access to home health care. The brief experience with some of the major Medicare provisions of the BBA demonstrates the challenges to implementing meaningful reform. HCFA has fallen behind in instituting some changes and has had difficulty implementing others because of constrained resources, lack of experience, or inadequate data. At the same time, various provider groups have increasingly come to the Congress for relief. We believe that any significant alterations to key BBA provisions should be based on thorough analysis or sufficient experience to fully understand their effects. Mr. Chairman, this concludes my statement. I will be happy to answer any questions you or the Committee Members may have.
Pursuant to a congressional request, GAO discussed the implementation and impact of the Medicare provisions in the Balanced Budget Act of 1997 (BBA), focusing on: (1) the Medicare Choice program, particularly the payment method and consumer information efforts; and (2) prospective payment systems (PPS) for skilled nursing facilities (SNF) and home health agencies (HHA) in Medicare's traditional fee-for-service program. GAO noted that: (1) changes of the magnitude of those in the BBA require significant efforts to implement well and are subject to continual scrutiny; (2) GAO recently reported that the efforts of the Health Care Financing Administration (HCFA) to put the BBA provisions in place have been extensive and noteworthy, and the agency has made substantial progress in implementing the majority of the Medicare-related BBA mandates; (3) at the same time, it has encountered obstacles; (4) intense pressure to resolve year 2000 computer compliance issues has slowed HCFA's efforts; (5) in undertaking certain major initiatives, the agency has had to cope with inadequate experience and insufficient information; (6) thus, achieving the objectives of the BBA will require HCFA to refine and build on its initial efforts; (7) reforms of the payment methods for Medicare Choice plans are under way; (8) the withdrawal of some managed care plans has raised questions about how to maintain desired access for beneficiaries while implementing needed changes to plan payments and participation requirements; (9) HCFA has also initiated an information campaign to provide beneficiaries with new tools to make informed health plan choices and create stronger, quality-based competition; (10) some aspects of the campaign have only been piloted and certain problems did develop; refining these efforts to make them more useful and effective for beneficiaries is now critical; (11) the BBA's mandate to replace cost-based reimbursement methods with PPS constitutes another major program reform; 
(12) the phase-in of the PPS for SNFs began on schedule on July 1, 1998; (13) design flaws and inadequate underlying data used to establish the payment rates may compromise the system's ability to meet the twin objectives of slowing spending growth while promoting appropriate beneficiary care; (14) GAO has not found evidence that the closures or the interim payment system has significantly affected beneficiary access to home health care; (15) GAO's monitoring of potential access problems is continuing as more data on any effects of the interim system become available; (16) the impact of BBA's significant transformations of Medicare could generate pressure to undo many of the act's provisions; (17) in this environment, Congress will face difficult decisions that could pit particular interests against a more global interest in preserving Medicare for the long term; and (18) GAO believes that it would be a mistake to significantly modify BBA's provisions without thorough analysis or giving them a fair trial over a reasonable period of time.
The AIM-9 family of air-to-air missiles has protected U.S. fighter aircraft for over 40 years, but now there are more modern foreign missiles that may present a threat to U.S. aircraft. The U.S. Navy and Air Force considered buying a foreign missile but determined that the best solution to meet U.S. requirements was to extensively upgrade the current AIM-9M missile. The services selected Hughes Missile Systems Company (now the Raytheon Corporation) to develop and produce a very maneuverable missile that, together with a new helmet-mounted cueing system, is expected to be the best in the world. The AIM-9 Sidewinder family of air-to-air missiles is carried on all tactical fighter aircraft and is used at short ranges when target aircraft are too close for radar-guided missiles to be effective. The Sidewinder was first deployed in the 1950s—as the AIM-9B. Over the years, improvements were made as new models were introduced. The missiles have been sold to many friendly countries. The current missile, the AIM-9M, evolved in 1978. U.S. fighter aircraft equipped with the AIM-9M missile, however, are facing modern foreign-built missiles and advanced cueing/targeting systems. The rules of engagement for U.S. pilots require that, in many situations, they make a positive identification before firing on an adversary. This results in the pilot’s not being able to fire until the target aircraft is well within visual range. At combat speeds such an encounter can quickly evolve into a close-in fight, during which a short-range missile is required. A joint Navy and Air Force study predicts that a significant percentage of air-to-air encounters will result in a close-in fight. In April 1996, the Air Force Chief of Staff testified that U.S. pilots have the fourth best short-range missile in the world.
Modern short-range missile systems with their cueing/targeting systems can engage targets throughout the forward hemisphere of the aircraft, providing a decisive advantage in a close-in fight. The services are trying to develop tactics and countermeasures to neutralize these threats, but there is general agreement that a more capable U.S. short-range missile system is needed as soon as possible. In the 1970s, the United States and several European countries signed a Memorandum of Agreement that specified that the Europeans would develop a new short-range missile to replace the AIM-9 Sidewinder series. That missile became the Advanced Short-Range Air-to-Air Missile (ASRAAM). In the late 1980s, however, the European consortium dissolved. When the consortium dissolved, the Navy and the Air Force reexamined U.S. requirements and determined that the ASRAAM did not have the capability they required. The United States subsequently left the ASRAAM program. The two services then worked on separate upgrades to the AIM-9M. After false starts with their separate programs, a joint Navy and Air Force program with the Navy as lead service was started to extensively upgrade the AIM-9M. The upgraded missile is the AIM-9X. As a part of the alternative evaluation process before starting the AIM-9X program, the services considered acquiring one of the modern foreign missiles such as the Russian AA-11, the Israeli PYTHON 4, or the British ASRAAM as an alternative to developing a new U.S. missile. DOD determined, however, that none of these missiles was able to meet all of the U.S. requirements. The services conducted an evaluation of the ASRAAM, including a 6-month Foreign Comparison Test that included firing the missile from a U.S. F-16 aircraft. The ASRAAM is electrically and physically compatible with U.S. aircraft and uses the same infrared sensor as the AIM-9X. The evaluation, however, showed that ASRAAM does not meet all of the U.S. performance requirements.
Also, the evaluation showed that, because of the additional time and cost that would be needed to upgrade, test, and integrate ASRAAM for U.S. aircraft, it offered no advantage over the proposed AIM-9X missile. During the 2-year AIM-9X concept development phase, the services analyzed user needs, current and future threats, and available technology to determine the requirements for the new missile. The resulting AIM-9X system requirement has five key performance parameters: the ability to operate during the day or at night; the ability to operate over land and at sea in the presence of infrared countermeasures; weight, size, and electrical compatibility with all current U.S. fighters and the F-22; the ability to acquire, track, and fire on targets over a wider area than the AIM-9M; and a high probability that a missile launched will reach and kill its target. The analyses showed that user requirements could be met and that technical risk could be reduced, by modifying the existing AIM-9M and developing a new targeting/cueing system. The AIM-9X missile is planned to have increased resistance to countermeasures and improved target acquisition capability over the AIM-9M. It will have a new infrared seeker, a tracker to interpret what the seeker sees, a streamlined missile body, and rocket motor thrust vectoring for improved maneuvering. It will be carried on all U.S. fighter aircraft, including the F/A-18, F-15, F-16, and F-22. An 18-month AIM-9X competitive demonstration and validation program began in 1994 with the Hughes Missile Systems Company and the Raytheon Corporation as the competing contractors. Both companies demonstrated, among other things, how they would reduce the technical risk of developing the AIM-9X missile. Examples of demonstration and validation work include trade studies, simulating missile performance, analyzing missile compatibility with Navy and Air Force aircraft, and flight testing target-tracking capability. 
Additionally, the contractors were required to plan for manufacturing the missile, including identifying new or unique processes and special tooling and facilities requirements. Hughes was selected as the AIM-9X missile contractor in December 1996. Hughes has total performance responsibility, including development, production, and maintenance support for the missile. Engineering and manufacturing development began in January 1997 and is planned to end in 2001. The services plan to buy a total of 10,000 missiles at an average unit cost of $264,000 (then-year dollars). The AIM-9X missile is shown in figure 1.1. A separate, parallel program is developing a helmet-mounted cueing system that would allow U.S. pilots to aim the AIM-9X missile seeker toward a target aircraft by turning their heads and looking at the target. The pilot can then fire the missile without having to turn the aircraft toward the target, increasing the probability of killing a hostile aircraft before it can launch a missile. Another effort is developing the necessary hardware and software modifications to integrate the missile and helmet into the aircraft. All three elements of the AIM-9X weapon system—the missile, helmet, and aircraft modifications—are seen as critical to countering the capabilities of modern threat missiles. The Chairman, Subcommittee on Military Research and Development, House Committee on National Security, requested that we provide an independent assessment of the AIM-9X program’s status. Our objectives were to determine (1) the services’ efforts to reduce missile development risk, (2) the missile program’s plan to transition from development to production, and (3) the importance of separately managed but essential supporting systems. To evaluate the missile’s development risk, we visited the program office and the contractor where we discussed technology and schedule risk. We reviewed the program acquisition and test plans. 
We visited the Naval Air Weapons Center at China Lake, California, where we discussed the missile program’s technology and schedule with the government short-range missile experts. We reviewed reports prepared by the contractors during the program demonstration and validation phase. We also reviewed several studies of foreign missiles, including the Senior Review Team analysis of the ASRAAM program. To assess the missile program’s plan to transition from development to production, we examined the planned development and operational test schedules and production plans. We considered the amount and type of testing that is planned to be accomplished before the first and subsequent production decisions. We discussed test plans and potential risks with program, contractor, and DOD officials charged with managing and overseeing missile flight testing. We also reviewed our previous reports on other major acquisition systems with regard to readiness to enter low-rate initial production. We reviewed the helmet-mounted cueing system, a separately managed but essential supporting system, to determine its importance to the AIM-9X system. We discussed program technical issues with program managers. We also compared schedule plans for the AIM-9X missile, helmet-mounted cueing system, and associated aircraft modifications. During the course of this review, we met with representatives from the DOD Inspector General, Naval Air Systems Command, and Air Force Headquarters, Washington, D.C.; Commander in Chief, Atlantic Fleet, Norfolk, Virginia; Naval Air Weapons Center, China Lake, California; Air Combat Command, Langley Air Force Base, Virginia; Aeronautical Systems Center and National Air Intelligence Center, Wright-Patterson Air Force Base, Ohio; ASRAAM Senior Review Team, Baltimore, Maryland; and Hughes Missile Systems Company, Tucson, Arizona. We performed our audit between July 1996 and October 1997 in accordance with generally accepted government auditing standards. 
The AIM-9X missile development program is designed to balance the requirements for a more capable short-range missile with the users’ limited resources and the need to field the new missile as soon as possible. Key elements of the approved development plan are strategies to reduce technical risk and incentives to lower cost and ensure schedule performance. By early 1999, when the AIM-9X missile design is expected to be finalized and flight tests are underway, a more accurate assessment of the program status can be made. Technology problems are often the cause of cost growth and schedule delays in development programs. To help ensure a successful AIM-9X missile development program, the services have adopted several strategies to minimize technical risk. Among these are using existing subsystems, components, and items not requiring development; conducting a competitive demonstration and validation of new technology; and combining government and contractor technical expertise through integrated product teams. The AIM-9X missile will use some existing subsystems that do not require development. For example, several key components are identical to those used in the AIM-9M missile, including the warhead, rocket motor, and fuze. These components satisfy user requirements and can be obtained either from existing inventory missiles or from new production. In either case, the design and production processes for these items are tested and proven. The winning Hughes missile design also includes many nondevelopmental items. For example, Hughes will use fins, an airframe, and an engine control system previously developed and tested by the Air Force. The cryoengine, which cools the missile sensor, is a modified version of a similar device used in other systems. These components do not require lengthy development and testing but will require some modification for the AIM-9X. Hughes officials told us that over 70 percent of the missile design uses parts that do not require development.
The company also estimates that 66 percent of AIM-9X missile software can be obtained from existing programs. To help anticipate, identify, and solve technical problems, the government’s technical experts in short-range missile development have been added to the Hughes AIM-9X development team as a part of the integrated product teams concept. Technical experts from the Naval Air Warfare Center at China Lake, California, and the Aeronautical Systems Center at Eglin Air Force Base, Florida, are now a part of the AIM-9X team. Under this teaming approach, the combined knowledge and efforts of both contractor and government are focused on the development process. Hughes has also implemented a comprehensive technical risk assessment system that identifies and tracks all known technical risks in the program. Each risk is described, quantified, monitored, and reported. For example, Hughes has assessed the guidance and control and thrust vectoring system as moderate to low-risk items. The company has developed management plans to address these risks. Affordability is a central objective of the AIM-9X missile program. The emphasis on cost began during the requirements definition process, continued through the demonstration and validation phase, was a factor in the selection of the development contractor, and is an integral part of the program acquisition strategy. As a DOD flagship program for the Cost as an Independent Variable Initiative—under which cost is considered more as a constraint and less as a variable—the AIM-9X program has incorporated a series of acquisition reforms to focus both government and contractor efforts to reduce and control program costs. As a program objective, AIM-9X affordability is second only to achieving the missile’s key performance characteristics. Low cost was and remains one of the users’ critical requirements for the system. 
During the concept development phase, an assessment of needed capabilities and anticipated cost considered the projected threat, available and emerging technologies, and projected resources. Performance and cost trade studies identified the minimum essential performance requirements and determined they could be obtained at an acceptable cost if the AIM-9M was upgraded with a new sensor and airframe instead of developing an entirely new missile. Reducing AIM-9X missile development and production cost and obtaining high confidence in the contractors’ cost estimates and cost management approach were key objectives of the 18-month demonstration phase. Under the competitive pressure of the winner-take-all development contract, the government required the contractors to establish design-to-cost goals and implementation plans, conduct affordability and producibility studies, and propose a production quantity and price structure. According to the program manager, this emphasis on cost control and cost management both reduced the expected cost of the program and increased the program office’s confidence that the contractor’s development and production cost proposal was sound and likely to be achieved. Eight initiatives were pursued during the demonstration phase to reduce program costs with only minor changes to the system’s performance requirements, resulting in an estimated cost avoidance of $1.2 billion. Examples of successful reductions include relaxing computer processing time requirements (which eliminated one circuit board) and standardizing missile seeker cooling methods (which eliminated the need for two different cooling systems). The AIM-9X missile program has adopted several strategies to establish a realistic and achievable development schedule that provides the first missiles to Navy and Air Force fighter units as soon as possible.
Principal among these strategies is the requirement that Hughes develop and follow a detailed integrated master plan and master schedule. The program manager told us that the government strategy for reducing schedule risk on the AIM-9X program has been to encourage the contractor to develop and follow soundly based development plans. Accordingly, both contractors were required to develop and submit integrated master plans and schedules for development and low-rate initial production during the demonstration phase. Following the successful demonstration phase, Hughes and the missile program office reexamined the proposed development schedule. On the basis of that reexamination, they agreed to reduce the development schedule from 68 to 61 months and to begin low-rate initial production a year earlier, thereby lowering development cost by $35 million. This reduction, according to the program manager, was made possible by the Hughes comprehensive development and test schedule. The AIM-9X missile development program contains a series of strategies to reduce technical risk and incentives to lower cost and ensure schedule performance. Whether program efforts to reduce technical, cost, and schedule risk will succeed will not be known for at least another year. Both program and contractor officials told us that most of the AIM-9X missile development will be completed by the spring of 1999. At that time, the AIM-9X design will be finalized, assembly of engineering development missiles will be underway, and development flight testing will be in process. The missile program manager believes any remaining development risk will be well understood at that time. In an effort to initiate AIM-9X missile production as soon as practical, the services plan to make the low-rate initial production decision in early 2000.
This production decision is to be made before completing development flight tests, before adequately testing production representative missiles, and before full operational testing begins. This plan risks later discovery of problems requiring design changes and the associated cost, schedule, and performance impacts. We believe initiating low-rate initial production before developmental flight testing is complete and before there is some operational testing with production representative missiles adds unnecessary risk to the production program. The services plan to begin AIM-9X missile low-rate initial production in early 2000 by exercising the first production contract option for 150 missiles. A year later, the second production contract option for 250 missiles is to be exercised. Figure 3.1 shows the program’s planned test and production decision schedule. As figure 3.1 shows, the low-rate initial production decision for the AIM-9X missile is to be made about 1 year before completion of the planned developmental flight test program. All of the flight tests to be conducted before the missile low-rate initial production decision, including those to be conducted as part of the preliminary operational testing, will use engineering development missiles. These missiles are manufactured early in the development program and represent the contractor’s design before any significant flight testing begins. These flight tests will also use development level software and may not incorporate the helmet until the last several flights. Later in the development program, changes to the missile design are likely as the test results and manufacturing improvements are incorporated in production representative missiles. These test missiles are intended to be very close in physical configuration and performance to the AIM-9X production missile. They are to be used during the last phase of the developmental flight tests and for all of the operational flight tests. 
Developmental and independent operational flight testing using production representative missiles is scheduled to begin at about the same time as the low-rate initial production decision and continue for about 2 years. These tests expand upon earlier developmental testing, verify design changes incorporated in the production representative missiles, and focus on the system’s operational effectiveness and suitability. These test results, however, will not be available until after low-rate initial missile production begins, with most operational flight tests occurring after the second missile production contract is exercised. Indeed, the first low-rate initial production missiles are expected to be delivered before the operational testing is complete. The significant body of developmental and operational flight testing planned after the low-rate initial production decision point is important to realistically demonstrate and assess the AIM-9X weapon system’s ability to meet its minimum acceptable requirements for performance and suitability without major or costly design changes. Should problems be disclosed in these tests necessitating changes to the missile design, the missile cost, schedule, and performance may be adversely affected. Moreover, because the low-rate initial production missiles are to be deployed directly to operational units, such changes would directly affect operating units. We recommend that the Secretary of Defense direct the Secretaries of the Navy and the Air Force to revise the AIM-9X missile’s acquisition strategy to allow for the completion of all developmental flight tests and enough operational flight tests with production representative missiles to demonstrate that the missile can meet its minimum performance requirements before low-rate initial production begins. DOD did not concur with the recommendation, stating that adequate testing is planned prior to the low-rate initial production decision for an informed decision. 
The performance data to support the low-rate initial production decision will be based on incomplete testing of developmental missiles and software. Flight testing of the production representative missiles and associated systems is scheduled to begin more than a year after the planned production decision. As we have reported previously, many weapon systems that start production without operational tests to gain assurance of satisfactory performance later experience significant operational effectiveness and/or suitability problems. All three elements of the AIM-9X weapon system—the missile, the helmet-mounted cueing system, and the associated aircraft modifications—must be present and properly working together to ensure that U.S. fighters can prevail against modern threat missiles. The services are closely coordinating the separate development programs and plan to test all of the elements together during AIM-9X flight testing. However, there is no requirement that production representative versions of the missile, helmet, and associated aircraft modifications be successfully demonstrated together before the AIM-9X missile goes into low-rate initial production. Moreover, helmets and associated aircraft modifications are not linked to the approved AIM-9X missile production and funding plans. By not requiring that the missile, helmet, and aircraft modifications be tested, produced, and deployed together, as a “system of systems,” DOD risks fielding a missile unable to prevail in aerial combat. To help them prevail in the close-in air battle, U.S. pilots are going to need not only the AIM-9X missile, but also the helmet and associated aircraft modifications. The Russians and Israelis have already developed, produced, and deployed short-range missile systems with helmet-mounted cueing systems. The Russian AA-11 missile and helmet system have been widely exported.
The British, French, and other nations are also developing modern missiles. While the AIM-9X missile with the helmet is expected to be superior to all of them, the missile alone is not. Figure 4.1 illustrates the relative capabilities of the AIM-9X system of systems, the AA-11, and the AIM-9M, which is currently operational. Service officials told us that the rules for engaging enemy aircraft and the requirement for positive identification of targets increase the likelihood of close-in air battles in the future. While the AIM-9X and other missiles can be used at longer ranges, the positive identification requirement, together with the speed and agility of modern fighter aircraft, can quickly transform the fight into a close-in air battle where the advantage is held by the aircraft that can lock-on to its adversary and shoot first. As figure 4.1 shows, the AIM-9X missile without a helmet is expected to have greater lethal range than the AIM-9M and the AA-11. Without the helmet, however, a U.S. pilot would be unable to take full advantage of the AIM-9X capability to take the critical first shot that often determines the survivor in a close-in air battle. This first shot capability is achieved by the combination of the (1) helmet and the missile sensor acquiring a target well off to the side of the aircraft, as well as in front of it and (2) computer software that links the pilot’s helmet, the missile, and the aircraft fire control system. As shown in the figure, the AIM-9X system (missile, helmet, and aircraft modifications) is expected to have a distinct advantage over the AA-11 missile. In commenting on a draft of this report, DOD stated that the projected range and sensor tracking capability of AIM-9X without the helmet-mounted cueing system is equivalent to the capability of the AA-11 threat missile in azimuth and exceeds the capability of the AA-11 in range. 
DOD’s position is based on using the fighter aircraft radar to cue the AIM-9X missile to the target of interest when it is beyond the view of the aircraft’s heads-up display. Using the radar to cue the missile, however, will take more time and be less certain than with the helmet and will require DOD to train pilots in yet to be developed procedures and tactics that would be considerably different than current practices for aerial combat. Moreover, DOD officials we spoke with agreed that it is questionable whether DOD can meet its own positive identification requirement using the aircraft radar for cueing purposes. The AIM-9X missile, helmet, and associated aircraft modifications are being developed under separate but closely coordinated programs. The missile and helmet contractors have negotiated detailed working agreements to ensure the missile, helmet, and aircraft modifications are developed to operate together and to be fully compatible with both Navy and Air Force aircraft. While each development program will test its system independently, the missile, helmet, and aircraft modifications are also planned to be tested together as a part of AIM-9X missile flight testing. An early operational assessment of the combined system, including five flight tests, is planned prior to the AIM-9X low-rate initial production decision. Then, for the next 2 years, production representative missiles, helmets, and aircraft software are to be tested under both developmental and realistic operating conditions. While plans are in place to perform total system testing with the missile, helmet, and aircraft modifications prior to the initial AIM-9X missile low-rate initial production decision, those tests will not be done using production representative hardware and software. 
Moreover, there is no formal requirement that sufficient total system testing take place prior to starting missile low-rate initial production to demonstrate that the AIM-9X weapon system can meet its key performance parameters. We are concerned about this because of the criticality that all three elements work together to ensure that the AIM-9X system will prevail against modern threat missiles. If technical problems delay development of the helmet or aircraft modifications, missile testing will proceed to support the low-rate initial production decision. At that time, the ability of the AIM-9X system to achieve its performance parameters will not be known. There is an approved and funded AIM-9X production plan to acquire 10,000 missiles over 18 years beginning in 2000; however, no such production plan or approved funding exists for the helmet or for the associated aircraft modifications. We were told by the helmet program manager that each of the aircraft program offices must plan and budget for helmets and associated modifications consistent with their needs and resources. All elements of the AIM-9X weapon system must be in place to achieve the program’s objective, which is to ensure that Navy and Air Force fighters prevail in close-in aerial combat. Without a requirement that all elements of this system of systems be tested together, produced together, and deployed together, the full capability of the system will not be realized. Until the weapon system is tested and evaluated using production representative missiles and helmets, DOD decisionmakers will not have information on whether the AIM-9X weapon system’s key performance parameters are achievable. 
We recommend that the Secretary of Defense direct the Secretaries of the Navy and the Air Force to revise the AIM-9X missile acquisition strategy to allow for enough operational testing of the missile, helmet, and associated aircraft modifications to be accomplished, using production representative hardware and software, to demonstrate that the AIM-9X system can meet its minimum performance requirements before low-rate initial production begins. We also recommend that the Secretary of Defense direct the services to provide a coordinated production, deployment, and funding plan for all three elements of the system. On the first recommendation, DOD did not concur and stated that significant improvement over the current operational system is possible with the AIM-9X missile alone. DOD added that it would not be prudent to delay the missile development and testing to provide concurrent development and test demonstration with the helmet and aircraft modifications. On the second recommendation, DOD partially concurred and stated that it would continue to coordinate all three elements of the system but would not formally tie the three elements together. DOD expressed concern that insisting that the schedules for the missile, helmet, and aircraft modifications remain synchronized risks burdening it with higher costs if one element falls behind schedule and the other elements have to proceed at a reduced, inefficient level. The objective of the AIM-9X program has been to develop a system that will provide the capability to prevail in aerial combat against modern threat missiles. Using the missile without the helmet will not provide that capability and will require DOD to train pilots in yet to be developed procedures and tactics that would be considerably different than current practices for aerial combat.
Although there are risks in continuing to synchronize the helmet and missile schedules, we believe that DOD would be accepting more risk than necessary by committing to low-rate initial production of the missiles before demonstrating, using production representative hardware and software, that the total AIM-9X system can meet its minimum performance requirements. Following are our comments on the Department of Defense’s (DOD) letter dated January 16, 1998. 1. The last AIM-9X schedule that we reviewed indicated that one test firing of a production representative missile is planned to occur within days of the low-rate initial production decision. Should these two development (vice operational) tests be accomplished as DOD now proposes, the detailed assessment of the test results will not be available to decisionmakers. 2. Figure 4.1 has been modified to indicate the potentially greater level of lethal azimuth of the AIM-9X when the missile is cued by the aircraft radar. However, that radar cueing of the missile is neither as fast nor as certain as with the helmet. Also, procedures and tactics for using the radar cueing capability with the AIM-9X would have to be developed and pilots would have to be trained. 3. Our recommendation addresses only those aircraft modifications needed to integrate the AIM-9X missile and the new helmet into each aircraft. Other aspects of the operational flight program should not be affected.
Pursuant to a congressional request, GAO reviewed the development status of the AIM-9X missile program and its concerns about the testing and production of all elements of the AIM-9X weapon system, focusing on the: (1) services' efforts to reduce missile development risk; (2) missile program's plan to transition from development to production; and (3) importance of separately managed but essential supporting systems. GAO noted that: (1) the AIM-9X missile program includes many initiatives to reduce the risk of technical, cost, and schedule problems; (2) it uses many existing subsystems, components, and items not requiring development, and government and contractor technical experts have joined together in integrated product teams; (3) in addition, the services conducted a competitive demonstration and validation of new technologies to reduce technical risk; (4) GAO is concerned, however, about two situations; (5) the plan to start missile low-rate initial production about 1 year before completing development flight testing and before operational testing of production-representative missiles will risk later discovery of technical or operational suitability problems; (6) accordingly, at this critical juncture, Department of Defense (DOD) decisionmakers will not have enough verifiable information on the system's key performance parameters in an operational environment to make an informed production decision; (7) GAO is concerned that the helmet-mounted cueing system is being developed under a separate program from the missile even though U.S. 
fighter pilots need both the AIM-9X missile and the helmet-mounted cueing system to ensure that they can prevail in air-to-air combat against modern threat missiles; (8) while the separate development programs are being coordinated, there is no requirement that the missile, helmet, and aircraft modifications be thoroughly and realistically tested and evaluated together as a system of systems prior to initiating AIM-9X missile production; (9) until the weapon system is tested and evaluated using production-representative missiles and helmets, DOD decisionmakers will not have information on whether the AIM-9X weapon system's key performance parameters--such as the ability to acquire, track, and fire on targets over a wider area than the AIM-9M--are achievable; and (10) further, if all elements of the system are not produced and deployed together, the AIM-9X may not be able to prevail in aerial combat against modern threat missiles.
To best understand Medicare’s fiscal plight, we should also understand the broader health care context in which it operates. Total health care spending from all sources—public and private—continues to increase at a breathtaking pace. From 1990 through 2000, spending nearly doubled from about $696 billion to about $1.3 trillion (see fig. 1). From 2000 through 2010, the rate of spending growth is expected to accelerate somewhat, resulting in an estimated $2.7 trillion in total annual health care spending by the end of the period. Increases in medical prices account for a little more than half of the 20-year spending increase, while increases in the use of services—owing to population growth and rise in the number of services used per person—and more expensive services account for the rest. The rapid growth in health care spending means that an increasing share of the nation’s output, as measured by GDP, will be devoted to the production of health care services and goods. In 1970, spending on health care represented about 7 percent of GDP (see fig. 2). By 2010, health care spending’s share of GDP is expected to rise to about 17 percent. At the same time that health care spending has increased, consumers have become more insulated from these costs. In 1962, nearly half—46 percent—of health care spending was financed by individuals out of their own pockets (see fig. 3). The remaining 54 percent was financed by a combination of private health insurance and public programs. By 2002, the amount of health care spending financed by individuals out of their own pockets was estimated to have dropped to 14 percent. Tax considerations encourage employers to offer health insurance to their employees, as the value of the premium is excluded from the calculation of employees’ taxable earnings. Moreover, the value of the insurance coverage does not figure into the calculation of payroll taxes. 
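The aggregate spending figures cited above imply compound annual growth rates that can be checked with a quick back-of-envelope calculation. This is an illustrative sketch only, using the rounded dollar amounts from the text rather than the underlying actuarial data:

```python
# Back-of-envelope check of the compound annual growth rates implied by the
# rounded spending figures cited in the text (not the underlying actuarial data).
spending_1990 = 696e9    # total health care spending, 1990 (dollars)
spending_2000 = 1.3e12   # total health care spending, 2000
spending_2010 = 2.7e12   # projected total health care spending, 2010

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

growth_1990s = cagr(spending_1990, spending_2000, 10)  # roughly 6.4 percent per year
growth_2000s = cagr(spending_2000, spending_2010, 10)  # roughly 7.6 percent per year

# The projected 2000-2010 rate exceeds the 1990s rate, consistent with the
# statement that spending growth is "expected to accelerate somewhat."
print(f"1990-2000: {growth_1990s:.1%}  2000-2010: {growth_2000s:.1%}")
```

The slightly higher implied rate in the second decade is consistent with the expectation of somewhat accelerating growth.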
These tax exclusions represent a significant source of foregone federal revenue, currently amounting to about 1 percent of GDP. Today the Medicare program faces a long-range and fundamental financing problem driven by known demographic trends and projected escalation of health care spending beyond general inflation. The lack of an immediate crisis in Medicare financing affects the nature of the challenge, but it does not eliminate the need for change. Within the next 10 years, the first baby boomers will begin to retire, putting increasing pressure on the federal budget. From the perspectives of the program, the federal budget, and the economy, Medicare in its present form is not sustainable. Acting sooner rather than later would allow changes to be phased in so that the individuals who are most likely to be affected, namely younger and future workers, will have time to adjust their retirement planning while helping to avoid related “expectation gaps.” Since there is considerable confusion about Medicare’s current financing arrangements, I would like to begin by describing the nature, timing, and extent of the financing problem. As you know, Medicare consists of two parts—HI and SMI. HI, which pays for inpatient hospital stays, skilled nursing care, hospice, and certain home health services, is financed by a payroll tax. Like Social Security, HI has always been largely a pay-as-you-go system. SMI, which pays for physician and outpatient hospital services, diagnostic tests, and certain other medical services, is financed by a combination of general revenues and beneficiary premiums. Beneficiary premiums pay for about one-fourth of SMI benefits, with the remainder financed by general revenues. These complex financing arrangements mean that current workers’ taxes primarily pay for current retirees’ benefits except for those financed by SMI premiums. As a result, the relative numbers of workers and beneficiaries have a major impact on Medicare’s financing. 
The ratio, however, is changing. In the future, relatively fewer workers will be available to shoulder Medicare’s financial burden. In 2002 there were 4.9 working-age persons (18 to 64 years) per elderly person, but by 2030, this ratio is projected to decline to 2.8. For the HI portion of Medicare, in 2002 there were nearly 4 covered workers per HI beneficiary. Under the Trustees’ intermediate 2003 estimates, the Medicare Trustees project that by 2030 there will be only 2.4 covered workers per HI beneficiary. (See fig. 4.) The demographic challenge facing the system has several causes. People are retiring early and living longer. As the baby boom generation ages, the share of the population age 65 and over will escalate rapidly. A falling fertility rate is the other principal factor underlying the growth in the elderly’s share of the population. In the 1960s, the fertility rate was an average of 3 children per woman. Today it is a little over 2, and by 2030 it is expected to fall to 1.95—a rate that is below replacement. The combination of the aging of the baby boom generation, increased longevity, and a lower fertility rate will drive the elderly as a share of total population from today’s 12 percent to almost 20 percent in 2030. Taken together, these trends threaten both the financial solvency and sustainability of this important program. Labor force growth will continue to decline and by 2025 is expected to be less than a third of what it is today. (See fig. 5.) Relatively fewer workers will be available to produce the goods and services that all will consume. Without a major increase in productivity, low labor force growth will lead to slower growth in the economy and slower growth of federal revenues. This in turn will only accentuate the overall pressure on the federal budget. 
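To see what the declining worker-to-beneficiary ratios cited above imply for individual workers, consider a simple illustration. This sketch holds per-beneficiary costs constant, which understates the actual pressure because per-beneficiary health costs are also projected to rise:

```python
# Illustrative only: translating the covered-worker ratios cited in the text
# into each worker's share of HI costs, holding per-beneficiary costs fixed.
workers_per_beneficiary_2002 = 4.0   # covered workers per HI beneficiary, 2002
workers_per_beneficiary_2030 = 2.4   # Trustees' intermediate projection, 2030

# With per-beneficiary costs fixed, cost per worker scales inversely with the ratio.
burden_multiplier = workers_per_beneficiary_2002 / workers_per_beneficiary_2030
print(f"Each worker's share of HI costs rises by a factor of {burden_multiplier:.2f}")
# That is roughly a two-thirds increase, before any growth in per-beneficiary costs.
```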
This slowing labor force growth is not always recognized as part of the Medicare debate, but it is expected to affect the ability of the federal budget and the economy to sustain Medicare’s projected spending in the coming years. The demographic trends I have described will affect both Medicare and Social Security, but Medicare presents a much greater, more complex, and more urgent challenge. Unlike Social Security, Medicare spending growth rates reflect not only a burgeoning beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. The growth of medical technology has contributed to increases in the number and quality of health care services. Moreover, the actual costs of health care consumption are not transparent. Third-party payers largely insulate covered consumers from the cost of health care decisions. These factors and others contribute to making Medicare a greater and more complex fiscal challenge than even Social Security. Current projections of future HI income and outlays illustrate the timing and severity of Medicare’s fiscal challenge. Today, the HI Trust Fund takes in more in taxes than it spends. Largely because of the known demographic trends I have described, this situation will change. Under the Trustees’ 2003 intermediate assumptions, program outlays are expected to begin to exceed program tax revenues in 2013 (see fig. 6). To finance these cash deficits, HI will need to draw on the special-issue Treasury securities acquired during the years of cash surpluses. For HI to “redeem” its securities, the government will need to obtain cash through some combination of increased taxes, spending cuts, and/or increased borrowing from the public (or, if the unified budget is in surplus, less debt reduction than would otherwise have been the case). 
Neither the decline in the cash surpluses nor the cash deficits will affect the payment of benefits, but the negative cash flow will place increased pressure on the federal budget to raise the resources necessary to meet the program’s ongoing costs. This pressure will only increase when Social Security also experiences negative cash flow and joins HI as a net claimant on the rest of the budget. The gap between HI income and costs shows the severity of HI’s financing problem over the longer term. This gap can also be expressed relative to taxable payroll (the HI Trust Fund’s funding base) over a 75-year period. This year, under the Trustees’ 2003 intermediate estimates, the 75-year actuarial deficit is projected to be 2.40 percent of taxable payroll—a significant increase from last year’s projected deficit of 2.02 percent. This means that to bring the HI Trust Fund into balance over the 75-year period, either program outlays would have to be immediately reduced by 42 percent or program income immediately increased by 71 percent, or some combination of the two. These estimates of what it would take to achieve 75-year trust fund solvency understate the extent of the problem because the program’s financial imbalance gets worse in the 76th and subsequent years. With every year that passes, we drop a positive year from the projection period and add a much larger deficit year. The projected exhaustion date of the HI Trust Fund is a commonly used indicator of HI’s financial condition. Under the Trustees’ 2003 intermediate estimates, the HI Trust Fund is projected to exhaust its assets in 2026. This solvency indicator provides information about HI’s financial condition, but it is not an adequate measure of Medicare’s sustainability for several reasons. HI Trust Fund balances do not provide meaningful information on the government’s fiscal capacity to pay benefits when program cash inflows fall below program outlays.
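The 42 and 71 percent figures can be loosely reconciled with the 2.40 percent actuarial deficit by backing out the implied average cost and income rates. This is a rough sketch, not the Trustees' actual computation, which averages discounted rates over the full 75-year period; the implied rates here are approximate back-of-envelope values:

```python
# Back-of-envelope reconciliation of the Trustees' figures cited in the text.
# These implied rates are illustrative approximations, not official estimates.
actuarial_deficit = 2.40   # percent of taxable payroll, 75-year average

# If cutting outlays 42 percent closes the gap, the implied average cost rate is:
implied_cost_rate = actuarial_deficit / 0.42     # roughly 5.7 percent of payroll
# If raising income 71 percent closes the gap, the implied average income rate is:
implied_income_rate = actuarial_deficit / 0.71   # roughly 3.4 percent of payroll

# Consistency check: the difference should roughly equal the stated deficit
# (it differs slightly because the published percentages are rounded).
print(f"cost ~{implied_cost_rate:.1f}%, income ~{implied_income_rate:.1f}%, "
      f"gap ~{implied_cost_rate - implied_income_rate:.2f}% of payroll")
```

The implied gap comes out close to, though not exactly, 2.40 percent of payroll, reflecting rounding in the published figures.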
As I have described, the government would need to come up with cash from other sources to pay for benefits once outlays exceeded program tax income. In addition, the HI Trust Fund measure provides no information on SMI. SMI’s expenditures, which account for about 43 percent of total Medicare spending, are projected to grow even faster than those of HI in the near future. Moreover, Medicare’s complex structure and financing arrangements mean that a shift of expenditures from HI to SMI can extend the solvency of the HI Trust Fund, creating the appearance of an improvement in the program’s financial condition. For example, the Balanced Budget Act of 1997 modified the home health benefit, which resulted in shifting a portion of home health spending from the HI Trust Fund to SMI. Although this shift extended HI Trust Fund solvency, it increased the draw on general revenues and beneficiary SMI premiums while generating little net savings. Ultimately, the critical question is not how much a trust fund has in assets, but whether the government as a whole and the economy can afford the promised benefits now and in the future and at what cost to other claims on scarce resources. To better monitor and communicate changes in future total program spending, new measures of Medicare’s sustainability are needed. As program changes are made, a continued need will exist for measures of program sustainability that can signal potential future fiscal imbalance. Such measures might include the percentage of program funding provided by general revenues, the percentage of total federal revenues or gross domestic product devoted to Medicare, or program spending per enrollee. As such measures are developed, questions would need to be asked about actions to be taken if projections showed that program expenditures would exceed the chosen level.
Taken together, Medicare’s HI and SMI expenditures are expected to increase dramatically, rising from about 12 percent of federal revenues in 2002 to more than one-quarter by midcentury. The budgetary challenge posed by the growth in Medicare becomes even more significant in combination with the expected growth in Medicaid and Social Security spending. This growth in spending on federal entitlements for retirees will become increasingly unsustainable over the longer term, compounding an ongoing decline in budgetary flexibility. Over the past few decades, spending on mandatory programs has consumed an ever-increasing share of the federal budget. In 1962, prior to the creation of the Medicare and Medicaid programs, spending for mandatory programs plus net interest accounted for about 32 percent of total federal spending. By 2002, this share had almost doubled to approximately 63 percent of the budget. (See fig. 7.) In much of the past decade, reductions in defense spending helped accommodate the growth in these entitlement programs. Even before the events of September 11, 2001, however, this ceased to be a viable option. Indeed, spending on defense and homeland security will grow as we seek to combat new threats to our nation’s security. GAO prepares long-term budget simulations that seek to illustrate the likely fiscal consequences of the coming demographic tidal wave and rising health care costs. These simulations continue to show that to move into the future with no changes in federal retirement and health programs is to envision a very different role for the federal government. Assuming, for example, that the tax reductions enacted in 2001 do not sunset and discretionary spending keeps pace with the economy, by midcentury federal revenues may be inadequate to pay Social Security and interest on the federal debt. Spending for the current Medicare program—without any additional new benefits—is projected to account for more than one-quarter of all federal revenues. 
To obtain budget balance, massive spending cuts, tax increases, or some combination of the two would be necessary. (See fig. 8.) Neither slowing the growth of discretionary spending nor allowing the tax reductions to sunset eliminates the imbalance. In addition, while additional economic growth would help ease our burden, the projected fiscal gap is too great for us to grow our way out of the problem. Indeed, long-term budgetary flexibility is about more than Social Security and Medicare. While these programs dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of programs, responsibilities, and activities that obligate it to future spending or create an expectation for spending. Our recent report describes the range and measurement of such fiscal exposures—from explicit liabilities such as environmental cleanup requirements to the more implicit obligations presented by life-cycle costs of capital acquisition or disaster assistance. Making government fit the challenges of the future will require not only dealing with the drivers—entitlements for the elderly—but also looking at the range of other federal activities. A fundamental review of what the federal government does and how it does it will be needed. At the same time, it is important to look beyond the federal budget to the economy as a whole. Figure 9 shows the total future draw on the economy represented by Medicare, Medicaid, and Social Security. Under the 2003 Trustees’ intermediate estimates and the Congressional Budget Office’s (CBO) most recent long-term Medicaid estimates, spending for these entitlement programs combined will grow to 14 percent of GDP in 2030 from today’s 8.4 percent. Taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. 
Although real incomes are projected to continue to rise, they are expected to grow more slowly than has historically been the case. At the same time, the demographic trends and projected rates of growth in health care spending I have described will mean rapid growth in entitlement spending. Taken together, these projections raise serious questions about the capacity of the relatively smaller number of future workers to absorb the rapidly escalating costs of these programs. As HI trust fund assets are redeemed to pay Medicare benefits and SMI expenditures continue to grow, the program will constitute a claim on real resources in the future. As a result, taking action now to increase the future pool of resources is important. To echo Federal Reserve Chairman Alan Greenspan, the crucial issue of saving in our economy relates to our ability to build an adequate capital stock to produce enough goods and services to accommodate both retirees and workers in the future. The most direct way the federal government can raise national saving is by increasing government saving; that is, as the economy returns to a higher growth path, a balanced fiscal policy that recognizes our long-term challenges can help provide a strong foundation for economic growth and can enhance our future budgetary flexibility. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. Putting Medicare on a sustainable path for the future would help fulfill this generation’s stewardship responsibility to succeeding generations. It would also help to preserve some capacity for future generations to make their own choices for what role they want the federal government to play. As with Social Security, both sustainability and solvency considerations drive us to address Medicare’s fiscal challenges sooner rather than later. 
HI Trust Fund exhaustion may be more than 20 years away, but the squeeze on the federal budget will begin as the baby boom generation begins to retire. This will begin as early as 2008, when the leading edge of the baby boom generation becomes eligible for early retirement. CBO’s current 10-year budget and economic outlook reflects this. CBO projects that economic growth will slow from an average of 3.2 percent a year from 2005 through 2008 to 2.7 percent from 2009 through 2013, reflecting slower labor force growth. At the same time, annual rates of growth in entitlement spending will begin to rise. Annual growth in Social Security outlays is projected to accelerate from 5.2 percent in 2007 to 6.6 percent in 2013. Annual growth in Medicare enrollees is expected to accelerate from 1.1 percent today to 2.9 percent in 2013. Acting sooner rather than later is essential to ease future fiscal pressures and also provide a more reasonable planning horizon for future retirees. We are now at a critical juncture. In less than a decade, the profound demographic shift that is a certainty will have begun. Despite a common awareness of Medicare’s current and future fiscal plight, pressure has been building to address recognized gaps in Medicare coverage, especially the lack of a prescription drug benefit and protection against financially devastating medical costs. Filling these gaps could add massive expenses to an already fiscally overburdened program. Under the Trustees 2003 intermediate assumptions, the present value of HI’s actuarial deficit is $6.2 trillion. This difficult situation argues for tackling the greatest needs first and for making any benefit additions part of a larger structural reform effort. The Medicare benefit package, largely designed in 1965, provides virtually no outpatient drug coverage. Beneficiaries may fill this coverage gap in various ways. 
All beneficiaries have the option to purchase supplemental policies—Medigap—when they first become eligible for Medicare at age 65. Those policies that include drug coverage tend to be expensive and provide only limited benefits. Some beneficiaries have access to coverage through employer-sponsored policies or private health plans that contract to serve Medicare beneficiaries. In recent years, coverage through these sources has become more expensive and less widely available. Beneficiaries whose income falls below certain thresholds may qualify for Medicaid or other public programs. According to one survey, in the fall of 1999, more than one-third of beneficiaries reported that they lacked drug coverage altogether. Medicare also does not limit beneficiaries’ cost-sharing liability. The average beneficiary who obtained services had a total liability for Medicare-covered services of $1,700, consisting of $1,154 in Medicare copayments and deductibles in addition to the $546 in annual part B premiums in 1999, the most recent year for which data are available on the distribution of these costs. The burden can, however, be much higher for beneficiaries with extensive health care needs. In 1999, about 1 million beneficiaries were liable for more than $5,000, and about 260,000 were liable for more than $10,000 for covered services. In contrast, employer-sponsored health plans for active workers typically limited maximum annual out-of-pocket costs for covered services to less than $2,000 per year for single coverage. Modernizing Medicare’s benefit package will require balancing competing concerns about program sustainability, federal obligations, and the hardship faced by some beneficiaries. In particular, the addition of a benefit that has the potential to be extremely expensive—such as prescription drug coverage—should be focused on meeting the needs deemed to be of the highest priority. 
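The components of the $1,700 average liability cited above can be reproduced with simple arithmetic. In the sketch below, the $45.50 monthly part B premium for 1999 is an outside assumption used to reproduce the $546 annual figure; the other amounts come from the text.

```python
# Reconstruct the average beneficiary's 1999 liability for Medicare-
# covered services from the components given in the testimony.
monthly_part_b_premium = 45.50          # assumed 1999 monthly premium
annual_premium = round(monthly_part_b_premium * 12)   # the $546 in the text

copays_and_deductibles = 1154           # Medicare copayments and deductibles
total_liability = copays_and_deductibles + annual_premium

print(f"annual part B premiums:   ${annual_premium}")
print(f"copays and deductibles:   ${copays_and_deductibles}")
print(f"total average liability:  ${total_liability}")
```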
This would entail targeting financial help to beneficiaries most in need—those with catastrophic drug costs or low incomes—and, to the extent possible, avoiding the substitution of public for private insurance coverage. As I continue to maintain, acting prudently means making any benefit expansions in the context of overall program reforms that are designed to make the program more sustainable over the long term instead of worsening the program’s financial future. One reform to help improve Medicare’s financial future would be to modify Medicare’s cost-sharing rules and provide beneficiaries with better incentives to use care appropriately. Health insurers today commonly design cost-sharing requirements—in the form of deductibles, coinsurance, and copayments—to ensure that enrollees are aware that there is a cost associated with the provision of services and to use them prudently. Ideally, cost-sharing should encourage beneficiaries to evaluate the need for discretionary care but not discourage necessary care. Coinsurance or copayments would be required generally for services considered to be discretionary and potentially overused and would aim to steer patients to lower cost or better treatment options. Care must be taken, however, to avoid setting cost-sharing requirements so high as to create financial barriers to care. Medicare fee-for-service cost-sharing rules diverge from these common insurance industry practices in important ways. For example, Medicare imposes a relatively high deductible of $840 for hospital admissions, which are rarely optional. In contrast, Medicare has not increased the part B deductible since 1991. For the last 12 years, the deductible has remained constant at $100 and has thus steadily declined as a proportion of beneficiaries’ real incomes. Adjusted for inflation, the deductible has fallen to $74.39 in 1991 dollars. 
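The erosion of the fixed $100 part B deductible can be back-computed from the figures in the text. The sketch below infers the cumulative and average annual inflation implied by a real value of $74.39 in 1991 dollars after 12 years; it is illustrative arithmetic only, not an official price-index calculation.

```python
# Infer the inflation implied by a $100 nominal deductible being worth
# $74.39 in 1991 dollars after 12 years (figures from the testimony).
nominal = 100.0
real_1991_dollars = 74.39
years = 12

cumulative_inflation = nominal / real_1991_dollars - 1
avg_annual_inflation = (nominal / real_1991_dollars) ** (1 / years) - 1

print(f"cumulative price growth, 1991-2003: {cumulative_inflation:.1%}")
print(f"implied average annual inflation:   {avg_annual_inflation:.1%}")
```

The implied average annual inflation rate of roughly 2.5 percent is broadly consistent with consumer price growth over that period, which supports the text's point that the real value of the deductible declined by about a quarter.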
In recent years, leading proposals to restructure Medicare have included greater reliance on private health plans and reforms to the traditional fee-for-service program. The weaknesses identified in these two components of the current program suggest several lessons regarding such restructuring. Experience with Medicare’s private health plan alternative, called Medicare+Choice, suggests that details matter if competition is to produce enhanced benefits for enrollees and savings for the program. In addition, the traditional program must not be left unattended because it will be an important part of Medicare for years to come. The strategies needed to address either structural component must incorporate sufficient incentives to achieve efficiency, adequate transparency to reveal the cost of health care, and appropriate accountability mechanisms to ensure that the promised care and level of quality are actually delivered. If the inclusion of private health plans is to produce savings for Medicare, private incentives and public goals must be properly aligned. This means designing a program that will encourage beneficiaries to select health plan options most likely to generate program savings. This is not the case in the current Medicare+Choice program. For example, incentives for health plan efficiency exist, but any efficiency gains achieved do not produce Medicare savings. Payments to private health plans that participate in Medicare+Choice are not set through a competitive process. Instead, plans receive a fixed payment from Medicare as prescribed by statute and in return must provide all Medicare-covered services with the exception of hospice. Efficient health plans are better able to afford to provide extra benefits, such as outpatient prescription drug benefits; charge a lower monthly premium; or both and may do so to attract beneficiaries and increase market share. 
Until recently, however, these efficiency and market share gains were advantageous to beneficiaries and health plans but generated no savings for Medicare. Even today, the opportunity for the program to realize savings from competition among Medicare+Choice health plans remains extremely limited. This experience has shown that savings are not automatic from simply enrolling beneficiaries in private health plans. The Medicare+Choice experience offers another lesson about private plans and program savings. That is, as we recommended in 1998, payments to health plans must be adequately risk-adjusted for the expected health care costs of the beneficiaries they enroll. Otherwise, the government may undercompensate health plans that enroll less healthy beneficiaries with higher expected health care costs or overpay health plans that enroll relatively healthy beneficiaries with low expected health care costs. Moreover, health plans will have an incentive to avoid enrolling less healthy beneficiaries with higher expected health care costs. In 2000, we reported that the failure to adequately adjust Medicare’s payments to private health plans for beneficiaries’ expected health care costs unnecessarily increased Medicare spending by $3.2 billion in 1998. A third lesson is that the use of private plans to serve Medicare beneficiaries may not be feasible in all locations nationwide. In Medicare+Choice, it has been difficult and expensive to encourage private health plans to serve rural areas. Payment rates have been substantially raised in rural areas since 1997, yet in 2003 nearly 40 percent of beneficiaries living in rural areas lacked access to a private health plan; in contrast, 15 percent of beneficiaries in urban areas lacked access to a plan. 
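The risk-adjustment lesson above can be illustrated with a deliberately simplified sketch. This is not the actual CMS risk-adjustment methodology; the base rate and risk scores below are hypothetical, chosen only to show why unadjusted capitation payments reward plans for enrolling healthier-than-average beneficiaries.

```python
# Illustrative (NOT the actual CMS model): a capitated monthly payment
# is scaled by an enrollee risk score, where 1.0 represents average
# expected health care cost for the Medicare population.
def risk_adjusted_payment(base_rate: float, risk_score: float) -> float:
    """Monthly plan payment for one enrollee under simple risk adjustment."""
    return base_rate * risk_score

base = 600.00  # hypothetical monthly base rate for a county

# Without adjustment, every enrollee would bring in the flat base rate,
# so a plan enrolling mostly low-score (healthier) members is overpaid
# and a plan enrolling high-score (sicker) members is underpaid.
for score in (0.80, 1.00, 1.45):
    payment = risk_adjusted_payment(base, score)
    print(f"risk score {score:.2f}: adjusted payment ${payment:.2f}")
```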
Finally, the Medicare+Choice experience underscores the importance of beneficiaries having user-friendly, accurate information to compare their health plan options and of holding private health plans appropriately accountable for the services they have promised to deliver. Leading Medicare reform proposals have included traditional Medicare as a component in their design. Traditional Medicare is likely to have a significant role for years to come, as any fundamental structural reforms would take considerable time before plan and beneficiary participation becomes extensive. Therefore, addressing flaws in the traditional program should be part of any plan to steer Medicare away from insolvency and improve its sustainability for future generations. The experience of other health insurers’ use of cost-containment strategies, including some incentives for beneficiaries to make value-based choices, suggests a strategy for modernizing the program’s design. In the current program, the lack of insurance-type protections and difficulty in setting payment rates keep Medicare from achieving greater efficiencies and thus from improving its balance sheet. Coverage through Medigap—policies that meet federally established standards and are sold by private insurers—helps to fill in some of Medicare’s gaps, but Medigap plans also have shortcomings. As required by law, Medigap plans must conform to 1 of 10 standard benefit packages, which vary in levels of coverage. Medigap offers beneficiaries stop-loss protections that are lacking in traditional Medicare, but these policies diminish important program protections by covering required deductibles and coinsurance. The most popular Medigap plans are fundamentally different from employer-sponsored health insurance policies for retirees in that they do not require individuals to pay deductibles, coinsurance, and copayments. 
Such cost-sharing requirements are intended to make beneficiaries aware of the costs associated with the use of services and encourage them to use these services prudently. In contrast, Medigap’s first-dollar coverage—the elimination of deductibles or coinsurance associated with the use of covered services—undermines this objective. Although such coverage reduces financial barriers to health care, it diminishes beneficiaries’ sensitivity to costs and likely increases beneficiaries’ use of services, adding to total Medicare spending. Traditional Medicare needs the tools that other insurers use to achieve better value for the protection provided. Instead of working at cross-purposes to the traditional program, Medigap should be better coordinated with it. Insurance-type reforms to Medicare and Medigap—namely, the preservation of cost-sharing requirements in conjunction with stop-loss provisions—could help improve beneficiaries’ sensitivity to the cost of care while better protecting them against financially devastating medical costs. Medicare too often pays overly generous rates for certain services and products, preventing the program from achieving a desirable degree of efficiency. For example, for certain services, our work has shown substantially higher Medicare payments relative to providers’ costs—35 percent higher for home health care in the first six months of 2001 and 19 percent higher for skilled nursing facility care in 2000. Similarly, Medicare has overpaid for various medical products. Last year, we reported that, in 2000, Medicare paid over $1 billion more than other purchasers for certain outpatient drugs that the program covers. Earlier findings that have since been addressed by the Congress following our recommendations showed Medicare paying over $500 million more than another public payer for home oxygen equipment. 
Excessive payments hurt not only the taxpayers but also the program’s beneficiaries or their supplemental insurers, as beneficiaries are liable for copayments equal to 20 percent of Medicare’s approved fee. For certain outpatient drugs, Medicare’s payments to providers were so high that the beneficiaries’ copayments exceeded the price at which providers could buy the drugs. In 2001, we recommended that, for covered outpatient prescription drugs, Medicare establish payment levels more closely related to actual market transaction costs, using information available to other public programs that pay at lower rates. Over the past two decades, at the Congress’ direction, Medicare has implemented a series of payment reforms designed to promote the efficient delivery of services and control program spending. Some reforms required establishing set fees for individual services; others required paying a fixed amount for a bundle of services. The payment methods introduced during this time were designed to include—in addition to incentives for efficiencies—a means to calibrate payments to ensure beneficiary access and fairness to providers. A major challenge in administering these methods—whether based on fee schedules or prospective payment systems using bundled payments—involves adjusting the payments to better account for differences in patients’ needs and providers’ local markets to ensure that the program is paying appropriately and adequately. Payment rates that are too low can impair beneficiary access to services and products, while rates that are too high add unnecessary financial burdens to the program. As a practical matter, Medicare is often precluded from using market forces—that is, competition—to determine appropriate rates. In many cases, Medicare’s size and potential to distort market prices make it necessary to use means other than competition to set a price on services and products. 
Most of Medicare’s rate-setting methods are based on formulas that use historical data on providers’ costs and charges. Too often, these data are not recent or comprehensive enough to measure the costs incurred by efficient providers. At the same time, data reflecting beneficiaries’ access to services are also lacking. When providers contend that payments are not adequate, typically information is not readily available to provide the analytical support needed to determine whether these claims are valid. I have noted in the past the essential need to monitor the impact of program policy changes so that distinguishing between desirable and undesirable consequences can be done systematically and in a timely manner. To that end, I have also noted the importance of investing adequate resources in the agency that runs Medicare to ensure that the capacity exists to carry out these policy-monitoring activities. Under some circumstances, competition may be feasible and practical for setting more appropriate rates. Medicare has pilot tested “competitive bidding” in a few small markets. According to program officials, these test projects have shown that, for selected medical products, Medicare has saved money on items priced competitively. As part of these competitive bidding tests, steps were taken to monitor beneficiary access and product quality. To use competitive bidding on a broader scale, Medicare would require not only new authority but would need to make substantial administrative preparations, as extending competition to a larger number of products nationally would entail conducting bidding in multiple markets and monitoring access and quality once prices had been set. Medicare’s financial challenge is very real. The 21st century has arrived and the demographic tidal wave is on the horizon. Within 5 years, individuals in the vanguard of the baby boom generation will be eligible for Social Security and 3 years after that they will be eligible for Medicare. 
The future costs of serving the baby boomers are already becoming a factor in CBO’s short-term cost projections. Clearly the issue before us is not whether to reform Medicare but how. I feel the greatest risk lies in doing nothing to improve Medicare’s long-term sustainability. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. Engaging in a comprehensive effort to reform the program and put it on a sustainable path for the future would help fulfill this generation’s stewardship responsibility to succeeding generations. Medicare reform would be done best with considerable lead time to phase in changes and before the changes that are needed become dramatic and disruptive. Given the size of Medicare’s financial challenge, it is only realistic to expect that reforms intended to bring down future costs will have to proceed incrementally. We should begin this now, when retirees are still a far smaller proportion of the population than they will be in the future. The sooner we get started, the less difficult the task will be. As we contemplate the forecast for Medicare’s fiscal condition and its implications, we must also remember that the sources of some of its problems—and its solutions—are outside the program and are universal to all health care payers. Some tax preferences mask the full cost of providing health benefits and can work at cross-purposes to the goal of moderating health care spending. Therefore, it may be important to reexamine the incentives contained in current tax policy and consider potential reforms. Advances in medical technology are also likely to keep raising the price tag of providing care, regardless of the payer. Although technological advances unquestionably provide medical benefits, judging the value of those benefits—and weighing them against the additional costs—is more difficult. 
Consumers are not as informed about the cost of health care and its quality as they may be about other goods and services. Thus, while the greater use of market forces may help to control cost growth, it will undoubtedly be necessary to employ other cost control methods as well. We must also be mindful that health care costs compete with other legitimate priorities in the federal budget, and their projected growth threatens to crowd out future generations’ flexibility to decide which competing priorities will be met. In making important fiscal decisions for our nation, policymakers need to consider the fundamental differences between wants, needs, and what both individuals and our nation can afford. This concept applies to all major aspects of government, from major weapons system acquisitions to issues affecting domestic programs. It also points to the fiduciary and stewardship responsibility that we all share to ensure the sustainability of Medicare for current and future generations within a broader context of providing for other important national needs and economic growth. A major challenge policymakers face in considering health care reforms is the dearth of timely, accurate information with which to make decisions. Medicare’s size and impact on the nation’s health care economy means that its payment methods and rate adjustments, no matter how reasonable, often produce opposition. Recent experience with the payment reforms established in the BBA illustrates this point. In essence, these reforms changed Medicare’s payment methods to establish incentives for providers to deliver care efficiently. BBA’s changes were enacted in response to continuing rapid growth in Medicare spending that was neither sustainable nor readily linked to demonstrated changes in beneficiary needs. Nonetheless, affected provider groups conducted a swift, intense campaign to roll back the BBA changes. 
In the absence of solid, data-driven analyses, affected providers’ anecdotes were used to support contentions that Medicare payment changes were extreme and threatened their financial viability. This and similar reactions to mandated Medicare payment reforms underscore how difficult it is, without prompt and credible data, to defend against claims that payment changes have resulted in insufficient compensation that could lead to access problems. The public sector can play an important role in educating the nation about the limits of public support. Currently, there is a wide gap between what patients and providers expect and what public programs are able to deliver. Moreover, there is insufficient understanding about the terms and conditions under which health care coverage is actually provided by the nation’s public and private payers. In this regard, GAO is preparing a health care framework that includes a set of principles to help policymakers in their efforts to assess various health financing reform options. This framework will examine health care issues systemwide and identify the interconnections between public programs that finance health care and the private insurance market. The framework can serve as a tool for defining policy goals and ensuring the use of consistent criteria for evaluating changes. By facilitating debate, the framework can encourage acceptance of changes necessary to put us on a path to fiscal sustainability. I fear that if we do not make such changes and adopt meaningful reforms, future generations will enjoy little flexibility to fund discretionary programs or make other valuable policy choices. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other committee members may have. For future contacts regarding this testimony, please call William J. Scanlon, Director, Health Care Issues, at (202) 512-7114. 
Other individuals who made key contributions include Linda Baker, James Cosgrove, Jessica Farb, Hannah Fein, James McTigue, Yorick F. Uzes, and Melissa Wolf.
We are pleased to be here today as Congress examines Medicare's financial health and considers the budgetary and economic challenges presented by an aging society. The Comptroller General has been particularly attentive to the sustainability challenges faced by the nation's two largest entitlement programs--Medicare and Social Security--for more than a decade since he served as a public trustee for these programs in the early 1990s. The recent publication of the 2003 Trustees' annual report reminds us, once again, that the status quo is not an option for Medicare. If the program stays on its present course, in 10 years Hospital Insurance (HI) Trust Fund outlays will begin to exceed tax receipts, and by 2026 the HI trust fund will be exhausted. It is important to note that trust fund insolvency does not mean the program will cease to exist; program tax revenues will continue to cover a portion of projected expenditures. However, Medicare is only part of the broader health care financing problem that confronts both public programs and private payers. The unrelenting growth in health care spending is producing a health care sector that continues to claim an increasing share of our gross domestic product (GDP). Despite the grim outlook for Medicare's financial future, fiscal discipline imposed on Medicare through the Balanced Budget Act of 1997 (BBA) continues to be challenged, and interest in modernizing the program's benefit package to include prescription drug coverage and catastrophic protection continues to grow. Such unabated pressures highlight the urgency for meaningful reform. As we deliberate on the situation, we must be mindful of several key points. The traditional measure of HI Trust Fund solvency is a misleading gauge of Medicare's financial health. Long before the HI Trust Fund is projected to be insolvent, pressures on the rest of the federal budget will grow as HI's projected cash inflows turn negative and grow as the years pass. 
Moreover, a focus on the financial status of HI ignores the increasing burden Supplemental Medical Insurance (SMI)--Medicare part B--will place on taxpayers and beneficiaries. GAO's most recent long-term budget simulations continue to show that demographic trends and rising health care spending will drive escalating federal deficits and debt, absent meaningful entitlement reforms or other significant tax or spending actions. To obtain budget balance, massive spending cuts, tax increases, or some combination of the two would be necessary. Neither slowing the growth of discretionary spending nor allowing the tax reductions to sunset will eliminate the imbalance. In addition, while additional economic growth will help ease our burden, the potential fiscal gap is too great to grow our way out of the problem. Since the cost of a drug benefit would boost spending projections even further, adding drug coverage when Medicare's financial future is already bleak will require difficult policy choices that will mean trade-offs for both beneficiaries and providers. Just as physicians take the Hippocratic oath to "do no harm," policymakers should avoid adopting reforms that will worsen Medicare's long-term financial health. Our experience with Medicare--both the traditional program and its private health plan alternative--provides valuable lessons that can guide consideration of reforms. For example, we know that proposals to enroll beneficiaries in private health plans must be designed to encourage beneficiaries to join efficient plans and ensure that Medicare shares in any efficiency gains. We also recognize that improvements to traditional Medicare are essential, as this program will likely remain significant for some time to come.
During the 1960s, in an effort to address the decline in demand for cotton brought on by competition from synthetic fibers, cotton industry organizations proposed legislation to create a federally authorized, industry-funded program aimed at expanding consumers’ demand for cotton. Subsequently, the Cotton Research and Promotion Act of 1966 authorized the creation of the Cotton Board and charged it with increasing cotton’s share of the textile and apparel market through a research and promotion program. The 1966 act gives the Cotton Board the primary responsibility for administering the cotton check-off program, including developing program plans and budgets. The act also directs the Cotton Board to contract with an organization, governed by cotton producers, to carry out its research and promotion activities. Since 1967, that organization has been a nonprofit corporation called Cotton Incorporated. From 1967 to 1991, all domestic producers had to pay cotton assessments. However, the act allowed producers who were not in favor of supporting the program to request a refund. In the late 1980s, about one-third of the assessments collected were refunded. In November 1990, the Congress enacted the Cotton Research and Promotion Act Amendments of 1990, which was included under title XIX, subtitle G, of the Food, Agriculture, Conservation, and Trade Act of 1990 (known as the 1990 Farm Bill). These amendments authorized two fundamental changes in the funding procedures for the cotton check-off program: (1) the imposition of assessments on imported cotton and cotton-containing products and (2) the elimination of refunds. To become effective, however, these revisions had to be approved in a referendum by at least half of the domestic producers and importers voting. About 60 percent of those voting approved these revisions in July 1991. In effect, the approved changes made the program mandatory for both domestic producers and importers. 
After the final regulation was issued and other administrative procedures were completed, import assessments on cotton products began to be collected on July 31, 1992. The assessments are collected by Customs and remitted to the Cotton Board through AMS on a monthly basis. Domestic producers pay an assessment when they sell their raw cotton. The current cotton assessment is a fixed rate of $1 per 500-pound bale plus 0.5 percent of the market value. Based on a market value of 60 cents per pound, the total assessment per pound of raw cotton is about one-half cent. Importers pay an assessment on the raw cotton equivalent of imported textiles and apparel. To calculate the assessment rate for imported cotton products, USDA has established procedures for estimating the amount of raw cotton used to manufacture about 700 different cotton products. (See app. I for examples of how AMS calculates rates for an imported cotton product.) Because the check-off program is federally authorized, the Secretary of Agriculture and AMS have certain oversight responsibilities. The Secretary must approve the Cotton Board’s recommended program plans and budgets before they can become effective. AMS’ responsibilities include (1) developing regulations to implement the check-off program, in consultation with the cotton industry, and (2) ensuring compliance with the authorizing legislation and AMS’ orders and regulations. Generally, the act and AMS’ regulations specify allowable activities, such as the type of promotion or research activities, the level and collection of assessments, the composition of the Board, and the types of allowable expenditures. To ensure compliance, AMS reviews the Board’s budgets and projects to, for example, prevent the Board from engaging in prohibited activities, such as lobbying. However, AMS’ oversight role does not include reviewing the program’s effectiveness. AMS is reimbursed by the Cotton Board for its oversight costs. 
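The assessment arithmetic described above can be sketched in a few lines of Python. This is an illustrative reconstruction using the figures cited in the report; the function and variable names are ours, not USDA's.

```python
# Sketch of the cotton check-off assessment arithmetic described in the report.
# Figures ($1 per 500-pound bale, 0.5 percent of market value) come from the
# report; names and structure are illustrative, not USDA's actual method.

BALE_WEIGHT_LB = 500          # one bale = 500 pounds
FLAT_RATE_PER_BALE = 1.00     # $1 flat assessment per bale
SUPPLEMENTAL_RATE = 0.005     # 0.5 percent of market value

def assessment_per_pound(market_value_per_lb: float) -> float:
    """Total assessment per pound of raw cotton: the flat per-bale
    charge spread over 500 pounds, plus 0.5% of the market value."""
    flat = FLAT_RATE_PER_BALE / BALE_WEIGHT_LB          # $0.002 per pound
    supplemental = SUPPLEMENTAL_RATE * market_value_per_lb
    return flat + supplemental

# At the report's example market value of 60 cents per pound,
# the total works out to the "about one-half cent" cited above.
print(round(assessment_per_pound(0.60), 4))  # 0.005
```

Note that only the supplemental portion varies with the market price; the flat $0.002-per-pound component is fixed regardless of the cotton's value.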
The assessment on cotton imports and the elimination of refunds have contributed, in large part, to the substantial growth in the Cotton Board’s check-off revenues since 1990. In 1990, the Cotton Board received check-off revenues from producers of about $27.6 million after refunds. In fiscal year 1994, the Cotton Board’s check-off assessment revenues totaled about $56.8 million—$43.2 million, or 76 percent, from domestic producers and $13.6 million, or 24 percent, from importers. The imposition of the cotton import assessment has not prevented increases in the U.S. consumption of cotton. Between 1984 and 1991, the U.S. consumption of raw cotton and cotton products grew from 4 billion pounds to 6.2 billion pounds, an average annual growth rate of 6.6 percent. Following the imposition of the cotton import assessment in 1992, the U.S. market continued to grow at about the same rate through June 1995. The U.S. consumption of cotton may exceed 8 billion pounds in 1995. (See fig. 1.) Government and other experts knowledgeable about the U.S. textile and apparel industry agreed that the imposition of the cotton import assessment beginning in July 1992 has had no significant impact on the long-term growth in U.S. consumption of domestic cotton. They pointed out that the relatively small size of the cotton import assessment—about one-half cent per pound of raw cotton equivalent—is likely to have little effect on retail prices. According to these experts, the primary factor explaining the growth in cotton consumption since 1984 is consumers’ increasing preference for cotton apparel—per capita consumption increased from 17 pounds to 30 pounds between 1984 and 1994. They also said that technological developments, such as wrinkle-resistant cotton fabric and different denim finishes, have further enhanced consumers’ preference for cotton apparel. 
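The average annual growth rates quoted in this report can be roughly cross-checked with a simple compound-annual-growth-rate calculation. This is a sketch only; the report does not state its exact averaging method, so small differences from its published figures are expected.

```python
# Rough cross-check of the average annual growth rates cited in the report,
# using a simple compound-annual-growth-rate (CAGR) formula. The report does
# not specify its method, so its figures may differ slightly.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1.0 / years) - 1.0

# U.S. cotton consumption, 1984-1991: 4.0 to 6.2 billion pounds.
print(f"Consumption growth: {cagr(4.0, 6.2, 7):.1%}")   # roughly 6.5% per year

# Cotton imports, 1984-1994: 1.5 to about 3.8 billion pounds.
print(f"Import growth: {cagr(1.5, 3.8, 10):.1%}")       # roughly 9.7% per year
```

Both results land close to the report's "6.6 percent" and "about 10 percent" figures, consistent with imports growing faster than overall consumption.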
In addition, these experts said that the cotton check-off program has contributed to consumers’ preference for cotton, although they could not cite any study measuring the extent of the program’s contribution. According to USDA’s Chief Economist, a positive correlation generally exists between increased promotion and increased sales of a particular product. However, he also said that researchers measuring this positive correlation have found that it can vary from small to large, depending on the product, the time period involved, and other factors. As discussed in the conference report on the 1990 Farm Bill, some lawmakers were concerned that while importers would be contributing to the check-off program on an equal footing with domestic producers, they would be denied equivalent access to the U.S. cotton market because of tariffs and quotas. According to the USTR, in 1992 the United States maintained quotas for about 67 percent of imported cotton products. Despite these concerns, quotas and tariffs have not prevented cotton imports from sharing in the growth in the U.S. market. Cotton imports have grown even faster than U.S. consumption, increasing from 1.5 billion pounds in 1984 to about 3.8 billion pounds in 1994, an average annual growth rate of about 10 percent. In addition, imported cotton products accounted for 48 percent of U.S. cotton consumption in 1994, up from 37 percent in 1984. Industry experts attribute the growth in these imports primarily to the growing U.S. market for cotton products and lower-priced apparel manufactured in developing countries with low wages. These experts also pointed out that in the absence of quotas and tariffs, cotton imports would probably have increased at an even higher rate, although they could not say by how much. The experts cited several reasons for the increase in cotton imports, even with quotas. First, not all countries are subject to U.S. quotas. 
Second, countries subject to these quotas vary in the amount of their quota, and the United States has generally agreed to annual increases in the quotas. Third, not all countries fill their quotas. And fourth, when countries do fill their quotas, U.S. retailers and major textile and apparel exporters have become adept at finding alternative sources of supply in countries that have not filled their quotas. The experts also pointed out that current bilateral quotas negotiated under the Multi-fiber Arrangement will be phased out over 10 years under the Uruguay Round agreement, negotiated under the General Agreement on Tariffs and Trade (GATT). Similarly, as a result of the Uruguay Round agreement, the United States has agreed to slightly reduce textile and apparel tariffs to an average of 15 percent over 10 years. However, experts note that tariffs—currently an average of 17 percent of the value of imported apparel—have not prevented cotton imports from increasing even faster than domestic consumption. This increase has occurred because imported apparel apparently has a substantial cost advantage over domestic apparel. According to USTR’s Assistant U.S. Trade Representative for Agricultural and Commodity Policy and officials from the Foreign Agricultural Service’s Tobacco, Cotton, and Seed Division in USDA, the assessment on cotton imports complies with the requirements of U.S. trade agreements. The primary guiding principle of these agreements for imports is that of “national treatment,” which is established in the GATT, Article III, National Treatment on Internal Taxation and Regulation. This principle holds that imports (1) shall not be subject to internal charges that are higher than those applied to like domestic products and (2) shall be treated, under national laws and regulations, as favorably as like domestic products. 
According to USDA documents and our discussions with officials from the Foreign Agricultural Service and the USTR, the implications of the cotton import assessment were discussed during USDA’s rule-making process for cotton imports in 1991 and during GATT negotiations during 1992. Officials concluded that the cotton import assessment complies with the principle of national treatment because the assessment imposed on importers is the same as the assessment imposed on domestic cotton producers and the assessment is mandatory for both importers and producers. Furthermore, importers have shared in the growth of U.S. cotton consumption as much as domestic producers, as measured by the increasing import share of the U.S. market. During 1991 and 1992, some major importers and foreign countries objected to the U.S. imposition of the check-off assessment on cotton imports. They contended that such an assessment is a nontariff trade barrier, which is contrary to the GATT’s overall objective of reducing trade barriers and liberalizing trade. Some importers also questioned whether they received benefits from the program comparable to those received by domestic producers. However, the USTR and USDA officials said that they were not aware of any country that had filed a formal challenge to the import assessment with the USTR or the World Trade Organization, the arbiter of international trade disputes. Some experts we talked with suggested that challenges may not have been filed because the amount of money involved is insignificant compared with the value of the trade taking place. Import assessments collected in 1994 totaled about $14 million, compared with an estimated value of $19 billion for cotton imports. USDA and USTR officials also told us that they are not concerned about the possibility that other countries could impose check-off assessments on U.S. exports. They pointed out that check-off programs expand market demand within a country, which can increase U.S. 
exports to that country. Therefore, as long as countries impose such assessments in line with the principle of national treatment, such assessments could have long-term benefits for U.S. exporters. USDA has put in place the necessary framework for administering the cotton check-off program as it relates to assessing imports. However, two significant administrative issues concerning the assessment on imported cotton are unresolved. First, importers are paying assessments on products containing U.S. cotton for which assessments have already been paid. To get an exemption from this assessment, importers must document the U.S. cotton content of imported products, as USDA requires. However, because importers find it difficult to provide such documentation, they rarely use this exemption. Second, importers and producers on the Cotton Board disagree over whether the Board has adequately carried out its responsibility to oversee the activities of Cotton Incorporated. USDA has carried out the activities specified in the 1990 legislation to assess imported cotton products. For example, USDA held a referendum on whether to assess imports and eliminate refunds of assessments. A majority of producers and importers who voted approved assessing imports and eliminating the refund provision. Working with Customs, USDA established procedures for calculating, collecting, and remitting assessments on imported cotton products. USDA also established equivalent assessment rates for imported cotton products; issued relevant orders and regulations governing the program’s operations; established procedures for exempting imports containing U.S. cotton; and provided for the representation of cotton importers on the Cotton Board. Appendix II contains detailed information on the administrative requirements for imports set forth in the 1990 amendments and on the actions taken by USDA to implement them. 
The 1990 act required USDA to establish procedures to ensure that the domestic cotton used in imported products has been subject only to the one assessment provided for by law and that the assessment has not been paid twice—once when the raw U.S. cotton was sold and again when the same cotton was used in imported textiles and apparel. In response to the statute, USDA and the Cotton Board have developed procedures under which importers can be exempted from the assessment if they can document the domestic cotton content of the articles they import. However, generally cotton importers cannot readily obtain the information needed to document the amount of U.S. cotton in imported products because U.S. cotton is not easily identifiable in imported products. For example, foreign mills may import U.S. cotton and combine it with cotton from other countries to produce cotton products. These products may then be shipped to factories and mixed with other cotton textiles before the final product is exported to the United States. With this complicated flow of cotton products, importers generally cannot document at a reasonable cost which products contain U.S. cotton. Importers, who are primarily retailers, note that the country of origin of the raw cotton contained in their products has generally not been of interest to them and therefore they do not collect such information. Consequently, some importers are paying more in assessments than they should. Using USDA’s Economic Research Service data on the U.S. cotton content in imported cotton products, we estimated that importers are paying import assessments of about $2.1 million annually on cotton products containing U.S. cotton, which should be exempt from the assessment. USDA considered alternatives to use in place of requiring documentation during the rule-making process but decided that they were either inequitable or not practicable. One alternative proposed was an across-the-board reduction in the import assessment rate. 
USDA believes this alternative disproportionately benefits countries that manufacture cotton products with little U.S. cotton. The other alternative was to adjust the import assessment rate for each country on the basis of the estimated amount of U.S. cotton used in manufacturing cotton products exported to the United States. Customs believes that maintaining different assessment rates for each exporting country is not administratively practicable. Recognizing that the current approach results in double assessments on U.S. cotton, the Cotton Board is exploring the possibility of identifying which foreign mills use mostly U.S. cotton as a way to help learn which imported products contain significant amounts of U.S. cotton. While producers are generally satisfied with the Cotton Board’s efforts to oversee Cotton Incorporated, importers are more critical. In fact, one importer who was a member of the Board’s executive committee resigned from the Board in February 1995, charging that its oversight was inadequate. Importers we spoke with contend that the Cotton Board has relinquished its fundamental oversight responsibility and left important management decisions to Cotton Incorporated. However, by statute, importers are excluded from Cotton Incorporated’s board of directors, thereby leaving importers’ interests unrepresented. More specifically, importers argue that the Cotton Board’s current procedures for approving Cotton Incorporated’s proposed budget amount to “rubber stamping.” They contend that budget submissions do not contain sufficient detail for adequate review. For example, they cite an event that came to their attention only by accident—an annual, one-night public relations event costing an estimated $370,000, which was not identified in the 1995 budget. Importers questioned whether the budget contains other such unidentified items that the Cotton Board should be aware of. 
Furthermore, these importers said that the Cotton Board’s meetings to review the budget are not conducive to raising “tough-minded, business-oriented” questions about the budget. They attributed this situation, in part, to the fact that the members of both Cotton Incorporated’s board of directors and the Cotton Board are producers nominated by the same state associations. Therefore, producers on both boards know each other. Also, over the course of a few years, former members of Cotton Incorporated’s board of directors may serve on the Cotton Board and vice versa. Equally important, the expertise and experience needed to carry out the cotton check-off program reside primarily with the staff of Cotton Incorporated. For these reasons, the Cotton Board is inclined to accept the plans and budgets submitted and approved by Cotton Incorporated. Producers we spoke with are generally satisfied with the Cotton Board’s oversight and do not see the need to “micromanage” the check-off program, which they believe has had a clear record of success. However, producers also recognize that the Board’s oversight could be strengthened. Therefore, as suggested by the importers, the Cotton Board has agreed to have an outside contractor conduct an overall evaluation of the program. The Board has also agreed to hold a 1-day meeting to begin developing a long-term plan that sets out goals and priorities to guide Cotton Incorporated’s activities. While importers are willing to participate in these efforts, they still believe that producers have not addressed the need for the Cotton Board to play a more assertive role in carrying out its oversight responsibility. In addition to an improved planning process, importers would like to see the Board develop a budget process that allows more time and opportunity to ask in-depth questions about budget expenditures. 
“(6) the producers and importers that pay assessments to support the programs must have confidence in, and strongly support, the checkoff programs if these programs are to continue to succeed; and “(7) the checkoff programs cannot operate efficiently and effectively, nor can producer confidence and support for these programs be maintained, unless the boards and councils faithfully and diligently perform the functions assigned to them under the authorizing legislation.” Because the cotton check-off program is industry-funded and -operated, AMS has found it to be more effective for the industry than for AMS to assume primary responsibility for deciding how to strengthen the Cotton Board’s oversight role. AMS officials said that they have consciously decided to focus on guiding rather than prescribing the efforts of the Cotton Board to strengthen its oversight. For example, AMS program officials met with the Cotton Board and Cotton Incorporated to discuss the need for more useful and detailed budget information. This approach resulted in an improved budget report for fiscal year 1995. In addition, consistent with its approach of guiding the industry’s efforts, AMS, in October 1995, called for a meeting of the Cotton Board, including staff and representatives of producers and importers, to help resolve the conflict between importers and producers. AMS envisions this meeting, which may be held in early 1996 at the start of the annual budget process, as an opportunity to chart a course of action to better integrate importers into the check-off program. Even if the Cotton Board exerts more oversight, finding common ground between the producers and importers will be difficult. The major importers are large retailers who do extensive brand-name advertising and see little benefit from the research and promotion program’s generic advertising. 
Importers generally did not want to participate in the program—61 percent of the importers voting in the 1991 referendum opposed the assessment on cotton imports. Also, importers, who are outnumbered 5 to 1 on the Cotton Board and are not represented at all on Cotton Incorporated’s board of directors, find it difficult to influence the program’s direction. Nevertheless, importers told us that they are willing to work with producers to develop an efficient and effective cotton program. However, importers also told us that they would have more influence over the program’s direction and their interests would be better served if they were represented on the board of directors of Cotton Incorporated. AMS officials, producers, the president of the Cotton Board, and the president of Cotton Incorporated told us that they would have no objection to having importers on Cotton Incorporated’s board of directors, but they noted that the authorizing legislation would have to be revised to allow this representation. The cotton check-off program’s promotion efforts have probably contributed to cotton’s growth in the U.S. market. In addition, the U.S. consumption of cotton and the import share of the U.S. cotton market continued to increase following the imposition of the assessment on imported textiles and apparel. The value of this assessment—about one-half cent for a man’s cotton shirt—is not likely to slow consumer demand for cotton. Furthermore, this assessment is in accordance with U.S. international trade agreements, according to USDA and USTR officials. While USDA has established an administrative framework for assessing imported cotton, two major issues raised by importers have yet to be resolved. The first of these issues—double payments on assessments—may be addressed to some extent by current efforts to identify foreign mills that use a significant amount of U.S. cotton. 
The second issue, however, is more difficult to resolve—the extent of the Cotton Board’s oversight over Cotton Incorporated. While the Cotton Board and AMS are taking steps to address this issue, these efforts do not deal with importers’ lack of representation on Cotton Incorporated’s board of directors. Neither producers nor AMS officials object to including importers on Cotton Incorporated’s board of directors. However, the legislation authorizing the program must be amended to allow such representation. But even if this issue is resolved, developing a cooperative working relationship between producers and importers will be difficult, given their fundamentally different perspectives on the program. To conduct this review, we analyzed data from USDA’s Economic Research Service on U.S. cotton consumption and imports of textiles and apparel for 1984-95. We discussed the results of our analysis and related issues with knowledgeable officials, including USDA’s Chief Economist and the president of the Cotton Board. We also spoke with staff from the International Cotton Advisory Committee, the Department of Commerce’s Office of Textiles and Apparel, and the U.S. International Trade Commission. We discussed U.S. international trade obligations with staff of USDA’s Foreign Agricultural Service and the USTR. Furthermore, we reviewed the relevant legislation and USDA’s orders and regulations pertaining to the cotton check-off program and other relevant documents and studies. To provide information on the administration of the cotton check-off program for imports, we discussed the program’s administration and related issues with officials of USDA’s Agricultural Marketing Service and Customs. We also discussed the program’s administration with the president, the chairman, and the treasurer of the Cotton Board; the president of Cotton Incorporated and the chairman of its board of directors; and representatives of importers on the Cotton Board. 
We reviewed relevant legislation, regulations, orders, the memorandum of understanding between USDA and Customs, and studies of the cotton check-off program. We also discussed various legal issues with USDA’s Assistant General Counsel for Marketing. We performed our work between July 1995 and December 1995 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to AMS for its review and comment. We met with AMS’ Cotton Division officials, including the Director, Deputy Director, and Chief of the Research and Promotion Staff. These officials generally agreed with the information discussed and provided some clarifying comments that we have incorporated into the report where appropriate. As agreed with your offices, unless the contents of this report are publicly announced earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Agriculture and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix III. This appendix contains two examples of how (1) the import cotton assessment is calculated (including the conversion from pounds to kilograms) and (2) an assessment on a sample cotton import shipment is calculated. The per-kilogram assessment represents the sum of the assessment and the supplemental assessment. 
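The per-kilogram figures in this appendix can be reproduced with a short sketch. The constants come from the appendix itself; the variable names are ours.

```python
# Reproduces the per-kilogram assessment figures given in this appendix.
# Constants (conversion factors, average price) are from the appendix;
# variable names are illustrative.

LB_PER_KG = 2.2046            # one kilogram = 2.2046 pounds
KG_PER_LB = 0.453597          # one pound = 0.453597 kilograms
BALE_LB = 500                 # one bale = 500 pounds
AVG_PRICE_PER_LB = 0.683      # average price received, dollars per pound

# Flat $1-per-bale charge, expressed per kilogram.
bale_kg = BALE_LB * KG_PER_LB                 # about 226.8 kg per bale
flat_per_kg = 1.0 / bale_kg                   # about $0.004409 per kg

# Supplemental charge: 5/10 of 1 percent of value, per kilogram.
price_per_kg = AVG_PRICE_PER_LB * LB_PER_KG   # about $1.5057 per kg
supp_per_kg = 0.005 * price_per_kg            # about $0.007529 per kg

# The per-kilogram assessment is the sum of the two components.
print(round(flat_per_kg + supp_per_kg, 6))
```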
An example of how the assessment is calculated follows:

One bale = 500 pounds
One kilogram = 2.2046 pounds
One pound = 0.453597 kilograms

The $1-per-bale assessment is converted to kilograms:
A 500-pound bale = 226.8 kilograms (500 x 0.453597)
The $1-per-bale assessment = $0.002000 per pound (1/500) or $0.004409 per kilogram (1/226.8)

The supplemental assessment of 5/10 of 1 percent of the value of the cotton is converted to kilograms:
Average price received = $0.683 per pound or $1.5057 per kilogram (0.683 x 2.2046)
5/10 of 1 percent of the average price in kilograms = $0.007529 per kilogram (1.5057 x 0.005)

The Cotton Research and Promotion Act Amendments of 1990 set forth administrative implementing procedures for the U.S. Department of Agriculture (USDA) to extend the research and promotion program to cotton imports. Table II.1 lists these procedures and the actions USDA took to implement them.

Section 1993 (2)—The Secretary of Agriculture shall, within a period not to exceed 8 months after the date of enactment of the act, conduct a referendum among cotton producers and persons that are cotton importers to ascertain if a majority of those voting approve the proposed amendment to the order. USDA held an implementing referendum during July 17-26, 1991. The proposed amendment was approved by a majority (60 percent) of the importers and producers voting in the referendum. Results were announced in a nationally distributed press release on August 2, 1991.

Assessment on imported cotton products
Section 1992 (3)—If the proposed amendment of the order implementing the Cotton Research and Promotion Act Amendments of 1990 is approved in the referendum, each importer shall pay assessments on imported cotton products. USDA’s final rule was published in the Federal Register (57 FR 29181) on July 1, 1992. The rule provided for Customs to collect assessments on cotton and cotton products imported into the United States on or after July 31, 1992. 
Section 1996 (2)—The right of a producer to demand a refund shall terminate if the proposed amendment of the order implementing the Cotton Research and Promotion Act Amendments of 1990 is approved in the referendum. Such right shall terminate 30 days after the date the Secretary of Agriculture announces the results of such referendum if such amendment is approved. Such right shall be reinstated if the amendment should be disapproved in any subsequent referendum. The actual elimination of assessment refunds to cotton producers became effective on September 1, 1991, 30 days after USDA announced the results of the July 1991 referendum.

Importers’ representation on the Cotton Board
Section 1992 (2)(B)—An appropriate number of representatives, as determined by the Secretary of Agriculture, of importers of cotton on which assessments are paid, will serve on the Cotton Board. The importers’ representatives shall be appointed by the Secretary of Agriculture after consultation with organizations representing importers, as determined by the Secretary. USDA’s final rule amending the regulations for Cotton Board membership was published in the Federal Register (56 FR 65929) on December 20, 1991. The rule provided for an initial representation on the Cotton Board of four importers. In addition, the rule stated that additional importer members could be added to the Cotton Board after consultation by the Secretary with importer organizations and after consideration of the average annual volume of imported cotton that would be subject to assessment for 5 preceding years. In June 1995, four organizations represented importers: (1) United States Association of Importers of Textiles and Apparel, (2) United States Apparel Industry Council, (3) American Association of Exporters and Importers, and (4) American Import Shippers Association. 
Import assessment rate comparable to domestic producer rate
Section 1992 (3)—The rate of assessment on imports of cotton shall be determined in the same manner as the rate of assessment per bale of cotton handled, and the value to be placed on cotton imports for the purpose of determining the assessment on such imports shall be established by the Secretary of Agriculture in a fair and equitable manner. USDA’s final rule, published in the Federal Register (57 FR 29181) on July 1, 1992, established a rate of assessment for imported cotton and cotton products that is the same, on a raw-cotton-equivalent basis, as the rate imposed on domestically produced cotton.

De minimis amount not subject to assessment
Section 1997 (1)(B)—Imported cotton shall not be assessed for any entry having a weight or value less than any de minimis figure as established by regulations. The de minimis figure that is established should minimize the burden in administering the import assessment but still provide for the maximum participation of imports of cotton in the assessment provisions of the act. Section 1205.510 (b)(3) of USDA’s final rule established a de minimis value of $220.99 per line item on Customs entry documentation. Any line item entry in which the value of the cotton contained therein is less than $220.99 is not subject to the assessment.

Procedures to ensure cotton content of imported products is not subject to more than one assessment
Section 1992 (3)—The Secretary shall establish procedures to ensure that the upland cotton content of imported products is not subject to more than one assessment. Section 1205.510 (b)(5) and (9) of USDA’s final rule (57 FR 29181, July 1, 1992) automatically exempts textile articles assembled abroad in whole or in part of fabricated components produced in the United States, and articles imported into the United States after being exported from the United States for alterations or repairs. 
Section 1205.510 (b)(6) of USDA’s final rule allows imported cotton and cotton products, which contain U.S.-produced cotton or cotton other than upland cotton, to be exempted by the Cotton Board. Section 1205.520 of USDA’s final rule allows each importer of cotton or cotton-containing products to obtain a reimbursement on that portion of the assessment that was collected on cotton produced in the United States or cotton other than upland cotton.

Reimbursement of federal agencies’ costs

Section 1992 (3)—The order shall provide for reimbursing the Secretary of Agriculture for up to $300,000 in expenses incurred in connection with any referendum, and for up to 5 employee years in administrative costs after an order or amendment thereto has been issued and made effective. The order shall also include a provision for reimbursing any agency in the federal government that assists in administering the import provisions of the order for a reasonable amount of the expenses incurred by that agency. In 1993, USDA billed the Cotton Board for about $128,000 in reimbursable costs (which included first-year start-up costs of almost $45,000) associated with collecting import assessments on cotton products. In November 1995, Customs reported costs of about $56,000 for fiscal years 1994 and 1995.

Required reports from USDA and Customs

Section 1998—Not later than 1 year after imported cotton products became subject to assessment, (1) the Secretary of Agriculture was required to prepare a report concerning the implementation and enforcement of the cotton check-off program and any problems that may have arisen in the implementation and enforcement as it relates to imports and (2) the Customs Service was required to prepare a report concerning its role in the implementation and enforcement as it relates to imports. In August 1993, USDA submitted its report to the Congress. Customs officials were not able to determine whether the agency had prepared such a report.
Section 1993 (2)—After the implementing referendum is held, the Secretary of Agriculture will conduct a review once every 5 years to ascertain whether another referendum is needed to determine whether producers and importers favor continuation of the amendment provided for in the Cotton Research and Promotion Act Amendments of 1990. The Secretary is required to make a public announcement of the results of the review within 60 days after each fifth anniversary date of the referendum. Results of the Secretary of Agriculture’s review are scheduled to be announced by September 1996.

Juliann M. Gerkens, Assistant Director; Louis J. Schuster, Project Leader; Carol Bray; James L. Dishmon, Jr.; John F. Mitchell; Carol Herrnstadt Shulman
Pursuant to a legislative requirement, GAO assessed: (1) the growth in the U.S. market for cotton and cotton products; (2) the extent to which import restrictions have affected importers' ability to take advantage of any growth in the U.S. market; and (3) relevant U.S. international trade obligations and the compliance factors for imported cotton and cotton products. GAO found that: (1) the cotton import assessment has not affected the growth rate of cotton imports; (2) the volume of imported cotton products has increased from 1.5 billion pounds in 1984 to 3.8 billion pounds in 1994; (3) the assessment is in compliance with U.S. trade obligations and is based on the principle of national treatment; (4) the Department of Agriculture (USDA) established an administrative framework for assessing cotton products, held a referendum for cotton producers and importers on whether to assess imports, set an assessment rate equivalent to domestic producer rates, and established collection procedures for cotton products with the Customs Service; (5) cotton importers frequently pay duplicative assessments on cotton products containing U.S. cotton because they have difficulty meeting the exemption criteria; and (6) producers and importers disagree on the management and oversight functions of the Cotton Board.
Taxpayers are to report all cancelled debt, including mortgage debt, excludable from taxable income by completing Form 982, “Reduction of Tax Attributes Due to Discharge of Indebtedness (and Section 1082 Basis Adjustment).” Taxpayers use Part 1 of this form to report reasons why cancelled debt can be excluded from taxable income. Taxpayers who are in bankruptcy or insolvent are to exclude their forgiven mortgage debt under the bankruptcy or insolvency category on the Form 982. Taxpayers with forgiven mortgage debt who are not bankrupt or insolvent are to exclude forgiven mortgage debt under the qualified principal residence category on the Form 982. Thus, these taxpayers may have the ability to pay taxes on forgiven debts because they are not in bankruptcy or insolvent. Lenders report all types of cancelled debts to IRS on Form 1099-C, “Cancellation of Debt.” With some exceptions, a cancelled or modified debt is considered taxable income for taxpayers who are not insolvent or in bankruptcy. Without the Mortgage Forgiveness Debt Relief Act and its extension, millions of homeowners currently facing foreclosure could be liable for income taxes on the discharge of debt on their principal residence. IRS estimates suggest the dollar amount of forgiven mortgage debt excluded from income could be significant. IRS Statistics of Income (SOI) officials estimate that for tax year 2008, the most current tax year for which data are available, about 126,000 to 169,000 returns included a Form 982, excluding a total of about $15.2 billion to $24.6 billion of forgiven debt from taxable income. IRS estimates suggest that for about 61,000 to 93,000 of the returns with a Form 982, forgiven debt for a qualified principal residence was the only type of forgiven debt, and taxpayers excluded about $6.4 billion to $11.8 billion from taxable income.
Additionally, because taxpayers excluding multiple types of debt from income are only required to report the total amount being excluded and not the amount for each individual type, IRS lacks data to determine the dollar amount of forgiven mortgage debt excluded for these taxpayers. IRS faces several compliance challenges in administering this complicated tax provision. IRS officials reported that it may be difficult to collect additional taxes on forgiven debts, particularly when taxpayers are already insolvent and defaulting on debts, and that this and other considerations, such as IRS’s return on investment, would affect IRS’s decisions about allocating resources for enforcing this provision. However, as noted above, there is evidence some taxpayers have the ability to pay additional tax if owed, and certain housing market data show that the potential for significant noncompliance with the exclusion of forgiven mortgage debt exists. For example, housing market experts who publish regular foreclosure and delinquency surveys confirmed to us that mortgages on vacation and investment homes may account for a substantial portion of current delinquencies and foreclosures. Over the last 5 years, vacation home and investment property purchases are estimated to have ranged from 40 percent (2005) to 27 percent (2009) of home sales. Current IRS forms provide limited information on mortgage debt forgiveness and IRS is not making full use of all available data. For example, Form 982 does not contain enough information to allow IRS to check for compliance because the form cannot be easily matched against information received from lenders on Form 1099-C. Form 982, Part 1 uses check boxes instead of dollars to report the amount of forgiven debt being excluded. As a result, IRS cannot determine what dollar amounts are being excluded for each type of qualified cancelled debt. 
Form 1099-C instructions ask lenders to provide an open-ended description of the type of cancelled debt, but do not require the lender to uniformly identify the specific type of cancelled debt. For example, the form does not use a series of check boxes or apply codes so that lenders could select among a list of common cancelled debt types (e.g., mortgage, home equity line of credit, credit card, auto loan, etc.). Neither Form 982 nor Form 1099-C requires the taxpayer or lender to disclose the address of the property secured by the forgiven debt. According to IRS officials, collecting such information might not result in a perfect match in all cases across the two forms. However, it would allow IRS to better determine whether the forgiven debt is for a principal residence. Further, we previously recommended that IRS consider collecting the address of the secured property on Form 1098, “Mortgage Interest Statement,” for taxpayers deducting mortgage interest to help determine the home’s use and eligibility for the deduction and improve compliance for taxpayers reporting rental real estate activity. IRS agreed to study the issue. Without being able to systematically identify whether the forgiven debt is for a mortgage, IRS also cannot identify taxpayers who may be eligible for the provision, but are not taking advantage of it. IRS is not using available internal or third-party data to determine whether taxpayers with forgiven mortgage debt own multiple homes, also a potential indicator that the forgiven debt is not for a principal residence. Without having an estimate of the extent of noncompliance, IRS is unable to determine whether additional resources should be dedicated to compliance monitoring for mortgage debt forgiveness or if automated compliance checks are needed.
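To make the check-box limitation concrete, the following sketch shows the kind of aggregation IRS could perform if Form 982, Part 1 captured a dollar amount per exclusion category instead of a check box. The record layout and dollar figures are invented for illustration; only the category names come from the report.

```python
# Hypothetical sketch: if Form 982, Part 1 reported dollars per exclusion
# category rather than check boxes, excluded debt could be totaled by type.
# The filing records below are invented for illustration.

from collections import defaultdict

filings = [
    {"qualified_principal_residence": 150_000.0},
    {"insolvency": 40_000.0, "qualified_principal_residence": 90_000.0},
    {"bankruptcy": 25_000.0},
]

excluded_by_type = defaultdict(float)
for filing in filings:
    for category, amount in filing.items():
        excluded_by_type[category] += amount

# With today's check boxes, only the grand total per return is known;
# with per-category dollar fields, the split by exclusion type falls
# out directly from a pass over the filings.
```

Note that for the second filing, today's form would report only the $130,000 total excluded; the per-category design recovers how much was excluded under each reason, which is exactly the breakdown the report says IRS currently cannot produce.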
At the same time, little concrete information exists to measure the extent to which paid preparers and taxpayers experience difficulty adhering to mortgage debt forgiveness provisions due to the complexity of the law, IRS forms, and instructions. However, anecdotal evidence suggests IRS’s forms and instructions and the related tax laws are difficult to understand. For example, IRS officials acknowledged that the law is confusing and the National Taxpayer Advocate described Form 982 as “technically challenging.” As a result, IRS has taken actions to reduce the complexity associated with filing the Form 982, including revising the form’s instructions and engaging in outreach to paid preparers and software providers on cancelled debt. Currently, the most frequently used commercial software packages provide varying degrees of support for Form 982. In addition, IRS has not explored several low-cost and easy-to-implement options that could help it clarify how to treat forgiven mortgage debt for tax purposes. These options include the following:

Releasing to paid preparers, software companies, or taxpayers an existing interactive tool on cancellation of debt, which is similar to tools already released for other tax laws in that it enables users to navigate a series of questions about taxpayers’ particular cancelled debt circumstances. IRS officials reported that making this tool publicly available would introduce some additional costs. However, based on our observation of the tool, it may clarify the tax treatment of forgiven debt, including mortgage debt, for tax purposes.

Using telephone software to analyze the reasons why taxpayers call IRS with questions about the tax treatment of forgiven mortgage debt.

Encouraging software companies to provide more interactive features that would help taxpayers answer a series of questions about more complex cancelled debt situations and, if applicable, subtract ineligible amounts of debt from the total being excluded from income.
IRS is responsible for enforcing complex tax laws and must consider trade-offs when allocating its enforcement resources, such as the ability to collect assessed taxes and return on investment. Deteriorating trends in the housing market have led to an increase in the number and amount of forgiven mortgage debts, which have complex tax consequences. However, IRS is missing opportunities to both identify noncompliance and assist eligible taxpayers in excluding forgiven mortgage debt before the provision expires at the end of 2012. Revising the forms, collecting more information from taxpayers and lenders, and using third-party data would help IRS determine whether taxpayers are correctly excluding mortgage debt from taxable income and whether IRS needs to dedicate additional resources to this issue. Further, providing greater assistance to taxpayers and expanding outreach to stakeholders are low-cost solutions that could help better highlight the potential tax consequences of cancelled debts. We recommend that the Commissioner of Internal Revenue take the following nine actions. 
To enhance IRS’s ability to detect noncompliance with mortgage debt forgiveness provisions, (1) modify Form 982, Part 1 to segregate the total dollar amount of forgiven debt by exclusion type and capture the information in IRS’s databases; (2) modify Form 1099-C to require lenders to identify in a more useable format (check boxes or coding, for example) the specific type of cancelled debt and capture the information in IRS’s databases; (3) modify the Form 982 and Form 1099-C so that filers disclose the address of the secured property for which the debt is being forgiven and capture the information in IRS’s databases; (4) determine if available data (including IRS and third-party data) would allow IRS to better identify whether the debt being excluded is for a principal residence; and (5) use the additional data reported on the revised Form 982 and Form 1099-C to assess the extent to which taxpayers are compliant. To provide better information for paid preparers and taxpayers to determine eligibility for excluding forgiven mortgage debt from taxable income, explore and implement readily available low-cost options to help clarify the tax treatment of forgiven debt, including options such as (6) make IRS’s interactive tool for cancelled debt publicly available for the 2011 filing season; (7) use IRS’s telephone software to obtain better information about why, if at all, taxpayers call IRS with questions about forgiven mortgage debt; (8) work with software companies to more fully support complex debt cancellation issues, particularly those related to forgiven mortgage debts; and (9) either send notices to taxpayers when a lender files a Form 1099-C indicating a forgiven mortgage and the taxpayer does not file a Form 982 or document that the costs of doing so would exceed the benefits. We provided a draft of this report to the Commissioner of Internal Revenue. We received written comments from the Deputy Commissioner, Services and Enforcement; his comments are reprinted in appendix II.
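Recommendation 9 amounts to a simple cross-match between lender filings and taxpayer filings. The following is a minimal sketch under stated assumptions: the structured "debt_type" code on Form 1099-C and the indexed set of Form 982 filers are hypothetical data elements that do not exist on the current forms, which is precisely why the report says such matching cannot be done today.

```python
# Hypothetical sketch of the notice trigger in recommendation 9: flag
# taxpayers for whom a lender filed a Form 1099-C coded as a forgiven
# mortgage but who filed no Form 982. The "debt_type" code and record
# layout are assumed, not current IRS data elements.

forms_1099c = [
    {"taxpayer_id": "111", "debt_type": "mortgage", "amount": 120_000.0},
    {"taxpayer_id": "222", "debt_type": "credit_card", "amount": 8_000.0},
    {"taxpayer_id": "333", "debt_type": "mortgage", "amount": 95_000.0},
]
form_982_filers = {"111"}  # taxpayer IDs that filed a Form 982

notice_candidates = [
    record["taxpayer_id"]
    for record in forms_1099c
    if record["debt_type"] == "mortgage"
    and record["taxpayer_id"] not in form_982_filers
]
# Here only taxpayer "333" would be flagged for a potential notice:
# a lender reported a forgiven mortgage, but no Form 982 was filed.
```

A flagged taxpayer is not necessarily noncompliant (the exclusion may simply not apply), which is why the recommendation frames the output as notices rather than assessments.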
He stated that IRS agreed with five of the nine recommendations and said that the other four, related to making changes to the Forms 982 and 1099-C and collecting the resulting data, have significant value. However, the Deputy Commissioner raised the question of whether the costs of making the changes would outweigh the benefits and said that before taking action on the four recommendations, IRS would ascertain the costs and benefits. We agree that costs and benefits should be considered, but we are not sure a useful estimate is possible in this case. As our report states and IRS acknowledges, the lack of data presents challenges in estimating the extent of noncompliance and, therefore, the benefits of additional IRS action. The Deputy Commissioner stated that IRS will review a sample of tax returns filed with Form 982 and analyze available third-party data to determine the character of the cancelled debt. However, our report, based on interviews with IRS officials, said that the available third-party data reported on Form 1099-C do not contain information in a format that could help to systematically determine eligibility. Thus, IRS’s review of a sample of tax returns using only currently available data risks understating the benefits of additional information reporting. To avoid the challenge of developing a complete benefit estimate, we recommended that IRS make relatively minor changes to the Forms 982 and 1099-C that would not impose significant additional burden on taxpayers or third parties. By collecting such additional data, albeit at some cost, IRS would be better positioned to determine whether additional resources are needed to monitor compliance with forgiven mortgage debt rules. IRS also provided technical changes to the report, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or at whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report were Joanna Stamatiades, Assistant Director; Amy Bowser; James Cook; John Dell’Osso; Tom Gilbert; Mark Kehoe; Kirsten Lauber; Patricia MacWilliams; Jessica Thomsen; Benjamin Wories; and Jeff Wojcik.

2. What challenges, if any, does IRS face in ensuring taxpayers correctly exclude forgiven mortgage debt from taxable income and how effectively is IRS addressing the challenges? 3. What challenges, if any, could taxpayers face in understanding whether forgiven mortgage debt can be excluded from taxable income and what steps can be taken to address these challenges?

IRS estimates suggest the dollar amount of forgiven mortgage debt excluded from income could be significant. Although conditions in the current housing market suggest that the potential for significant noncompliance exists, IRS is unable to measure the extent to which taxpayers are compliant with the mortgage debt forgiveness exclusion. Information provided on IRS Forms 982 and 1099-C does not allow IRS to systematically check for noncompliance, nor does IRS require lenders or taxpayers to report the address of the property secured by the mortgage debt being forgiven. Without such information, IRS is unable to determine what additional resources, if any, are needed to ensure compliance.
The complexity of tax provisions regarding forgiven mortgage debt, as well as IRS forms and instructions, makes it difficult for taxpayers to determine whether and what portion of forgiven mortgage debt can be excluded from income. However, IRS has not explored several low-cost options that would be relatively easy to implement and would help clarify the tax treatment of forgiven debt for tax purposes, including making existing interactive tools available, using existing telephone software, and conducting further outreach to external stakeholders.

To do our work, we worked with IRS officials to determine the availability of information related to the tax treatment of forgiven mortgage debt; analyzed IRS data concerning the number and dollar amount of cancelled debts from 2007 through June 2010; analyzed related forms and publications, education and outreach materials, and actions taken by IRS to inform taxpayers, tax software companies, and paid preparers about the tax treatment of cancelled mortgage debt; reviewed how tax software packages from companies that cover 90 percent of the market treat forgiven mortgage debt; and interviewed IRS officials about a variety of issues, and housing market experts from an industry association and a private research company familiar with the current condition of the housing market, including trends in foreclosure and debt cancellation.

The exclusion includes both debt forgiven through foreclosure and loan modification, as long as the discharge of debt was due to a decline in the value of the residence or the financial condition of the taxpayer. The mortgage debt must have been used to buy, build, or substantially improve a principal residence and must be secured by the property. Exclusions are also made for taxpayers who are insolvent or in bankruptcy. The exclusion categories on Form 982 are bankruptcy, insolvency, qualified farm indebtedness, qualified real property business indebtedness, qualified Midwestern disaster area indebtedness, and qualified principal residence indebtedness.
Joint Committee on Taxation (JCT) estimates originally suggested that the exclusion of forgiven mortgage debt from taxable income may result in about $968 million in federal revenue losses from fiscal year (FY) 2008 through FY 2013, and more recent estimates suggest that the revenue losses could be closer to $1.9 billion. The Department of the Treasury’s estimates suggest that the exclusion may result in federal revenue losses of about $1.4 billion from FY 2008 through FY 2013. This suggests that not all taxpayers with forgiven mortgage debt are bankrupt or insolvent and may have the ability to pay taxes on forgiven debts. Taxpayers who are in bankruptcy or insolvent are to exclude forgiven mortgage debt under the bankruptcy or insolvency category on Form 982. Taxpayers with forgiven mortgage debt who are not bankrupt or insolvent are to exclude forgiven mortgage debt under the qualified principal residence category on Form 982. We added revenue loss estimates from the President’s fiscal years 2010 and 2011 budget requests, Feb. 26, 2009, and Feb. 1, 2010, respectively. Based on a sample of 2008 tax returns, IRS Statistics of Income (SOI) officials estimate that for tax year (TY) 2008, about 126,000 to 169,000 returns included a Form 982, excluding a total of about $15.2 billion to $24.6 billion of forgiven debt from taxable income. IRS estimates also suggest that for about 61,000 to 93,000 of the returns with a Form 982, debt for a qualified principal residence was the only type of forgiven debt, and taxpayers excluded about $6.4 billion to $11.8 billion from taxable income. Because taxpayers excluding multiple types of debt only report the total amount being excluded, and not individual debt amounts, IRS lacks the data to determine the dollar amount of forgiven mortgage debt excluded for these taxpayers. In the absence of detailed audits, IRS does not know the extent of noncompliance for forgiven mortgage debt.
However, we identified several conditions suggesting the potential for significant noncompliance exists. 1. Housing market data show significant amounts of forgiven mortgage debt could be taxable income. Real estate market experts estimate that, in 2010, over 3 million foreclosure filings will take place, while about 1 million homes will be repossessed by lenders. Housing market experts who publish regular foreclosure and delinquency surveys confirmed to us that mortgages on vacation and investment homes may account for a substantial portion of current delinquencies and foreclosures. Over the last 5 years, vacation home and investment property purchases are estimated to have ranged from 40 percent (2005) to 27 percent (2009) of home sales. Taxpayers who own second homes or investment properties may differ in their ability to pay taxes from taxpayers with a single residence. During the height of the housing market, homeowners withdrew increasing amounts of housing-secured equity through refinancing, second mortgages, and lines of credit. Estimates of the amount withdrawn in 2005 range from $301 billion to $515 billion. However, IRS is unable to determine whether the proceeds from these loans were used to buy, build, or substantially improve a principal residence. 2. IRS data show a significant increase in the amount and number of cancelled debts since TY 2007. IRS estimates the amount of cancelled debt reported on Form 1099-C has increased over 10 times—from about $19 billion worth of debt in TY 2007 to about $216 billion worth of debt for TY 2009 (as reported to IRS through June 2010). The number of Form 1099-Cs has increased about 80 percent from about 2 million debts to about 3.6 million debts. However, IRS is unable to identify the extent to which this increase is attributable to foreclosures and mortgage modifications, and particularly to debt attributable to a principal residence. 3.
IRS dedicates minimal resources in this area and is unable to report how many returns have been subject to further examination due to cancellation of debt issues, including forgiven mortgage debt. The Automated Underreporter (AUR) program does not pursue underreported income from cancelled debts over certain thresholds based on the assumption that such cancelled debts would be for mortgages and yield little change in the amount of tax owed. Using a rationale similar to AUR, the Wage and Investment examination division does not include cancelled debts or mortgage debt forgiveness as part of the examination process. The Small Business/Self-Employed division may include mortgage cancellations as part of broader audits of taxpayers, including requiring taxpayers to supply supporting documentation related to debt cancellation. IRS officials reported that it may be difficult to collect additional taxes on forgiven debts, particularly when taxpayers are already insolvent and defaulting on debts, and that this and other considerations, such as return on investment, would affect IRS’s decisions about allocating resources for enforcing this provision. There is evidence that some taxpayers have the ability to pay additional tax, if owed. JCT and Treasury revenue loss estimates suggest that without the exclusion, forgiven mortgage debts would generate federal revenue. Taxpayers selecting the qualified principal residence category on the Form 982 are indicating that they are not in bankruptcy or insolvent because if they were, they would be claiming the exclusion under the “bankruptcy” or “insolvency” category on the Form 982 (as we noted earlier). Several limitations with Form 982 and Form 1099-C make it difficult for IRS to measure noncompliance. Form 982 does not contain enough information to allow IRS to check for compliance because the form cannot be matched against information received from lenders on Form 1099-C. Form 982, Part 1 uses check boxes instead of dollars to report the amount of forgiven debt being excluded.
As a result, IRS cannot determine what dollar amounts are being excluded for each type of qualified cancelled debt. Check boxes could be replaced with the actual dollar amount of the cancelled debt that the taxpayer is excluding from income. Form 1099-C does not provide information in a format that could help determine eligibility, including what type of debt (mortgage, credit card, car loan, etc.) is being forgiven. Although IRS receives nearly all 1099-C information returns electronically, the information cannot be used by itself to determine whether the cancelled debt is for a mortgage. IRS instructions ask lenders to be as specific as possible when describing the type of debt being forgiven, but do not require lenders to uniformly identify the specific type of cancelled debt. For example, lenders filing 1099-Cs do not select from a list of types of forgiven debt when completing box 4, which describes the type of debt being forgiven. Because box 4 is an open-ended description, IRS is unable to code or quantify cancelled debts by type. Without being able to systematically identify whether the forgiven debt is for a mortgage, IRS also cannot identify taxpayers who may be eligible for the provision, but are not taking advantage of it. Check boxes or codes could allow lenders to identify the type of debt; if the debt is a mortgage, lenders could report which type (e.g., acquisition, refinance, home equity, etc.). Little concrete information exists to measure the extent to which paid preparers and taxpayers experience difficulty adhering to mortgage debt forgiveness provisions due to the complexity of the law, IRS forms, and instructions. However, anecdotal evidence suggests IRS’s forms and instructions and the related tax laws are difficult to understand. For example, IRS officials acknowledged that the mortgage debt forgiveness law is complex.
The National Taxpayer Advocate described Form 982 as “technically challenging.” The Center for Responsible Lending (a nonprofit organization that seeks to eliminate abusive financial practices) characterized Form 982 as “a very complicated and difficult form.” Several features of the form contribute to this difficulty: multiple types of cancelled debts are reported on Form 982 by individuals; the title of Form 982 is difficult to understand—“Reduction of Tax Attributes Due to Discharge of Indebtedness (and Section 1082 Basis Adjustment)”; Form 982 consists of 23 lines with four pages of instructions and includes technical terms such as “basis reduction” and “debt discharged”; and Form 982 instructions attempt to explain a difficult-to-understand “ordering rule” that requires taxpayers to distinguish between qualified and nonqualified debt. IRS has taken actions to reduce this complexity, including revising Form 982 instructions and Publication 4681, Cancelled Debts, Foreclosures, Repossessions, and Abandonments, to explain the requirements for excluding forgiven mortgage debt, and engaging in outreach to paid preparers and software providers on cancelled debt, including providing presentations and conducting focus groups at tax forums, and issuing press releases and other publications to clarify the tax treatment of forgiven mortgage debt. IRS officials said that paid preparers and software providers have asked few questions about how forgiven mortgage debt should be treated for tax purposes. IRS has not explored several options that would be relatively easy to implement and with some additional cost could help clarify how to treat forgiven mortgage debt for tax purposes. For example, beginning in March 2010, IRS pilot-tested several interactive tax assistant tools on its Web site (e.g., Child Tax Credit, and Making Work Pay Tax Credit). These tools are similar to commercial tax preparation products. IRS officials reported that the test was successful, with a high completion rate for available issues.
Further, they expect to expand the number of interactive tools on IRS’s Web site for more complex tax law issues in the 2011 filing season and beyond. Although IRS developed an interactive tool for cancelled debt that is used by IRS telephone and walk-in employees, IRS did not make the tool publicly available in 2010 because it was not part of the pilot test. IRS also has not explored using contact analytics software (which allows IRS to analyze recorded phone calls) to examine the reasons taxpayers call IRS with questions about forgiven mortgage debt. IRS is in the initial stages of using contact analytics for other purposes, and could leverage contact analytics to help understand why taxpayers are calling about mortgage or cancelled debt. IRS National Account Managers, through regularly scheduled conference calls, discuss issues of mutual interest with tax software companies, including tax law changes, updates to IRS forms and publications, and the upcoming tax filing season. IRS also works with software companies on an ad hoc basis to influence and improve specific guidance provided by tax software regarding complicated tax provisions (e.g., Earned Income Tax Credit eligibility). The most frequently used commercial software packages provide varying degrees of support for Form 982; although the major software packages generally support taxpayers with relatively simple forgiven mortgage debt situations, they provide more limited support for more complex situations, including instances where taxpayers have multiple forgiven debts. Generally, these commercial software packages provide detailed interactive questionnaires or worksheets to calculate other complicated deductions (e.g., what portion of a homeowner’s expenses can be deducted for using a home office). IRS is responsible for enforcing complex tax laws and must consider trade-offs when allocating its enforcement resources, such as the ability to collect assessed taxes and return on investment.
Deteriorating trends in the housing market have led to an increase in the number and amount of forgiven mortgage debts, which have complex tax consequences. However, IRS is missing opportunities to both identify noncompliance and assist eligible taxpayers in excluding forgiven mortgage debt before the provision expires in 2012. Revising the forms and using third-party information could provide IRS with more information to determine whether taxpayers are correctly excluding forgiven mortgage debt from income and whether IRS needs to dedicate additional resources to this issue. Providing greater assistance to eligible taxpayers could help ensure that homeowners understand the potential tax consequences of cancelled debts, in particular foreclosures or mortgage modifications. Expanding outreach efforts to external stakeholders, including software providers, could be part of an effort to reduce common types of misreporting related to cancellation of debt (including forgiven mortgages). We recommend that IRS take the following actions. To enhance IRS’s ability to detect noncompliance with mortgage debt forgiveness provisions, IRS should:
1. modify Form 982, Part 1 to segregate the total dollar amount of forgiven debt by exclusion type and capture the information in IRS’s databases;
2. modify Form 1099-C to require lenders to identify in a more useable format (check boxes or coding, for example) the specific type of cancelled debt and capture the information in IRS’s databases;
3. modify the Form 982 and Form 1099-C so that filers disclose the address of the secured property for which the debt is being forgiven and capture the information in IRS’s databases;
4. determine if available data (including IRS and third-party data) would allow IRS to better identify whether the forgiven debt is for a principal residence; and
5. use the additional data reported on the revised Form 982 and Form 1099-C to assess the extent to which taxpayers are compliant.
To better assist taxpayers, IRS should:
6. make IRS’s interactive tool for cancelled debt publicly available for the 2011 filing season;
7. use IRS’s telephone software to obtain better information about why, if at all, taxpayers call IRS with questions about forgiven mortgage debt;
8. work with tax return preparation software companies to more fully support complex debt cancellation issues, particularly those related to forgiven mortgage debts; and
9. either send notices to taxpayers when a lender files a 1099-C indicating a forgiven mortgage and the taxpayer does not file a Form 982 or document that the costs of doing so would exceed the benefits.
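The notice-matching idea in the last recommendation amounts to a simple cross-check of lender and taxpayer filings. The sketch below uses hypothetical record layouts and taxpayer IDs (not IRS’s actual systems or data formats) to show the logic: flag taxpayers for whom a lender reported a forgiven mortgage on Form 1099-C but no Form 982 was filed.

```python
# Hypothetical records: (taxpayer_id, debt_type) from lender Forms 1099-C;
# the debt_type field assumes the recommended check boxes or codes exist.
forms_1099c = [
    ("TP1", "mortgage"),
    ("TP2", "credit card"),
    ("TP3", "mortgage"),
]
# Hypothetical set of taxpayer IDs that filed a Form 982
forms_982 = {"TP1"}

# Flag mortgage-related 1099-C filings with no corresponding Form 982
flagged = [tid for tid, debt_type in forms_1099c
           if debt_type == "mortgage" and tid not in forms_982]
print(flagged)  # ['TP3'] would receive a notice
```

Note that this check only becomes systematic if Form 1099-C identifies the debt type in a machine-usable way, which is the point of the second recommendation.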
To assist the growing number of taxpayers facing foreclosure or mortgage restructuring, the Mortgage Forgiveness Debt Relief Act of 2007, and its 3-year extension as part of the Emergency Economic Stabilization Act of 2008, allows taxpayers generally to exclude from taxable income forgiven mortgage debt used to buy, build, or substantially improve a principal residence. Joint Committee on Taxation (JCT) estimates originally suggested that the exclusion of forgiven mortgage debt from taxable income may result in about $968 million in federal revenue losses from fiscal year (FY) 2008 through FY 2013, and more recent estimates suggest that the revenue losses could be closer to $1.9 billion. Department of the Treasury estimates suggest that the exclusion may result in federal revenue losses of about $1.4 billion from FY 2008 through FY 2013. Some taxpayers with forgiven mortgage debts may be bankrupt or insolvent; however, others are not and therefore may have the ability to pay taxes on forgiven mortgage debts. The briefing slides summarize our assessment of the Internal Revenue Service's (IRS) administration of this tax provision. In response to your request, our objectives were to identify (1) the number of taxpayers who have reported the exclusion of forgiven mortgage debt since the program's inception and the dollar amount excluded; (2) the challenges, if any, IRS faces in administering the exclusion, and to evaluate how effectively IRS is addressing those challenges; and (3) the challenges, if any, taxpayers could face in understanding whether forgiven mortgage debt can be excluded from taxable income, and to evaluate how to address those challenges. IRS estimates suggest the dollar amount of forgiven mortgage debt excluded from income could be significant.
IRS Statistics of Income (SOI) officials estimate that for tax year 2008, the most current tax year for which data are available, about 126,000 to 169,000 returns included a Form 982, excluding a total of about $15.2 billion to $24.6 billion of forgiven debt from taxable income. IRS estimates suggest that for about 61,000 to 93,000 of the returns with a Form 982, forgiven debt for a qualified principal residence was the only type of forgiven debt, and taxpayers excluded about $6.4 billion to $11.8 billion from taxable income. Additionally, because taxpayers excluding multiple types of debt from income are only required to report the total amount being excluded and not the amount for each individual type, IRS lacks data to determine the dollar amount of forgiven mortgage debt excluded for these taxpayers. IRS faces several compliance challenges in administering this complicated tax provision. IRS officials reported that it may be difficult to collect additional taxes on forgiven debts, particularly when taxpayers are already insolvent and defaulting on debts, and that this and other considerations, such as IRS's return on investment, would affect IRS's decisions about allocating resources for enforcing this provision. However, there is evidence some taxpayers have the ability to pay additional tax if owed, and certain housing market data show that the potential for significant noncompliance with the exclusion of forgiven mortgage debt exists. Over the last 5 years, vacation home and investment property purchases are estimated to have ranged from 40 percent (2005) to 27 percent (2009) of home sales. Current IRS forms provide limited information on mortgage debt forgiveness and IRS is not making full use of all available data. For example, 1) Form 982 does not contain enough information to allow IRS to check for compliance because the form cannot be easily matched against information received from lenders on Form 1099-C. 
Form 982, Part 1 uses check boxes instead of dollars to report the amount of forgiven debt being excluded. As a result, IRS cannot determine what dollar amounts are being excluded for each type of qualified cancelled debt. 2) Form 1099-C instructions ask lenders to provide an open-ended description of the type of cancelled debt, but do not require the lender to uniformly identify the specific type of cancelled debt. For example, the form does not use a series of check boxes or apply codes so that lenders could select among a list of common cancelled debt types (e.g., mortgage, home equity line of credit, credit card, auto loan, etc.). 3) Neither Form 982 nor Form 1099-C requires the taxpayer or lender to disclose the address of the property secured by the forgiven debt. According to IRS officials, collecting such information might not result in a perfect match in all cases across the two forms. However, it would allow IRS to better determine whether the forgiven debt is for a principal residence. Further, we previously recommended that IRS consider collecting the address of the secured property on Form 1098, "Mortgage Interest Statement," for taxpayers deducting mortgage interest to help determine the home's use and eligibility for the deduction and improve compliance for taxpayers reporting rental real estate activity. IRS agreed to study the issue. 4) Without being able to systematically identify whether the forgiven debt is for a mortgage, IRS also cannot identify taxpayers who may be eligible for the provision, but are not taking advantage of it. 5) IRS is not using available internal or third-party data to determine whether taxpayers with forgiven mortgage debt own multiple homes--also a potential indicator that the forgiven debt is not for a principal residence.
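As a rough sketch of why a secured-property address on both forms would help, the example below compares hypothetical addresses reported by a lender (Form 1099-C) and a taxpayer (Form 982). The record layouts, IDs, and the crude normalization are illustrative assumptions; as IRS officials noted, real matching would not be perfect and would require proper address standardization.

```python
def normalize(addr):
    """Crude address normalization, for illustration only."""
    return " ".join(addr.lower().replace(".", "").split())

# Hypothetical secured-property addresses keyed by taxpayer ID
addr_1099c = {"TP1": "123 Main St.", "TP2": "9 Oak Ave."}
addr_982 = {"TP1": "123 main st", "TP2": "77 Elm Rd"}

# An address match supports treating the forgiven debt as tied to the
# claimed principal residence; a mismatch is a lead worth examining.
mismatches = [tid for tid in addr_1099c
              if normalize(addr_1099c[tid]) != normalize(addr_982.get(tid, ""))]
print(mismatches)  # ['TP2'] warrants a closer look
```

The same address field, joined against property or tax records showing multiple homes, would also support the principal-residence check described in point 5.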
Although jointly financed by the states and the federal government, Medicaid is administered directly by the states and consists of 56 distinct state-level programs. Within broad federal guidelines, each program establishes its own eligibility standards; determines the type, amount, duration, and scope of covered services; and sets payment rates. In general, the federal government matches state Medicaid spending for medical assistance according to a formula based on each state’s per capita income. In fiscal year 2004, the federal contribution ranged from 50 to 77 cents of every state dollar spent on medical assistance. For most state Medicaid administrative costs, the federal match rate is 50 percent. As program administrators, states have primary responsibility for conducting program integrity activities that address provider enrollment, claims review, and case referrals. Specifically, federal statute or CMS regulations require states to collect and verify basic information on potential providers, including whether the providers meet state licensure requirements and are not prohibited from participating in federal health care programs; have an automated claims payment and information retrieval system— intended to verify the accuracy of claims, the correct use of payment codes, and patients’ Medicaid eligibility—and a claims review system— intended to develop statistical profiles on services, providers, and beneficiaries to identify potential improper payments; and refer suspected overpayments or overutilization cases to other units in the Medicaid agency for corrective action and potential fraud cases, generally, to the state’s Medicaid Fraud Control Unit for investigation and prosecution. As noted in our 2004 report, states use a variety of controls and safeguards to stem improper provider payments. 
For example, states target high-risk providers seeking to bill Medicaid with on-site facility inspections, criminal background checks, and probationary or time-limited enrollment. States also reported using information technology to integrate databases containing provider, beneficiary, and claims information and to increase the effectiveness of their utilization reviews. Various states individually attributed cost savings or recoupments valued in the millions of dollars to these efforts. In contrast, CMS’s role in curbing fraud and abuse in the Medicaid program is largely one of support to the states. As we reported last year, CMS administers two pilot projects—one focused on measuring the accuracy of a state’s Medicaid claims payments (Payment Accuracy Measurement (PAM)) and the other focused on improper billing detection and utilization patterns by linking Medicare and Medicaid claims information (Medi-Medi). CMS also sponsors general technical assistance and information-sharing through its Medicaid fraud and abuse technical assistance group (TAG). In addition, CMS performs oversight of states’ Medicaid fraud and abuse control activities. (See table 1.) A wide disparity exists between the level of resources CMS expends to support and oversee states’ fraud and abuse control activities and the amount of federal dollars at stake in Medicaid benefit payments. In addition, CMS’s organizational placement of staff and lack of strategic planning suggest a limited commitment to improving states’ Medicaid fraud and abuse control efforts. The resources CMS devotes to working with states to fight Medicaid fraud and abuse do not appear to be commensurate with the size of the program’s financial risk. In fiscal year 2005, CMS’s Medicaid staff resources allocated to supporting or overseeing states’ anti-fraud and abuse operations were an estimated 8.1 FTEs—3.6 FTEs at headquarters and 4.5 FTEs in the regional offices.
Staff at headquarters are engaged in arranging and conducting the on-site compliance reviews of states’ fraud and abuse control efforts and in information-sharing activities. Staff at the regional offices also participate in the state compliance reviews and respond to state inquiries. Canvassing the 10 regional CMS offices, we found that 7 regions each have a fraction of an FTE and the rest each have less than 2 FTEs devoted to providing assistance on fraud and abuse issues. For example, Region IV—which covers eight states and accounted for $33 billion of federal funds for Medicaid benefits in fiscal year 2004— reported having 1 FTE devoted to Medicaid fraud and abuse control activities. (See table 2.) For fiscal year 2006, CMS’s budget has no line item devoted to Medicaid fraud and abuse control activities. The project to estimate payment error rates known as PAM/PERM (required by statute) and the Medi-Medi pilot project (with benefits accruing to both programs) are financed through a statutorily established fund—the Health Care Fraud and Abuse Control (HCFAC) account. (See table 3.) The HCFAC monies from which these two projects are largely financed are known as “wedge” funds. As CMS’s distribution of these funds varies from year to year, the level of support for fraud and abuse control initiatives is uncertain and depends on the priorities set by the agency. For example, fiscal year 2005 funds allocated from the HCFAC account for PAM/PERM and Medi-Medi were less than half the funds allocated in fiscal year 2004. In contrast, Medicare fraud and abuse control activities at CMS are financed primarily through earmarked funds from another HCFAC component—the Medicare Integrity Program. CMS’s Medicaid compliance reviews are funded through a different source—HHS’s budget appropriation. In fiscal year 2004, the budget for this activity was $26,000, down from $40,000 in fiscal year 2003 and $80,000 in fiscal year 2002. 
The placement of Medicaid’s antifraud and abuse function in CMS’s organizational structure and a lack of stated goals and objectives suggest a limited institutional commitment to Medicaid fraud and abuse control activities. Currently, two different headquarters offices are charged with working with states on fraud and abuse issues. CMS’s Office of Financial Management staffs the PAM/PERM and Medi-Medi initiatives, while the Center for Medicaid and State Operations (CMSO) staffs the state compliance reviews and TAG functions. Under this organizational structure, the Medicaid fraud and abuse staff in CMSO are not in an optimal position to leverage the resources allocated to the office with responsibility for developing tools and strategies for combating fraud and abuse. As further evidence of the low priority assigned to Medicaid fraud and abuse control, the planning, outreach, and building of staff expertise lack leadership continuity. From 1997 to 2003, the leadership and funding of CMS’s support for states’ antifraud and abuse efforts resided in a consortium of two regional offices. The consortium led a network of regional fraud and abuse coordinators and state Medicaid representatives, sponsoring telephone conferences and workshops, seminars, and training sessions aimed at sharing best practices for fighting fraud and abuse. Medicaid staff based at headquarters reported to a national network coordinator located at one of the consortium’s regional offices. With the retirement of the national coordinator in 2003, the consortium relinquished its leadership and funding role and the Medicaid antifraud and abuse activities were reassigned to CMSO without additional resources. Since then, no nationwide meetings with state program integrity officials have been held. At the same time, CMS lacks a strategic plan to drive its Medicaid antifraud and abuse operations.
Goals for the long term, as well as plans on how to achieve them, have not been specified in any public department or agency planning documents. For example, HHS’s fiscal year 2004 performance and accountability report cited Medicaid’s high risk of payment errors as the department’s management challenge for fighting Medicaid fraud and abuse. To address this challenge, the report cited the PAM/PERM initiative for estimating payment error rates, as this activity is required in federal statute. But there was no mention of any other fraud and abuse support or oversight activities or goals. Similarly, the discussion of Medicaid program integrity in the Administration’s Budget for Fiscal Year 2006 covers activities to curb states’ inappropriate financing mechanisms but makes no mention of federal support or oversight of states’ fraud and abuse efforts. At the agency level, CMS officials were unable to provide any publicly available planning documents specifying short- or long-term Medicaid program goals that target fraud and abuse. The low priority given to CMS activities in support of states’ fraud and abuse control efforts is having serious consequences for current projects. CMS’s distribution of resources may require some activities to be scaled back and others to be eliminated. Specifically, the expansion of the Medi-Medi data match project has been slow, leaving potentially millions of dollars in cost avoidance and cost savings unrealized. This project enables claims data analysts to detect patterns that may not be evident when providers’ billings for either Medicare or Medicaid are viewed in isolation. For example, by combining data from each program, analysts can identify “time bandits,” or providers who bill for more than 24 hours in a single day. 
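The “time bandit” pattern described above lends itself to a straightforward check once Medicare and Medicaid claims are combined. The sketch below uses hypothetical claim records and provider IDs (real claims data are far messier and would need hours derived from procedure codes), but it shows why the pattern is invisible when either program’s billings are viewed alone.

```python
from collections import defaultdict

# Hypothetical combined claims: (provider_id, service_date, billed_hours, program)
claims = [
    ("DR1", "2005-03-01", 10, "medicare"),
    ("DR1", "2005-03-01", 16, "medicaid"),
    ("DR2", "2005-03-01", 8,  "medicare"),
]

# Sum billed hours per provider per day across both programs
hours = defaultdict(float)
for provider, date, billed_hours, program in claims:
    hours[(provider, date)] += billed_hours

# Providers billing more than 24 hours in a single day -- neither
# program alone shows DR1 exceeding the limit (10 and 16 hours).
time_bandits = [key for key, total in hours.items() if total > 24]
print(time_bandits)  # [('DR1', '2005-03-01')]
```

Within Medicare alone DR1 billed 10 hours and within Medicaid alone 16 hours, so only the combined view reveals the impossible 26-hour day.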
As of March 31, 2005, seven states with fully operational projects reported returns to the Medicaid and Medicare programs of $133.1 million in provider payments under investigation, $59.7 million in program vulnerabilities identified, and $2.0 million in overpayments to be recovered. In addition, 240 investigations had been initiated and 28 cases referred to law enforcement agencies. Two additional states, Ohio and Washington, have begun Medi-Medi projects that are expected to be operational later this year. Because of anticipated unmet funding needs, existing Medi-Medi data match activities are in jeopardy of being scaled back considerably. As CMS stated in its fiscal year 2005 second quarter report on Medi-Medi projects, “Eliminating certain Medi-Medi projects in their entirety and/or dramatically reducing the level of effort across all of the projects are among the approaches under consideration. Beyond FY 2006, the entire project will terminate if additional funding is not identified.” Agency officials noted that several additional states have expressed interest in participating but expanding the program to more states will not occur without a new allocation or realignment of resources. Plans for additional activities that involve coordination with Medicare have been put on hold, pending budget decisions. These include enhanced oversight of prescription drug fraud when Medicare begins covering Medicaid beneficiaries’ drug benefits in 2006 and the use of a unified provider enrollment form instead of separate forms for Medicare and Medicaid. Similarly, CMS’s role as provider of technical assistance and disseminator of states’ best practices has been severely limited because of competing priorities. At a health care fraud and abuse conference sponsored by HHS and the Department of Justice in 2000, participants from states and CMS regional offices articulated their common unmet needs with regard to fraud and abuse technology.
The top three areas cited were information-sharing and access to data; training in data analysis and use of technology; and staffing, hardware, and software resources. CMS has not sponsored a national conference with state program integrity officials since 2003 and has not sponsored any fraud and abuse workshops or training since 2000. According to a CMS official, such information-sharing and technical assistance activities would not be expensive to support—less than $100,000 annually—and could result in returns that would exceed this relatively low amount. Resource shortages also account for CMS’s limited oversight of states’ Medicaid prevention, detection, and referral activities for improper payments. Since January 2000, CMS’s Medicaid staff from headquarters and regional offices have been conducting compliance reviews of about seven to eight states a year. The reviews are aimed at ensuring that states have processes and procedures in place, in compliance with federal requirements for enrolling providers, reviewing claims, and referring cases. These compliance reviews have been effective at identifying weaknesses in states’ efforts to combat fraud and abuse. For example, in the course of these reviews, CMS has found instances in which a state had no process in place to prevent payments to excluded providers, states did not use their authority to evaluate providers’ professional or criminal histories as part of the provider enrollment process, and a state did not follow appropriate procedures for referring a case to state law enforcement authorities. States have reported making positive modifications in their programs as a result of the CMS compliance reviews. Nevertheless, at the currently scheduled pace, states’ programs will be reviewed once in 7 years at the earliest. Because the compliance reviews are infrequent, CMS’s knowledge of states’ fraud and abuse activities is, for many states, substantially out-of-date at any given time.
Relatively few and questionably aligned resources and an absence of strategic planning underscore the limited commitment CMS has made to strengthening states’ ability to curb fraud and abuse. Despite the millions of dollars CMS receives annually from a statutorily established fund for fraud and abuse control, the agency has not allocated these resources to sufficiently fund initiatives that can help states increase the effectiveness of their Medicaid fraud and abuse control efforts. Developing a strategic plan for Medicaid fraud and abuse control activities would give CMS a basis for providing resources that reflect the financial risk to the federal government. We discussed facts in this statement with a relevant CMS official. He noted that CMS does not view fraud and abuse control activities as separate from its financial management responsibilities. He indicated that CMS has invested substantial resources in program integrity activities that focus on the financial oversight of the Medicaid program. While we agree that financial oversight of Medicaid is a key component of program integrity, we maintain that the other component—fraud and abuse control activities—warrants a greater commitment than it currently receives. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the Committee may have. For further information regarding this testimony, please contact Leslie G. Aronovitz at (312) 220-7600. Hannah Fein, Sandra Gove, and Janet Rosenblad contributed to this statement under the direction of Rosamond Katz. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Today's hearing addresses fraud and abuse control in Medicaid, a program that provides health care coverage for eligible low-income individuals and is jointly financed by the federal government and the states. In fiscal year 2003, Medicaid covered nearly 54 million people and the program's benefit payments totaled roughly $261 billion, of which the federal share was about $153 billion. States are primarily responsible for ensuring appropriate payments to Medicaid providers through provider enrollment screening, claims review, overpayment recoveries, and case referrals. At the federal level, the Centers for Medicare & Medicaid Services (CMS) is responsible for supporting and overseeing state fraud and abuse control activities. Last year, GAO reported that CMS had initiatives to assist states, but the dollar and staff resources allocated to oversight suggested that CMS's level of effort was disproportionately small relative to the risk of federal financial loss. Concerned about the stewardship of federal Medicaid funds, Congress has raised questions about CMS's commitment to Medicaid fraud and abuse control. This statement focuses on (1) the level of resources CMS currently applies to helping states prevent and detect fraud and abuse in the Medicaid program and (2) the implications of this level of support for CMS fraud and abuse control activities. Since GAO reported last year, the resources CMS expends to support and oversee states' Medicaid fraud and abuse control activities remain out of balance with the amount of federal dollars spent annually to provide Medicaid benefits. In fiscal year 2005, CMS's total staff resources allocated to these activities were about 8.1 full-time equivalent (FTE) staffing units--approximately 3.6 FTEs at headquarters and 4.5 FTEs in the regional offices.
Among CMS's 10 regional offices--each of which oversees states whose Medicaid outlays include billions of federal dollars--7 offices each have a fraction of an FTE and the rest each have less than 2 FTEs allocated to Medicaid fraud and abuse control efforts. Moreover, the placement of the Medicaid fraud and abuse control staff at headquarters--apart from the agency's office responsible for other antifraud and abuse activities--as well as a lack of specified goals for Medicaid fraud and abuse control raise questions about the agency's level of commitment to improve states' activities in this area. CMS's support and oversight initiatives include a pilot project for states to enhance claims scrutiny activities by coordinating with the Medicare program. Despite the project's positive results in several states, less than one-fifth of the states currently participate in the project and resource constraints may require CMS to scale back these efforts instead of expanding them to additional states that are seeking to participate. Similarly, CMS's support activities--such as conducting national conferences, regional workshops, and training--have been terminated altogether. The frequency of CMS's on-site reviews of states' fraud and abuse control activities--about seven to eight visits a year--has not changed since GAO reported on this last year. This means that federal oversight of a state's Medicaid program safeguards will occur, at best, only once every 7 years. Relatively few and questionably aligned resources and an absence of strategic planning underscore the limited commitment CMS has made to strengthening states' ability to curb fraud and abuse. Despite the millions of dollars CMS receives annually from a statutorily established fund for fraud and abuse control, the agency has not allocated these resources to sufficiently fund initiatives that can help states increase the effectiveness of their Medicaid fraud and abuse control efforts.
Developing a strategic plan for Medicaid fraud and abuse control activities would give CMS a basis for providing resources that reflect the financial risk to the federal government. In discussing the facts in this statement with a CMS Medicaid official, he stated that the agency does not view antifraud and abuse initiatives as separate from financial oversight, an area that has received substantial resources in recent years. While we agree that financial management is important to program integrity, we believe that an increased commitment to helping states fight fraud and abuse is warranted.
Early in the 2000 Census cycle, the U.S. Census Bureau was researching coverage measurement options for the 2000 Census, including the Post Enumeration Survey (PES) methods used in past decennial censuses. The bureau explored a number of design options aimed at improving data accuracy while controlling costs. In 1993, the bureau was also evaluating the feasibility of conducting a one-number census, which combines the features of both the traditional head count and statistical methods to produce a single count before the mandated deadlines. In May 1995, the bureau announced that it would conduct a sample survey of 750,000 housing units, called Integrated Coverage Measurement (ICM), to estimate how many housing units and people it would miss or count more than once in the 2000 Census. In this initial design for the 2000 Census, the bureau planned to use statistical methods to integrate the results of this survey with the traditional census enumeration to provide a one-number census by December 31, 2000. The U.S. Supreme Court ruled in January 1999 that 13 U.S.C. 195 prohibited the use of statistical sampling to generate population data for reapportioning the U.S. House of Representatives. However, the court’s ruling did not prohibit the use of statistical sampling for other purposes, such as adjusting the census data used in formulas to distribute billions of dollars of federal funding to state and local governments. Following the Supreme Court ruling, the bureau abandoned certain statistical aspects of the ICM program, and announced the A.C.E. program to assess the quality of the population data collected in the 2000 Census, using a smaller sample of 300,000 housing units. The bureau conducted A.C.E., which corresponded to the PES in past censuses and the ICM in the original 2000 Census Plan, to measure and correct the overall and differential coverage of the U.S. resident population in the 2000 Census. Although A.C.E. was generally implemented as planned, the bureau found that A.C.E. 
overstated the census net undercount. This was due, in part, to errors introduced during matching operations and to other remaining uncertainties. The bureau has reported that additional review and analysis would be necessary before any potential uses of A.C.E. data could be considered. Due to uncertainties or errors in the A.C.E. survey results, the acting director of the bureau decided in separate decisions in March 2001 and October 2001 that the 2000 Census tabulations would not be adjusted for any purpose, including distribution of billions of dollars in federal funding. These decisions were consistent with those for the 1990 Census, which was not adjusted due to other problems. According to senior bureau officials, the bureau is continuing to evaluate issues related to A.C.E. and the census, and the results of its evaluation are expected to influence the bureau’s planning for the 2010 Census. The bureau receives two appropriations from the Congress: (1) salaries and expenses and (2) periodic censuses and programs. The salaries and expenses appropriation provides 1-year funding for a broad range of economic, demographic, and social statistical programs. The periodic censuses and programs appropriation includes primarily no-year funding to plan, conduct, and analyze the decennial censuses every decade and for other authorized periodic activities. Since fiscal year 1996, the bureau has prepared its annual budget request for the 2000 Census in eight broad frameworks of effort that were submitted to the Office of Management and Budget (OMB) and the Congress. For management, program, financial, staffing, and performance purposes, frameworks are further divided by the bureau into activities and then projects within these activities. The bureau accounts for the costs of conducting the ICM/A.C.E. programs in its Commerce Administrative Management System (CAMS), which became operational in fiscal year 1997. 
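The framework–activity–project hierarchy described above implies a simple rollup of project-level costs to the activity level. The sketch below uses hypothetical project codes and dollar amounts (in millions) to illustrate the aggregation; it is not CAMS's actual data structure.

```python
# Hypothetical project-level obligations keyed by
# (framework, activity, project); amounts in millions of dollars
obligations = {
    ("Framework A", "ICM/A.C.E.", "Project 1"): 2.5,
    ("Framework A", "ICM/A.C.E.", "Project 2"): 1.0,
    ("Framework B", "Other",      "Project 3"): 4.0,
}

# Roll project-level costs up to the activity level, the way
# project-coded financial reports allow program costs to be identified
activity_totals = {}
for (framework, activity, project), amount in obligations.items():
    activity_totals[activity] = activity_totals.get(activity, 0) + amount

print(activity_totals["ICM/A.C.E."])  # 3.5
```

This kind of rollup is only possible for costs that carry a unique project code, which is why costs the bureau booked to general research and development or to shared 2000 Census operations could not be identified as ICM/A.C.E. costs.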
Bureau financial management reports generated by CAMS have provided appropriated amounts, expended and obligated amounts, and variances at the project level from fiscal year 1997 to the current period. The ICM/A.C.E. programs are an activity comprising eight projects contained within three frameworks. Fiscal year 1996 was the first year the bureau set up a specific project code to identify ICM program costs; the code was used through fiscal year 1999. However, it was difficult to identify the change to the A.C.E. program beginning in fiscal year 2000 because the bureau did not change many of the project descriptions in CAMS from the ICM program. As discussed in our December 2001 report, we identified specific control weaknesses for fiscal year 2000 related to the lack of controls over financial reporting and financial management systems. To meet the objective of responding to seven questions concerning ICM/A.C.E. program life cycle costs, we reviewed and analyzed budget and program data for all coverage measurement programs that existed during the 2000 Census (for fiscal years 1991 to 2003), which included the ICM and A.C.E. programs. We did not audit budget and other financial data provided by the bureau. We also reviewed planning and methodology documents and other available information in order to determine the history of the programs. Also, we identified ICM and A.C.E. project accounts and analyzed amounts by fiscal year using the financial management reports generated by CAMS. We discussed the results of our analysis with senior bureau officials and interviewed bureau officials to obtain their views and observations regarding the ICM and A.C.E. programs. It was not our objective to assess the efficiency of expenditures and obligations against planned budget appropriations. We encountered several limitations in the scope of our work on this assignment as follows. We were unable to determine the complete life cycle costs of the ICM/A.C.E. 
programs because the bureau considered any ICM/A.C.E. related costs from fiscal years 1991 through 1995 as part of its general research and development programs and thus did not separately track these costs. Although some costs were tracked in fiscal year 1996, the bureau still considered these costs as research and development and did not include these costs as ICM/A.C.E. program costs. We were further unable to identify ICM/A.C.E. portions of costs, such as evaluations and data processing, which the bureau included with other 2000 Census programs. Our work was performed in Washington, D.C. and at U.S. Census Bureau headquarters in Suitland, Maryland, from February 2002 through July 2002. Our work was done in accordance with U.S. generally accepted government auditing standards. On November 17, 2002, the Department of Commerce provided written comments on a draft of this report and we have reprinted the comments in appendix II. Technical comments were also provided by the department and incorporated into the report where appropriate, but have not been reprinted. Although the bureau tracked some costs of conducting the ICM/A.C.E. programs, we found that the bureau did not identify the complete life cycle costs of the programs due to the following three factors. First, the bureau only tracked the costs of conducting the ICM/A.C.E. programs, which covers the period from fiscal year 1997 through 2003. Although life cycle costs for the 2000 Census cover a 13-year period from fiscal years 1991 through 2003, senior bureau officials said that the ICM/A.C.E. program was not viable for implementation until fiscal year 1997. Therefore, the bureau considered costs from earlier years as part of its general research and development programs and the bureau did not assign unique project codes to identify ICM/A.C.E. programs and related costs in its financial management system. 
Second, although $3.6 million of fiscal year 1996 obligated costs were identifiable in the bureau’s financial management system as an ICM special test, the bureau did not consider these costs as part of the ICM/A.C.E. programs. Instead, these costs were considered general research and development. However, because the bureau separately identified these costs as ICM program costs, we have included the $3.6 million as part of the ICM/A.C.E. program costs we could identify in this report. Finally, we were unable to identify the ICM/A.C.E. portions of costs, such as evaluations and data processing, which the bureau included with other 2000 Census programs. For example, in late fiscal year 2000 and after, the bureau did not separate A.C.E. evaluations from its other 2000 Census evaluations in its financial management system. Bureau officials stated that the contracts for evaluations included overall 2000 Census and A.C.E. evaluations, and did not have a separate code identifying A.C.E. costs. Similarly, the bureau did not capture all costs for items such as data processing by programs like ICM/A.C.E. These types of operations were conducted for the 2000 Census overall, were budgeted by framework, were not separated by program in the bureau’s financial management system, and were not allocated back to individual projects. Therefore, we were unable to identify these types of costs for the ICM/A.C.E. programs. Due to the limitations in the bureau’s data, our responses to the seven specific questions identified in your request do not include all ICM/A.C.E. life cycle costs and are limited to available cost information covering fiscal years 1996 through 2003, except where indicated, and exclude such costs as A.C.E. evaluations and some data processing. The following sections include our responses to the seven questions on ICM/A.C.E. program life cycle costs. 1. What were the original estimated life cycle costs for the ICM/A.C.E. programs? 
The bureau originally estimated the costs of conducting the ICM program to be about $400 million when it planned to use statistical methods to integrate the results of a survey based on 750,000 housing units with the traditional census enumeration to provide a one-number census. This original estimate included fiscal years 1997 through 2003. However, this estimate was incomplete, as the bureau did not include program costs prior to fiscal year 1997 because it considered them as general research and development costs. The bureau also combined costs for A.C.E. evaluation and data processing with other program costs in different frameworks. The U.S. Supreme Court ruled in January 1999 that statistical sampling could not be used to generate population data for reapportioning the House of Representatives. As a result of the ruling, in June 1999, as part of its amended fiscal year 2000 budget request, the bureau decreased the ICM/A.C.E. program by about $214 million, due to a reduction in the sample size from 750,000 to 300,000 housing units. We could not identify from bureau records an original estimate of life cycle costs for the scaled-back A.C.E. survey alone. 2. What was the source and support for $400 million in life cycle costs reported by the bureau for the ICM/A.C.E. programs? In 1995, the bureau estimated life cycle costs for the 2000 Census in 13 frameworks; however, bureau documents did not break out the frameworks into activities and projects. The earliest evidence of the $400 million cost estimate for conducting the ICM/A.C.E. program for the 2000 Census appeared in the original fiscal year 2000 budget justification for overall census operations, submitted to the Congress in February 1999. This original budget was prepared based on the initial design for ICM, which planned to incorporate statistical methods to integrate the results of a survey based on 750,000 housing units with the traditional census enumeration to provide a one-number census. 
3. How were ICM/A.C.E. program costs estimated? According to bureau officials, estimates of ICM/A.C.E. costs were based on assumptions about the needs for headquarters and support staff and related benefits, contractual services, travel, office space, and equipment costs necessary to conduct and support operations of the program. The bureau used an electronic cost model to calculate many of the estimates for the ICM/A.C.E. programs. For personnel costs, the A.C.E. program costs were divided into costs for data collection and costs for headquarters full-time equivalent (FTE) staff and support staff as follows. The A.C.E. field staff needed to conduct each A.C.E. data collection operation included enumerators, crew leaders, field operations supervisors, and assistants. The cost model was designed to estimate the number of field staff positions, hours, FTEs, salary costs, and mileage costs. In the cost model, each operation had its own distinct production assumptions based on the data collection needs for that operation. Based on operational needs, the bureau determined the assumptions for production rates, mileage rates, production and training days, and hours worked per day. The magnitude of the A.C.E. field production labor was determined by the A.C.E. field operation workload. Based on the workload, the number of A.C.E. enumerators was calculated for each operation. Then, based on the number of enumerators, the bureau determined the number of crew leaders, field operations supervisors, and assistants needed. The number of production positions became the bureau’s basis for the number of staff to be trained. The number of positions, both production and trainee, was then used to estimate the salary cost as a function of the total production and trainee hours and applicable labor rates. Once labor rates were determined, a percentage was used to calculate benefit costs. For nonpersonnel costs, the bureau estimated the costs based on the following. 
Contract costs were estimated based upon procurement needs for goods and services, including contractors hired to assess the feasibility of A.C.E. operations and to evaluate the results of the program. Travel costs were estimated using the numbers of production and trainee positions to calculate the average miles per case and the mileage reimbursement rate. Office space estimates were based on the number of people who needed space, the number of square feet per person, and the cost per square foot. Equipment and supply costs were based on the needs of each employee and the specific needs of each A.C.E. operation. This included laptop computers that were provided to field data collection staff to conduct interviews and to monitor the operational progress of the program. 4. How much did Census budget for the ICM/A.C.E. programs? As shown in figure 1, we identified from bureau records budgeted amounts of $276.5 million for conducting the ICM/A.C.E. programs. Of this amount, $64.2 million was for the ICM program from fiscal year 1996 through 1999, and $212.3 million was for the A.C.E. program from fiscal year 2000 through 2003. Also, see table 1 in appendix I for additional details of ICM/A.C.E. budgeted costs by framework and project. 5. What were the obligated life cycle costs for the ICM/A.C.E. programs? As shown in figure 2, we identified from bureau records obligated amounts of $206.9 million, of which $58.4 million was for the ICM program from fiscal year 1996 through 1999, and $148.5 million was for the A.C.E. program for fiscal years 2000 and 2001. We did not include obligated costs for fiscal year 2002 as they are not yet final and fiscal year 2003 obligations have yet to be incurred. Also, see table 2 in appendix I for additional details of obligated costs for the ICM/A.C.E. programs. As shown in figure 3, of the $206.9 million of obligated ICM/A.C.E. program costs through fiscal year 2001, 65 percent or about $135 million were for salaries and benefits. 
The next largest category was for contractual services, which constituted about $22.3 million, or 11 percent, of ICM/A.C.E. program costs. The third largest category was for equipment, which constituted about $22 million, or 11 percent, of ICM/A.C.E. program costs. Other costs, including office space, travel, and supplies, made up about $27.6 million, or 13 percent, of program costs. 6. Were there any budgeted funds for the ICM/A.C.E. programs not used as of the end of fiscal year 2001, and if so, how much? About $57.7 million of budgeted funds that we identified from bureau records for the ICM/A.C.E. programs were not obligated through fiscal year 2001. For fiscal years 1996 and 1997, there were no unused funds for the ICM program. For fiscal year 1998, about $2.7 million remained unobligated for the ICM program for the following reasons. $1.5 million was due to the dress rehearsal housing unit follow-up workload being smaller than anticipated; a bureau bonus program not being implemented although budgeted; and less mileage reimbursement than budgeted under project code 6205 (ICM Dress Rehearsal). $400,000 was due to unused budgeted funds for salaries related to project code 6352 (ICM Coverage Measurement). $400,000 was due to unused budgeted funds for salaries and a delay in awarding contract services under project code 6444 (ICM Procedures and Training). $400,000 was due to unused budgeted funds for regional office manager and assistant manager salaries and travel costs due to delays in hiring under project code 6480 (ICM Collection). For fiscal year 1999, about $3.6 million budgeted for the ICM program remained unobligated for the following reasons. About $2.3 million related to project code 6480 (ICM Collection) was not used, including $1.6 million due to unspent salaries related to hiring delays, hiring fewer staff than authorized for selected positions, and hiring qualified candidates at less than budgeted levels. 
Another $0.7 million was due to less mileage reimbursement than budgeted. About $1.2 million was due to equipment costs and hardware for 2000 being less than budgeted under project code 6608 (ICM Processing). For fiscal year 2000, about $42.5 million remained unobligated for the A.C.E. program, consisting of almost $40 million for project code 6480 (A.C.E. Collection), which was budgeted for the program but was not used, primarily for the following reasons. About $32 million was due to unspent salaries and benefits for office staff in field offices from hiring fewer positions and hiring at lower grades than budgeted and from lower data collection costs due to a reduction in cases requiring personal visits. About $4 million was due to contract obligations for laptop computers and support services being less than budgeted. About $2 million resulted from lower GSA rents than budgeted. For fiscal year 2001, about $8.9 million of unobligated funds remained for the A.C.E. program, consisting mostly of $6.4 million for project code 6480 (A.C.E. Collection), which was budgeted for the program but was not used. 7. What were the ICM/A.C.E. program-related costs for the bureau dress rehearsal in fiscal year 1998? As shown in appendix I, the ICM program-related costs for the 1998 dress rehearsal were captured under project code 6205 (ICM Dress Rehearsal). Of the total $10.8 million budgeted, we were able to identify obligations of $9.4 million from bureau records. Most of these obligations were incurred in fiscal year 1998, with some follow-up amounts in the first quarter of fiscal year 1999. According to bureau officials, the dress rehearsal project activities included data collection and case management, data processing, and implementation of estimation operations. This project also covered the implementation of ICM and some elements of A.C.E. at the Sacramento, California, and Menominee County, Wisconsin, dress rehearsal sites. 
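The workload-driven estimation logic the bureau describes under question 3 (the field operation workload determines the number of enumerators, the enumerator count determines supervisory positions, and the position count drives training, salary, and benefit costs) can be sketched roughly as follows. The function and every input to it are hypothetical placeholders for illustration, not actual bureau assumptions; only the dollar amounts in the dictionary at the end come from the obligated-cost categories reported under question 5.

```python
import math

def field_labor_cost(workload_cases, cases_per_enumerator_day, production_days,
                     hours_per_day, hourly_rate, benefit_pct,
                     enumerators_per_crew_leader=8, training_days=3):
    """Sketch of a workload-driven field labor estimate (all inputs hypothetical)."""
    # The field operation workload determines the number of enumerators needed.
    enumerators = math.ceil(workload_cases / (cases_per_enumerator_day * production_days))
    # Supervisory positions are derived from the enumerator count.
    crew_leaders = math.ceil(enumerators / enumerators_per_crew_leader)
    positions = enumerators + crew_leaders
    # Production positions also define the training workload.
    production_hours = positions * production_days * hours_per_day
    trainee_hours = positions * training_days * hours_per_day
    # Salary cost is a function of total production and trainee hours and the labor rate;
    # benefits are then calculated as a percentage of salary costs.
    salaries = (production_hours + trainee_hours) * hourly_rate
    return salaries + salaries * benefit_pct

# Obligated-cost categories through fiscal year 2001 (dollars in millions),
# from the figures reported under question 5.
obligated = {"salaries and benefits": 135.0, "contractual services": 22.3,
             "equipment": 22.0, "other": 27.6}
total = sum(obligated.values())  # about $206.9 million
```

With placeholder inputs such as a 100,000-case workload, 10 cases per enumerator-day over 20 production days, 8-hour days, a $15 hourly rate, and 10 percent benefits, the sketch yields roughly $1.7 million in field labor costs. The bureau's actual model covered additional positions (field operations supervisors and assistants), mileage reimbursement, and per-operation production assumptions not shown here.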
The Department of Commerce comments expressed disagreement with how we presented answers to the seven questions in the report, but did not comment on the substance of our answers. It said that our report’s conclusions imply financial management or reporting failures and suggest specific control weaknesses in the bureau’s financial management systems. It also said we inferred an inability to properly manage from large unexplained discrepancies between budgeted and obligated amounts for the ICM/A.C.E. programs. Our answers were not prepared with the intent of drawing conclusions beyond the information presented and we did not make interpretive conclusions or qualitative judgments about the ICM/A.C.E. programs. Although not within the scope of this report, our December 2001 report identified internal control weaknesses for fiscal year 2000 related to the bureau’s lack of controls over financial reporting and financial management systems. The department’s written comments and our more detailed evaluation of its concerns are presented in appendix II. As agreed with your offices, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issuance date. At that time, we will send copies to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs, the House Committee on Government Reform, and the House Subcommittee on Civil Service, Census, and Agency Organization. We will also send copies to the Director of the U.S. Census Bureau, the Secretary of Commerce, the Director of the Office of Management and Budget, the Secretary of the Treasury, and other interested parties. This report will also be available on GAO’s home page at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9095 or by e-mail at kutzg@gao.gov or Roger R. Stoltz, Assistant Director, at (202) 512-9408 or by e-mail at stoltzr@gao.gov. 
A key contributor to this report was Cindy Brown-Barnes. The following are GAO’s comments on the letter dated October 17, 2002, from the Department of Commerce. 1. Our report does not make interpretive conclusions or qualitative judgments about the ICM/A.C.E. programs. With bureau assistance, we compiled unaudited budgeted and obligated amounts for projects that the bureau reported in its financial management system as being ICM/A.C.E. related. Our review of these reported costs indicated that life cycle costs of the ICM/A.C.E. programs were not complete due to three factors as discussed in the body of our report. One of the factors we cited that contributed to incomplete life cycle costs was $3.6 million of fiscal year 1996 obligated costs for an ICM special test. In its comments, the bureau pointed out that prior to fiscal year 1996 it had not defined the coverage measurement program, did not allocate any expenditures to the ICM project codes, and could not identify any costs prior to fiscal year 1996. Thus, it was the bureau’s decision to not track specific costs during this time period and to consider them as general research. We also stated that $57.7 million of budgeted funds were not obligated or spent through fiscal year 2001, and, with input from bureau officials, we obtained reasons why these funds were not spent. The bureau did not take exception to these facts in its response and we noted no improprieties in this report. Regarding a reference to specific control weaknesses in its financial management systems, the scope of this report did not include an assessment of internal control weaknesses in the bureau’s financial management systems. However, in a December 2001 report, we identified specific internal control weaknesses for fiscal year 2000 related to the bureau’s lack of controls over financial reporting and financial management systems. 2. We still disagree with the bureau on this point, as we stated in the draft report. 
Because these costs were separately tracked by a specific ICM project code in the bureau’s financial management system, we included them in the costs of the ICM/A.C.E. programs that we could identify. 3. We did not cite discrepancies between the $400 million original cost estimate of the ICM/A.C.E. programs provided in early 1999 and the $277 million budgeted amount we identified for fiscal years 1996 through 2003. An objective of our report was to determine the original estimated life cycle costs for the ICM/A.C.E. programs. The earliest amount that we could identify from bureau records was $400 million, and in our report we explained that this amount was estimated by the bureau before the January 1999 Supreme Court decision. As a result of this decision and as disclosed in our report, the bureau decreased the ICM/A.C.E. program by about $214 million due to a reduction in the sample size from 750,000 to 300,000 housing units. 4. We did not suggest that the difference between $277 million of budgeted life cycle costs and $207 million of obligated life cycle costs demonstrated the bureau’s inability to properly manage and record expenditures relating to the ICM/A.C.E. programs. As presented in our report, the budgeted amount of $277 million included fiscal years 1996 through 2003 and the obligated amount of $207 million included amounts for 2 fewer fiscal years (1996 through 2001). As the bureau pointed out in its response, it is too soon to determine obligated amounts for fiscal years 2002 and 2003 that were budgeted for $12.5 million. Variances for the remaining $57.7 million of unspent funds are discussed in comment 1. 5. The bureau agreed that it did not capture the life cycle costs of evaluations for the ICM/A.C.E. programs because evaluations for all 2000 Census programs were charged to one project code. However, the bureau believes that data processing costs were included in the life cycle costs of the ICM/A.C.E. 
programs and stated that not being able to identify portions of these costs is not demonstrative of a financial management or reporting failure. We agree with the bureau that some data processing costs were captured in the life cycle costs of the ICM/A.C.E. programs as evidenced by project codes for ICM/A.C.E. data processing for procedures, training, and processing as part of Framework 5. However, we do not believe that all data processing costs were included. Similar to evaluation costs, the bureau attributed much of its computer hardware and support costs to overall 2000 Census programs, and did not allocate costs to specific projects or programs. 
To assess the quality of the population data collected in the 2000 Census, the U.S. Census Bureau conducted the Accuracy and Coverage Evaluation (A.C.E.) program, which focused on a survey of housing units designed to estimate the number of people missed, counted more than once, or otherwise improperly counted in the census. GAO reviewed the life cycle costs of the A.C.E. program and its predecessor, the Integrated Coverage Measurement (ICM) program. GAO found that the original estimated life cycle costs of conducting the ICM/A.C.E. programs were $400 million. The first evidence for the original $400 million estimate is in the original budget justifications for fiscal year 2000. The bureau based its estimates of ICM/A.C.E. costs on assumptions about the needs for personnel and benefits, contractual services, travel, office space, equipment, and other costs necessary to conduct and support operations of the programs. The budgeted amounts that GAO identified from bureau records for conducting the ICM/A.C.E. programs are $277 million through fiscal year 2003. The obligated costs that GAO identified from bureau records for conducting the ICM/A.C.E. programs are $207 million through fiscal year 2001. $58 million of budgeted funds for the ICM/A.C.E. programs that GAO identified from bureau records were not obligated through fiscal year 2001. The ICM/A.C.E. program-related costs that GAO identified from bureau records for the 1998 dress rehearsal were $11 million budgeted and $9 million obligated.
The Countering Iran in the Western Hemisphere Act of 2012 directed the Secretary of State to conduct an assessment of the threats posed to the United States by Iran’s growing presence and activity in the Western Hemisphere, and to submit a strategy to address Iran’s growing hostile presence and activity in the Western Hemisphere. We identified 12 broad elements in the act that should be included in the strategy, such as descriptions of the presence, activities, and operations of Iran and its proxy actors in the Western Hemisphere; a description of the federal law enforcement capability and military forces in the Western Hemisphere that may organize to counter the threat posed by Iran and its proxy actors; and a plan to address any efforts by foreign persons, entities, and governments in the region to assist Iran in evading United States and international sanctions. In June 2013 State submitted a seven-page classified strategy report, an unclassified annex that summarizes policy recommendations, and an Intelligence Community Assessment at a higher security classification level, to fulfill the requirement in the act. State’s seven-page classified strategy report is an overview of Iran’s activities in the Western Hemisphere, its relationships with countries in the area, and U.S. efforts to address any concerns. It includes a summary of diplomatic and economic ties with the Western Hemisphere countries, noting that the key to Iran’s activities in the region has been Venezuela. The strategy report notes that the economic relationship between Iran and the Western Hemisphere is limited, with only 0.2 percent of Latin American exports going to Iran. It also describes the effect of U.S. economic sanctions and diplomatic pressure, which it says have been successful in preventing further Iranian involvement in the Western Hemisphere. In addition, the strategy broadly describes Iranian activities in the Western Hemisphere. 
The strategy report also describes five areas of focus for continuing to address Iranian threats in the Western Hemisphere. The areas of focus are to (1) expand existing efforts to share intelligence and information; (2) identify, disrupt, and dismantle criminal networks to enhance border security and strengthen law enforcement; (3) continue to take actions on sanctions and implementation of the Iran Freedom and Counter-Proliferation Act of 2012; (4) improve rule of law capacity-building initiatives in the Western Hemisphere; and (5) continue diplomatic pressure, including at the multilateral level in the United Nations and the International Atomic Energy Agency. In addition to the seven-page strategy report, State submitted two annexes. Annex A is an unclassified summary of policy recommendations and addresses a requirement in the act to submit an unclassified summary of policy recommendations. Annex A defines the desired end state of U.S. efforts in this area to be a decrease in Iranian presence and influence in the Western Hemisphere. It makes the assumption that Iran will continue its outreach to the Western Hemisphere but also concludes that Iranian influence in the Western Hemisphere is waning. Annex B is an Intelligence Community Assessment that was developed by ODNI at the request of State. According to ODNI, the Intelligence Community Assessment includes, among other things, a discussion of Iran’s presence in the Western Hemisphere, funding of cultural and religious centers, military-to-military activities, economic engagements, trade relationships, and diplomatic relations. In accordance with the act, officials in State’s Bureau of Western Hemisphere Affairs developed the strategy based on consultations with officials representing DOD, DHS, DOJ, Treasury, ODNI, and USTR in headquarters and also conducted some outreach to overseas posts and partner governments. 
The Countering Iran in the Western Hemisphere Act of 2012 required the Secretary of State to consult with the heads of all appropriate U.S. departments and agencies, including the Secretaries of Defense, Homeland Security, and Treasury, and the Attorney General, the Director of National Intelligence, and the U.S. Trade Representative. Officials we interviewed at headquarters representing all of these agencies noted that State sought their input into the strategy and requested their review prior to issuing the strategy. State also consulted with components of the intelligence community. According to an official at the Bureau of Western Hemisphere Affairs who helped draft the strategy, State did not conduct a specific data call to all of the U.S. posts in the Western Hemisphere to seek the posts’ input into the strategy. Instead, the official said State alerted embassies through e-mails from State leadership informing them of the development of the strategy. In addition, State reviewed information in cable reports from posts in the Western Hemisphere. Embassy officials at the four posts we visited were generally aware of the U.S. strategy on addressing Iranian activities in the Western Hemisphere. However, most of the embassy officials we interviewed were not at the embassy when State was developing the strategy and did not know if their predecessors had contributed to the strategy’s development. State officials reported meeting with officials from foreign embassies in Washington, D.C., including the embassies of Argentina, Canada, Mexico, and Brazil. Foreign government officials we met with during fieldwork in Mexico and Colombia said that they had not provided input into the U.S. strategy. Figure 1 provides a timeline of State’s collaboration efforts. According to State officials, the strategy represents a consensus view of key agencies, including DOD, DHS, DOJ, and the Intelligence Community. 
Of note, while DOD as a whole joined in this consensus, one part of DOD—the Southern Command—disagreed with the strategy’s characterization of the Iranian threat in the hemisphere at the time the strategy was prepared. In addition to collaboration regarding its strategy, State also collaborates with other key agencies (DOD, DHS, DOJ, ODNI, and Treasury) in headquarters about issues related to Iranian activities in the Western Hemisphere through interagency working groups and informal mechanisms. Officials representing all four U.S. embassies we visited (Argentina, Brazil, Colombia, and Mexico) also reported effective formal and informal collaboration efforts were in place to share information that could include activities of Iran and its proxies; the following are examples. Country team meetings: All four embassies we visited hold weekly country team meetings in which agencies and sections share information. Working groups: The Law Enforcement Working Group is the main venue for coordinating efforts to monitor and address potential Iranian activity in all four embassies we visited. The law enforcement working groups in all four locations included, at a minimum, all the relevant law enforcement and intelligence agencies (DOD, DHS, DOJ, and others in the Intelligence Community) at the embassy. At some embassies, these meetings also included components not traditionally associated with law enforcement, such as State’s Political and Economic sections. Other working groups also played important roles in addressing threats, sometimes including those emanating from Iran and its proxies. Informal collaboration and communication: Officials representing all different sections and agencies of all four embassies also reported that informal communications (e-mails, phone calls, in-person visits) or ad hoc meetings are sometimes the most important means to collaborate effectively and efficiently as issues arise. 
The three documents constituting the strategy contain information on Iranian activities in the Western Hemisphere; however, they do not contain all of the information identified in the act. We identified 12 distinct elements that the act states should be included in the strategy. Half of these elements request a description of specific Iranian activities and relationships, as well as foreign and U.S. capabilities to counter the threat posed by Iran in the Western Hemisphere. The other half request plans to address potential threats to the United States. As shown in figure 2, the strategy fully addresses 2 elements, partially addresses 6 elements, and does not address the remaining 4 elements. For the 12 strategy elements shown in table 1: of the 6 that the act states should include a description of Iranian activities or Latin American government capabilities to address Iranian activities, the strategy fully addresses 2 and partially addresses 4; of the 6 that the act states should include plans to address potential threats to U.S. interests, the strategy partially addresses 2 and does not address 4. State and ODNI officials reported four reasons why the strategy does not fully address the information that the act stated should have been included. First, State officials informed us that they only included information on threats posed by Iran and Hizballah to the United States, based on State’s interpretation of the act. According to State officials, State interpreted the law to mean that if the Secretary of State deemed the Iranian or Hizballah activity a threat to the United States, State would be required to address it in its submission to the relevant congressional committees; if the Secretary of State did not deem it to be a threat to the United States, State would not be required to address it. 
ODNI officials also told us that they did not report on elements for which they had no information or for which the information available to them indicated there was no relevant Iranian activity. In the strategy, State and ODNI did not note elements for which they sought but did not find relevant information. Second, ODNI officials reported several reasons why the Intelligence Community Assessment only partially addressed some of the elements. The officials noted that Intelligence Community analysts regularly provide a range of products to policymakers on the topic, including more tactical information than is included in Intelligence Community Assessments, which have helped policymakers as they developed the broader strategy. Third, State officials informed us that State's February 2010 Executive Secretariat Memorandum requires all reports to Congress to be limited to five pages and that State therefore issued its classified strategy and unclassified summary of policy recommendations to meet this five-page reporting limitation. According to State officials, this requirement limited their ability to include information to comprehensively address all of the elements identified in the act. Fourth, ODNI officials informed us that the Intelligence Community Assessment did not respond to some of the elements identified in the law because these elements related to policy matters and thus were not appropriate for the Intelligence Community Assessment to address. While State was not required to include GAO's six desirable characteristics for national strategies in its strategy to address Iran's activity in the Western Hemisphere, it included some but not all of these characteristics.
These desirable characteristics are (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) goals, subordinate objectives, activities, and performance measures; (4) resources, investments, and risk management; (5) organizational roles, responsibilities, and coordination; and (6) integration and implementation. Ideally, a national strategy should contain all of these characteristics. Including these characteristics in a national strategy also enhances its usefulness as guidance for resource and policy decision makers and helps ensure accountability. As shown in table 2, we found that the strategy for addressing Iranian activities in the Western Hemisphere fully addresses one desirable characteristic of national strategies, partially addresses three more, and does not address the remaining two. The strategy fully addresses problem definition and risk assessment. According to ODNI officials, the Intelligence Community Assessment goes into significant detail describing Iranian activities and assessing the risks to U.S. interests. The strategy partially addresses three other desirable characteristics of national strategies: purpose, scope, and methodology; goals, subordinate objectives, activities, and performance measures; and organizational roles, responsibilities, and coordination. The classified strategy report discusses the purpose of the strategy as a response to the act and a multiagency effort to address Iranian activities in Latin America, and the strategy also briefly discusses the various agencies consulted in its development. However, the information contained in the strategy is too general to fully address the methodology characteristic. The strategy discusses an ideal "end-state," major goals, subordinate objectives, and specific activities to address Iranian activities in the Western Hemisphere. However, it does not establish clear desired results and priorities, specific milestones, or outcome-related performance measures.
It also does not discuss any limitations on performance measures that may exist, nor does it address plans to obtain better data or measurements. The strategy identifies the organizations involved in achieving the desired results and the mechanisms for coordinating their efforts to address Iranian activities in the Western Hemisphere, but it does not clarify those organizations' specific roles and responsibilities. The strategy does not address two desirable characteristics of national strategies: resources, investments, and risk management; and integration and implementation. When asked, State officials said that the strategy outlines ongoing initiatives and programs that address Iranian activities and does not require any additional investments or plans to implement these initiatives. The United States continues to face a range of threats to its national security, among them threats emanating from overseas terrorist organizations and their state sponsors—including Iran. Congress has expressed serious concerns about Iranian activities in the Western Hemisphere, including reported involvement in the attempt to assassinate the Saudi Ambassador to the United States. To more fully understand the nature and extent of Iranian activities in the Western Hemisphere, Congress required State to assess and report on the threat posed to the United States by Iran's presence and activity in the Western Hemisphere and to develop a strategy for addressing Iran's hostile presence and activity. While State's strategy report and the accompanying Intelligence Community Assessment include information about Iranian activities in the Western Hemisphere, some information that Congress stated should be included was either partially addressed or not addressed. State and ODNI officials provided us reasons why they did not fully address some of the information Congress called for in the strategy.
However, the strategy did not include State's explanation, which may have contributed to some of the concerns expressed by Members of Congress. Providing additional information that addresses the topics not covered by the strategy, including the plans outlining interagency and multilateral coordination of targeted security efforts, could help Congress understand the basis for State's conclusions and better inform policymakers as they continue to monitor the potential threats posed by Iranian activities in the Western Hemisphere. For elements identified in the Countering Iran in the Western Hemisphere Act of 2012 that were not fully addressed in the strategy, we recommend that the Secretary of State provide the relevant congressional committees with information that would fully address these elements. In the absence of such information, State should explain to the congressional committees why it was not included in the strategy. We provided a draft of this report to State, DOD, DHS, DOJ, ODNI, Treasury, and USTR for comment. DOD, DOJ, ODNI, Treasury, and USTR had no comments. DHS provided a technical comment, which we addressed as appropriate. In its written comments, reproduced in appendix II along with our responses to specific points, State generally disagreed with our assessment of the extent to which the strategy addressed the elements in the act. State indicated that it has provided information and briefed Congress on these matters on a regular basis and agreed to continue to do so. In support of its position, State noted that our report catalogued matters that Congress stated should be included in the strategy but that these were not specific reporting requirements. In addition, State explained that it did not address matters where the consensus of the intelligence community was that there was not an identifiable threat to counter. According to State, most of the elements we identified as not being adequately addressed in the strategy fell into this category.
We acknowledge that State did not report on elements for which it had no information or for which available information indicated there was no relevant Iranian threat, and that providing all relevant existing guidance, plans, and initiatives in its strategy would have made the report longer than the five pages allowed under State's guidance for reports to Congress. However, we maintain that the strategy does not include all of the elements that the law stated should be included. Specifically, it neither discusses nor provides Congress with an explanation for the exclusion of elements called for by the act for which State and ODNI did not find relevant threat information. It also does not include summaries of existing agency documents that State officials said would address some elements in the act. Providing such information could have more fully informed Congress regarding State's assessment of the threat posed by Iranian activities in the Western Hemisphere and U.S. government efforts to address it. We are sending copies of this report to the appropriate congressional committees; the Secretaries of State, Defense, Homeland Security, and the Treasury; the Attorney General; the U.S. Trade Representative; and the Director of National Intelligence. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
In December 2012, Congress enacted the Countering Iran in the Western Hemisphere Act of 2012, which, among other things, required the Department of State (State) to assess the threats posed to the United States by Iran's activities in the Western Hemisphere and submit to the relevant congressional committees the results of the assessment and a strategy to address these threats. In this report, we examine (1) State's collaboration with other key U.S. agencies and foreign partners to address Iranian activities in the Western Hemisphere, (2) the extent to which the strategy for addressing Iranian activities in the Western Hemisphere included elements identified in the act, and (3) the extent to which the strategy included desirable characteristics of national strategies. To analyze State's collaboration with key U.S. agencies and foreign partners to address Iranian activities in the Western Hemisphere, we reviewed agency documents and interviewed U.S. and foreign officials. We interviewed officials from the Departments of State, Defense (DOD), Homeland Security (DHS), Justice (DOJ), and the Treasury (Treasury); the Office of the Director of National Intelligence (ODNI); and the Office of the U.S. Trade Representative (USTR). We also interviewed officials at the U.S. embassies in Argentina, Brazil, Colombia, and Mexico, as well as host government officials in Colombia and Mexico, about input they may have provided on the strategy. We chose these countries based on a number of factors, including whether they had experienced instances of Iran-linked terrorist attacks, their bilateral relationships with the United States, and our ability to meet with host governments. The results of our interviews with officials at these four locations are not generalizable to all countries in the Western Hemisphere.
To examine the extent to which the strategy to address Iranian activities in the Western Hemisphere included elements identified in the Countering Iran in the Western Hemisphere Act of 2012, we analyzed State's submission of the strategy, including the classified strategy report, the unclassified summary of policy recommendations, and the Intelligence Community Assessment. We identified 12 elements that Congress requested in the act, as the act noted specific matters that should be included in the strategy. We analyzed documents and interviewed State and ODNI officials to determine how, if at all, the strategy addressed the elements in the act. To do so, two analysts conducted separate assessments of all three strategy documents against the 12 elements we identified in the act. They reached agreement on the extent to which the strategy fully addressed, partially addressed, or did not address each element. A manager reviewed the analysis, and the three individuals reached a final consensus. A senior methodologist reviewed the analysis for completeness and balance. Coding worked as follows: the strategy "fully addresses" an element when it explicitly cites all characteristics of the element, even if it lacks further details. The strategy "partially addresses" an element when it explicitly cites some but not all characteristics of the element. Within our designation of "partially addresses," there is wide variation between addressing most of the characteristics of an element and addressing few of them. The strategy "does not address" an element when it does not explicitly cite or discuss any characteristics of the element, or when any implicit references are too vague or general. For some instances in which we could not review portions of the documents that make up the strategy, we used testimonial information provided by agency officials; we noted such instances.
We asked to review the entire Intelligence Community Assessment but were unable to do so because of concerns over its security classification. We reviewed some excerpts and interviewed the ODNI officials who prepared the assessment regarding its contents. In three instances, ODNI officials told us that information addressing an element included in the act was included in the Intelligence Community Assessment, but they did not provide supporting documentation. In those instances, we have reflected the information provided by ODNI officials but noted that we were not able to independently verify their statements because of the lack of documentation. We also examined the strategy to determine the extent to which it incorporated desirable characteristics of national strategies previously identified by GAO. Similar to our analysis of the extent to which the strategy addressed elements identified in the act, we analyzed all three documents that make up the strategy and assessed how, if at all, the strategy addressed the desirable characteristics. Two analysts conducted separate assessments of all three strategy documents against the six desirable characteristics of national strategies. They reached agreement on the extent to which the strategy fully addressed, partially addressed, or did not address each characteristic. A manager reviewed the analysis, and the three individuals reached a final consensus. A senior methodologist reviewed the analysis for completeness and balance. Coding worked as follows: the strategy "fully addresses" a desirable characteristic of a national strategy when it explicitly cites all aspects of the characteristic, even if it lacks further details. The strategy "partially addresses" a desirable characteristic when it explicitly cites some but not all aspects of the characteristic.
Within our designation of "partially addresses," there is wide variation between addressing most of the aspects of a characteristic and addressing few of them. The strategy "does not address" a characteristic when it does not explicitly cite or discuss any aspect of the characteristic, or when any implicit references are too vague or general. As discussed above, for some instances in which we could not review portions of the documents that make up the strategy, we used testimonial information provided by agency officials; we noted such instances. We conducted this performance audit from January 2014 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

1. Our report notes that Congress requested that the strategy include "a plan to address resources, technology, and infrastructure to create a secure United States border." Therefore, we evaluated the extent to which the information sought by Congress was included in the strategy. While other documents may contain the information sought by Congress, it was not included or summarized in State's strategy report.

2. We agree with State that its report does not contain a plan, as sought by Congress, to address any efforts by foreign persons, entities, and governments in the region to assist Iran in evading U.S. and international sanctions. While State provided information regarding past activities to enforce sanctions, a plan was not included in the strategy.

3. While we have included some information on security initiatives and assistance programs in our report based on meetings with State officials, State's original strategy report to Congress did not contain the plan sought by Congress.

4. As we discuss in our report, State and the Office of the Director of National Intelligence (ODNI) did not address elements for which they sought but did not find relevant information. However, State's original strategy report did not indicate that the plan sought by Congress was not provided because of the lack of a clear threat. Explicitly indicating why the plan was not included could have helped better inform Congress.

5. ODNI—which was responsible for drafting the Intelligence Community Assessment referred to by State—informed us that the Intelligence Community Assessment included some but not all of the various descriptions identified in the act. As such, our finding is based in part on the views of ODNI.

6. We maintain that the strategy only partially addresses section 5(b)(6)(B) of the act, acknowledging that it includes a reference to some citizen security initiatives. The act stated that the strategy should include a plan, but State did not provide the citizen security initiatives' documents or a summary of its plan in the strategy.

7. Our report states that the act requested "a plan to support United States efforts to designate persons and entities in the Western Hemisphere for proliferation activities and terrorist activities relating to Iran." Therefore, we evaluated the extent to which the information sought by Congress was included in the strategy report submitted to Congress, not whether it was publicly available on State's or Treasury's website. While other documents may contain the information sought by Congress, it was not included in the strategy.
In addition to the contact named above, Jason Bair (Assistant Director), Victoria Lin (Analyst-in-Charge), Brian Hackney, Ashley Alley, David Dayton, and David Dornisch made key contributions to this report. Oziel Trevino and Sarah Veale provided technical assistance.
The activities of Iranian government elements, such as a 2011 attempt to assassinate the Saudi Ambassador in the United States, could pose a threat to U.S. national security. Congress enacted the Countering Iran in the Western Hemisphere Act of 2012, requiring State to assess the threats posed to the United States by Iran's presence and activity in the Western Hemisphere and to develop a strategy to address those threats. This report examines (1) State's collaboration with other key U.S. agencies and foreign partners to address Iranian activities in the Western Hemisphere, (2) the extent to which the strategy addresses elements identified in the act, and (3) the extent to which the strategy includes desirable characteristics of national strategies. GAO analyzed agency documents and interviewed agency officials in Washington, D.C.; Argentina; Brazil; Colombia; and Mexico. GAO chose these countries based on factors such as past instances of Iran-linked terrorist attacks and their bilateral relationships with the United States. The Department of State (State) uses a variety of mechanisms to collaborate with interagency partners and host governments to address activities of Iran in the Western Hemisphere. In developing the strategy, which includes an Intelligence Community Assessment developed by the Office of the Director of National Intelligence (ODNI), State's Bureau of Western Hemisphere Affairs worked with other U.S. agencies at the headquarters level and relied on cable reporting from posts. According to State officials, the strategy represents a consensus view of key agencies. While the Department of Defense (DOD) as a whole joined in this consensus, one part of DOD—the Southern Command—disagreed with the strategy's characterization of the Iranian threat at the time the strategy was prepared. State also uses venues such as country team meetings and law enforcement working groups to address Iranian activities. 
While the strategy contains information on Iranian activities in the Western Hemisphere, it does not contain all the information that the Countering Iran in the Western Hemisphere Act of 2012 stated it should include. GAO identified 12 distinct elements that the act stated should be included in the strategy. As shown in the figure, the strategy fully addresses 2, partially addresses 6, and does not address 4 of 12 elements. For example, the strategy contains information describing the operations of Iran, but does not include a plan to address U.S. interests to ensure energy supplies from the Western Hemisphere are free from foreign manipulation. State and ODNI officials reported several reasons why the strategy may not fully address the information identified in the law. For example, State said it only included information in the strategy if it deemed the activity identified in the law to be a threat to the United States. Note: ODNI officials did not provide documentation for three of the elements that were fully or partially addressed in the Intelligence Community Assessment. State is not legally required to address the six desirable characteristics of effective national strategies GAO has identified, but the strategy does include some of them. The strategy fully addresses problem definition and risk assessment. It partially addresses purpose, scope, and methodology; goals, subordinate objectives, activities, and performance measures; and organizational roles, responsibilities, and coordination. The strategy does not, however, address resources, investments, and risk management; and integration into other strategies and implementation by other levels of government. GAO recommends that the Secretary of State provide the relevant congressional committees with additional information that would fully address the elements in the act. In the absence of such information, State should explain why it was not included in the strategy. 
State generally disagreed with our assessment of the extent to which the strategy addressed the elements in the act but agreed to continue to provide Congress with information regarding Iranian activities in the Western Hemisphere.
Following the terrorist attacks of 2001, Congress and the executive branch took numerous actions aimed explicitly at establishing a range of new measures to strengthen the nation’s ability to identify, detect, and deter terrorism-related activities and protect national assets and infrastructure from attack. One theme common to nearly all these efforts was the need to share timely information on terrorism-related matters with a variety of agencies across all levels of government. The ability to share security-related information can unify the efforts of federal, state, and local government agencies in preventing or minimizing terrorist attacks. Section 1016 of the Intelligence Reform Act, as amended by the 9/11 Commission Act, required the President to take action to facilitate the sharing of terrorism-related information by creating an information sharing environment—what has become the ISE. Consistent with the Intelligence Reform Act, the Program Manager intends for the ISE to provide the means for sharing terrorism information in a manner that—to the greatest extent practicable—ensures a decentralized, distributed, and coordinated environment that builds upon existing systems and leverages ongoing efforts. Under the act, the President is to designate a Program Manager to, among other things, plan for, oversee implementation of, and manage the ISE. The act also established an Information Sharing Council to assist the President and the Program Manager in carrying out these duties. Furthermore, the act required the President, with the assistance of the Program Manager, to submit to Congress a report containing an implementation plan for the ISE not later than 1 year after the date of enactment (enacted December 17, 2004) and specified elements to be included in this plan. 
These elements include, among other things, a description of the function, capabilities, resources, and conceptual design of the ISE; budget estimates; metrics and performance measures; and delineation of ISE stakeholder roles. The act also required the submission of annual performance management reports, beginning not later than 2 years after enactment, and annually thereafter, on the state of the ISE and on information sharing across the federal government. In April 2005, the President designated a Program Manager responsible for information sharing across the federal government, in accordance with the Intelligence Reform Act. In December 2005, the President issued a memorandum to implement measures consistent with establishing and supporting the ISE. The memorandum set forth information sharing guidelines, such as defining common standards for how information is to be acquired, accessed, shared, and used within the ISE and standardizing the procedures for handling sensitive but unclassified information. The memorandum also directed the heads of executive departments and agencies to actively work to promote a culture of information sharing within their respective agencies and reiterated the need to leverage ongoing information sharing efforts in the development of the ISE. In November 2006, the Program Manager issued an ISE implementation plan to provide an initial structure and approach for ISE design and implementation. The plan incorporated the guidelines in the President’s December 2005 memorandum as well as elements spelled out in the Intelligence Reform Act. For example, the plan included steps toward developing standardized procedures for handling sensitive but unclassified information as well as protecting information privacy, as called for in the President’s information sharing guidelines. Under the plan, the ISE would consist of five “communities of interest”—homeland security, law enforcement, foreign affairs, defense, and intelligence. 
In addition, in August 2007, the Program Manager issued the initial version of an enterprise architecture framework (EAF), which is intended to support ISE implementation efforts. In October 2007, the President issued the National Strategy for Information Sharing. The strategy focuses on improving the sharing of homeland security, terrorism, and law enforcement information related to terrorism within and among all levels of government and the private sector. The strategy notes that the ISE is intended to enable trusted partnerships among all levels of government in order to more effectively detect, prevent, disrupt, preempt, and mitigate the effects of terrorism against the United States. Further, according to the strategy, these partnerships should enable the trusted, secure, and appropriate exchange of terrorism-related information across the federal government; to and from state, local, and tribal governments, foreign allies, and the private sector; and at all levels of security classifications. The strategy reaffirmed that stakeholders at all levels of government, the private sector, and foreign allies play a role in the ISE. The strategy also outlined some responsibilities for ISE stakeholders at the state, local, and tribal government levels. In July 2009, the administration established the Information Sharing and Access Interagency Policy Committee (ISA IPC) within the Executive Office of the President to, among other things, identify information sharing priorities going forward. The committee—with representation of participating ISE agencies and communities—is intended to provide oversight and guidance to the ISE. In June 2010, the President appointed the current Program Manager, and the White House designated the White House Senior Director for Information Sharing Policy and the Program Manager co-chairs of the ISA IPC.
The ISA IPC is responsible for advising the President and Program Manager in developing policies, procedures, guidelines, roles, and standards necessary to establish, implement, and maintain the ISE. Also, pursuant to the Intelligence Reform Act, the head of each department or agency that participates in the ISE is required to ensure compliance with information sharing policies, procedures, guidelines, rules, and standards. Further, OMB provides budgetary, programmatic, and architecture policy guidance to ISE agencies; prepares the President’s budget; and measures performance. The ISE is not a traditional, dedicated information system, according to the Program Manager. Rather, it is an interrelated set of policies, processes, and systems intended to allow ISE agencies to access and share information in a decentralized, distributed, and coordinated environment that builds upon existing systems and leverages ongoing efforts. The Program Manager also noted that the ISE is not a program in the traditional sense with a finite set of requirements, deliverables, and milestones and an agreed-to budget and manpower resources. Nevertheless, it is an effort that receives government funding and can be reviewed using program and project management principles. In June 2008, we reported that the Program Manager and stakeholder agencies had completed a number of tasks outlined in the 2006 implementation plan, including, among other things, the development of proposed common terrorism information sharing standards—a set of standard operating procedures intended to govern how information is to be acquired, accessed, shared, and used within the ISE—and the development of procedures and markings for sensitive but unclassified information to facilitate the exchange of information among ISE participants. 
Departments and agencies are in the process of determining how they will implement this guidance (once implemented, this effort could help improve access to information and therefore improve information sharing). Nevertheless, we reported that the action items in the Program Manager’s June 2006 implementation plan did not address all of the activities that must be completed to implement the ISE. For example, we noted that work remained in defining the ISE’s scope and in determining all terrorism-related information that should be part of the ISE. Moreover, we found that the desired results to be achieved by the ISE—that is, how information sharing is to be improved, the individual projects and initiatives to achieve these results, and specific milestones—had not yet been determined. Thus, as previously discussed, we recommended, among other things, that the Program Manager more fully define the scope and specific results to be achieved by the ISE along with the key milestones and individual projects or initiatives needed to achieve these results. The Program Manager and agencies have taken some steps to address this recommendation but have not yet fully addressed it, as we discuss later in this report. The sharing of terrorism-related information remains on our high-risk list. Our work in this area has consistently focused on how well the federal government is sharing information among federal agencies as well as with state, local, tribal, private sector, and international partners. As such, our focus has been on progress the federal government has made in standing up the ISE. In February 2011, we reported that while the federal government has continued to make progress in sharing terrorism-related information among its many partners, it does not yet have a fully functioning ISE in place. 
Since we issued our 2008 report, the Program Manager and agencies have established a discrete set of goals and undertaken activities to guide development and implementation of the ISE, but these actions do not fully address our recommendations or provide the comprehensive road map that we called for in our report. For example, the Program Manager and agencies have not yet fully defined what the ISE is expected to achieve and contain, identified the incremental costs necessary to implement the ISE, or fully developed procedures to show what work remains and related milestones to provide accountability for results. The administration has taken steps to strengthen the ISE governance structure to help guide the development and implementation of the ISE, but it is too early to gauge the structure’s effectiveness. In November 2006 and in accordance with the Intelligence Reform Act, the Program Manager submitted an ISE implementation plan to Congress that, according to the plan, was intended to help guide development of the ISE for a 3-year period. The plan addressed initial actions for defining the ISE as well as agency responsibilities and time frames. However, as we discussed in our 2008 report, the plan did not include some important elements needed to develop and implement the ISE. Work remained in, among other things, defining and communicating the scope and desired results to be achieved by the ISE, specific milestones to achieve the results, and the individual projects and execution sequence needed to achieve these results and implement the ISE. 
Subsequently, in part based on recommendations made in our 2008 report, the Program Manager worked with the five key agencies to create a new plan to guide development of the ISE, which they called an ISE “framework.” Specifically, the framework identified four goals for the ISE, which were to (1) create a culture of sharing; (2) reduce barriers to sharing; (3) improve information sharing practices with federal, state, local, tribal, and foreign partners; and (4) institutionalize sharing. The framework also identified 14 specific subgoals or activities agencies were to pursue. Some of these activities were intended to institutionalize information sharing practices into agency operations, such as establishing information sharing and incentive programs for federal employees. For example, DHS, DOJ, and DOD, as well as ODNI, have made information sharing a factor in their incentives programs by offering employees awards based on their contributions to information sharing and collaboration practices. The framework also cataloged agencies’ ongoing information sharing initiatives to leverage their benefits across the government, consistent with the Intelligence Reform Act, including the following:

 The Nationwide Suspicious Activity Reporting Initiative. This initiative builds on what state and local law enforcement and other agencies have been doing for years—gathering information regarding behaviors and incidents indicative of criminal activity that may be precursors to terrorism—and establishes a standardized process to share this information among agencies to help detect and prevent terrorism-related activity. In February 2010, DOJ became the lead agency for the initiative and established a program management office to support its development in cooperation and coordination with DHS and the Federal Bureau of Investigation.

 The national network of fusion centers. This initiative is designed to leverage the fusion centers that all 50 states and some major urban areas have established to address gaps in terrorism-related information sharing that the federal government cannot address alone and provide a conduit for information sharing within each state, among other things. In 2010, federal, state, and local officials from across the country launched the first nationwide assessment of fusion center capabilities, with the goal of helping centers close gaps so they have a consistent baseline of information sharing capabilities. Information from this assessment is to be used to develop strategies and realign resources to close those gaps going forward.

 ISE privacy and civil liberties. ISE stakeholders have made an effort to strengthen privacy, civil rights, and civil liberties across all sectors of the ISE. According to the July 2010 annual report, 9 of 15 ISE stakeholders had implemented ISE privacy policies. These policies are intended to ensure that privacy and other legal rights of Americans are protected in the development and use of the ISE.

For the subgoals in the framework, the Program Manager established a process to gauge and track agencies’ progress in implementing these subgoals and a related set of performance measures. The Program Manager included the framework in both the June 2009 and July 2010 annual progress reports to Congress. As discussed later in this report, the framework and annual reports to Congress did not specifically address what work remained in completing the initiatives or related milestones. The ISE framework has served as a plan to guide development of the ISE and its discrete set of 14 subgoals. The framework includes a number of elements that our work has shown are important for developing and implementing broad, crosscutting initiatives like the ISE, such as defined goals, objectives, activities, and metrics.
However, as discussed in more detail later in this report—in part because the framework is limited to these 14 subgoals and does not define what the fully functioning ISE is to achieve and include—it does not provide the comprehensive road map that is needed to further develop and implement the ISE going forward. In April 2010, the White House Senior Director for Information Sharing Policy acknowledged that the ISE framework is a set of 14 disparate activities that do not constitute a governmentwide initiative to share terrorism information, as envisioned by the Intelligence Reform Act. According to the Program Manager, the role of the current framework in guiding further development of the ISE and the extent to which other activities will be integrated into the framework have not yet been determined. Therefore, it is unclear how, if at all, the framework and its related goals and activities will be used to guide future development of the ISE. More than 6 years after enactment of the Intelligence Reform Act and initial efforts to create the ISE, there is not a clear definition of what the ISE is intended to achieve and include. The Program Manager and ISE agencies have ongoing efforts to more fully define this “end state” vision, which is a key next step for ISE development, by the end of summer 2011. After this vision is defined, it will be important for the Program Manager and ISE agencies to ensure that all relevant agency initiatives are leveraged by the ISE to improve information sharing across all communities and to define the incremental costs related to implementing the ISE so agencies can determine how to fund future investments. The Program Manager has enhanced monitoring of ISE initiatives, but additional actions could help demonstrate progress and provide accountability for results. 
In addition to Intelligence Reform Act requirements, our prior work has found that these activities help to provide a road map for responsible parties in developing and implementing broad, crosscutting initiatives like the ISE. Such actions are also consistent with criteria we use to assess whether agencies have made progress to resolve past terrorism-related information sharing problems, thereby reducing the risk that these problems pose to homeland security. A road map for the ISE should identify key next steps for ISE development and start with a clear definition of what the ISE is intended to achieve and include—or the “end state” vision. In 2008, we reported that while the Program Manager had completed a plan with an initial structure and approach for ISE design and implementation, he had not yet determined the desired results to be achieved by the ISE, and we recommended that he do so, among other things. The Program Manager has also acknowledged the importance of developing an end state vision for the ISE and noted that he is doing so as part of efforts to update the 2007 National Strategy for Information Sharing. The Program Manager said that this update will drive future ISE implementation efforts and will help individual agencies across all five communities adapt their information sharing policies, related business processes, architectures, standards, and systems to effectively operate with the ISE. According to the Program Manager, the end state vision will define the current state of the ISE and the future vision to be achieved by agencies as they work to further develop and implement the ISE. DHS and DOJ officials we contacted also cited the importance of developing an end state vision to assist in guiding development and implementation of the ISE. For example, DOJ officials stated that a defined end state would facilitate development and implementation of common goals going forward. 
The Program Manager has publicly acknowledged the need to accelerate ISE progress. To inform efforts to define an end state vision, the Program Manager has been soliciting ideas and input from ISE stakeholder agencies. According to the Program Manager, the updated National Strategy for Information Sharing and the ISE end state vision have not been finalized, and therefore it is premature to speculate on questions such as changes in program or investment priorities as well as information sharing gaps and challenges to be addressed. In June 2011, the Program Manager said that the national strategy will be updated in the near future, but he did not provide a specific date. According to the Program Manager, the end state vision will be a snapshot at a point in time because as threats continue to evolve, the ISE will need to evolve as well. The Program Manager noted that after development of the end state vision is completed, supporting implementation plans will be needed to help guide achievement of the vision, including plans that define what activities and initiatives will be needed to achieve the end state and to guide development and implementation of the ISE. Such plans would be consistent with our call for a road map, if they contain key ingredients such as roles, responsibilities, and time frames for these activities, among other things. Further, as we discuss later in this report, the process of defining an EA for the ISE—and agencies’ associated segment architectures that support their individual ISE activities—could help the Program Manager and agencies in their efforts to define the current operational and technological capabilities within the ISE, the future capabilities needed, and a plan to transition between the two. 
The September 11, 2001, terrorist attacks exposed that the five ISE communities—homeland security, law enforcement, foreign affairs, defense, and intelligence—were insulated from one another, which resulted in gaps in the sharing of information across all levels of government. Before the attacks, the overall management of information sharing activities among government agencies and between the public and private sectors lacked priority, proper organization, coordination, and facilitation. Consistent with the Intelligence Reform Act, the ISE is intended to provide the means for sharing terrorism information across the five communities in a manner that, among other things, builds upon existing systems and leverages ongoing efforts. To date, the ISE has primarily focused on the homeland security and law enforcement communities and related sharing between the federal government and state and local partners, in part to align with information sharing priorities. OMB ISE programmatic guidance shows that ISE activities have been primarily focused on sharing within the homeland security and law enforcement communities and with domestic partners—such as state and local law enforcement agencies. This guidance—developed in collaboration with ISE leadership—outlines the White House’s priorities for the ISE and those that agencies are to focus on and align resources and investments to during a given fiscal year. 
For fiscal year 2012, OMB’s programmatic guidance identifies the following priorities, which are primarily focused on sharing information between the federal government and state and local partners:

 building a national integrated network of fusion centers,
 continuing implementation of the Nationwide Suspicious Activity Reporting Initiative,
 establishing Sensitive but Unclassified/Controlled Unclassified Information network interoperability,
 improving governance of the Classified National Security Information Program, and
 advancing the implementation of controlled unclassified information policy.

Officials from all five communities generally agreed that ISE activities undertaken to date have been primarily focused on sharing within the homeland security and law enforcement communities—primarily domestic sharing between the federal government and state, local, and tribal partners. According to DOJ officials, this initial focus was appropriate and allowed the Program Manager to leverage agencies’ ongoing efforts to share terrorism-related information. The officials noted that by focusing on a select set of initiatives—such as the Nationwide Suspicious Activity Reporting Initiative and the national network of fusion centers—the Program Manager was able to make progress toward implementing ISE priorities. We recognize that recent homeland security incidents and the changing nature of domestic threats make continued progress in improving sharing between federal, state, and local partners critical. However, consistent with the Intelligence Reform Act, the ISE is intended to provide the means for sharing terrorism information across all five communities. The Program Manager and ISE agencies have not yet ensured that initiatives within the foreign affairs, defense, and intelligence communities have been fully leveraged by the ISE to enhance information sharing within and across all communities.
According to State officials, the department shares terrorism-related information with other agencies through a variety of efforts and initiatives related to national and homeland security. The officials noted that most of the initiatives are non-ISE efforts, meaning that they did not originate in the Program Manager’s office. The officials also noted that the department has only been asked to provide one kind of terrorism-related information as part of one ISE initiative related to Suspicious Activity Reporting and complied with this request. According to the Program Manager, State also possesses information about entrants to the country that could be valuable to the ISE. However, in April 2011, State officials said that the Office of the Program Manager has not contacted the department’s coordinator for the ISE to request information on programs or initiatives related to people entering the country. Therefore, the Program Manager and State have not determined if this information could be used to benefit other ISE communities. DOD officials also said that the department is undertaking activities outside of the ISE, such as efforts to develop interagency agreements between DOD and the Federal Bureau of Investigation for the purpose of sharing terrorism-related information. According to DOD officials, this effort could be part of the ISE if the information addressed within these agreements is consistent with the ISE’s established standards, among other things. In addition, the December 25, 2009, attempted terrorist attack highlighted the importance of effective information sharing within the intelligence community and demonstrated the potential consequences if information is not shared in a manner that facilitates its use in analysis, investigations, and operations.
The intelligence community’s efforts to better share classified information among intelligence agencies are highlighted in the 2010 annual report, but the report does not discuss the extent to which these initiatives are being coordinated within and among the five communities or how the ISE could leverage their benefits. For example, the report discusses an initiative that will allow intelligence community personnel to search for or discover information, including terrorism-related information, across all agencies within the community. According to the Program Manager, this ODNI initiative—while so far limited to the intelligence community—should be highlighted as a best practice across the ISE. However, the 2010 report does not discuss whether and how these technological advances could be used to benefit other communities or how other communities are implementing this best practice. Also, according to the Program Manager, the ISE has generally left the sharing of Top Secret and higher information to ODNI and intelligence community agencies since they manage most of this information. He said that this was unlikely to change significantly in the future. Ensuring that the intelligence community is fully involved in developing the ISE could help resolve the problems the September 11 attacks exposed—especially that critical information was contained in agencies’ individual stovepipes and not shared. Further, in part because of the focus on domestic sharing with the homeland security and law enforcement communities, not all agencies have been similarly engaged in building the ISE or have had their initiatives leveraged as discussed above. Officials from the five key agencies said that they have actively participated in ISA IPC meetings and have had opportunities to provide feedback on emerging policy decisions. They also noted that when appropriate, they participate in the development and implementation of OMB priorities and initiatives set forth by the ISA IPC and Program Manager.
However, State, DOD, and ODNI officials also reported that development of the ISE has had limited focus to date on information sharing within and among the foreign affairs, defense, and intelligence communities. State officials said that the ISE priorities established to date generally do not engage State’s mission because the initiatives are primarily focused on sharing with state and local partners, while State’s mission focuses on building relationships within the foreign affairs community. Similarly, DOD officials said they have been engaged in some ISE priorities—such as implementing the Nationwide Suspicious Activity Reporting Initiative—but that DOD has not been tasked to lead any new terrorism-related information sharing initiatives. In addition, ODNI officials said that because many ISE activities are focused on efforts with state, local, tribal, and private sector partners, the intelligence community’s participation in those activities is limited as the intelligence community, by mission and statute, primarily focuses on foreign intelligence. The Program Manager acknowledged that the most visible outcomes of the ISE have been in the law enforcement and homeland security communities. However, he noted that officials from the Office of the Program Manager have worked with State to standardize terrorism-related information sharing agreements with foreign governments; worked with DOJ and DOD to develop information technology standards that allow different agencies to exchange information; and worked with ODNI and the intelligence community to develop terrorism-related information products for state, local, and tribal governments. The Program Manager also noted that State, DOD, and ODNI are participants in the ISA IPC and have been afforded opportunities to help set ISE programmatic priorities and participate in discussions and decisions about where to strategically prioritize scarce resources.
Nevertheless, the Program Manager has also recognized the need to enhance and extend partnerships across all five communities and said that significant outreach to ISE agencies has been under way since he became Program Manager in July 2010. In addition to his outreach efforts, the Program Manager has suggested that specific agencies—such as State, DOD, and ODNI—could also develop proposals for how their information sharing activities could be better integrated into the ISE. Consistent with the Intelligence Reform Act, the ISE is intended to provide the means for sharing terrorism information across all five communities in a manner that builds upon existing systems and leverages ongoing efforts. After the end state vision is defined, taking actions to ensure that all relevant information sharing initiatives across the five communities are fully leveraged could help the Program Manager and ISE agencies enhance information sharing governmentwide and better enable the federal government to share information that could deter or prevent potential terrorist attacks. Section 1016 of the Intelligence Reform Act required the President, with the assistance of the Program Manager, to include, as part of the ISE’s implementation plan, a budget estimate that identified the incremental costs associated with designing, testing, integrating, deploying, and operating the ISE. In June 2008, we reported that the initial ISE Implementation Plan issued in 2006 did not provide a budget estimate that identified incremental costs in accordance with the act, but that the Program Manager indicated that steps to develop such an estimate would be taken in the future. At that time, a budget estimate that identified incremental costs had not been developed, in part, because the ISE was in such an early stage of development and it would have been difficult for agencies to know what to include in developing such a cost estimate.
The Program Manager, in the 2009 ISE annual progress report, also identified the need to coordinate investments for terrorism-related initiatives as both a priority and a challenge, but noted that limited progress had been made in defining the resources needed to implement the ISE. The 2010 annual progress report noted that the Office of the Program Manager had developed a process that is intended to link ISE initiatives and performance measures to investment decisions. However, the Program Manager could not identify the level of investments that have been dedicated to the ISE to date. The Program Manager also could not identify the future incremental investments needed to develop and implement the ISE, in part because the Program Manager and key agencies had not yet determined what the ISE is to achieve and include. Officials from the Office of the Program Manager said they had not prepared estimated costs for the ISE and that there has never been a stand-alone budget for the program. The officials said that because the ultimate goal of the ISE is to become an institutionalized practice among agencies, separating or designating funding for ISE-related activities as part of agency budget processes would undermine this overarching goal. Further, OMB officials said that because information sharing is a core mission of all departments and agencies, they are to cover costs to implement information sharing initiatives from within their existing budgets. Nevertheless, while an estimate has not been prepared, the Program Manager said that progress has been made in collecting certain ISE-related costs. Specifically, OMB, in cooperation with the Office of the Program Manager, modified OMB Circular A-11 in 2010 to collect more information from agencies about planned ISE-related technology investments.
This effort is intended to identify costs related to agencies’ information technology system investments, but it does not identify other types of incremental costs associated with implementing the ISE, such as those involving training and other administrative programs and activities. The Deputy Program Manager acknowledged the importance of identifying such incremental costs but noted that ISE agencies are best positioned to establish this cost and budget information. Two of five agencies that we contacted noted that governmentwide initiatives, such as the ISE, are often difficult to implement without dedicated funding for mandated programs. For example, State officials noted that the department had challenges redirecting operational funds to achieve ISE program objectives during fiscal years 2008 and 2009. DOJ officials also acknowledged the challenges in implementing new governmentwide efforts without related funding, but noted that the use of “seed funding” in support of key terrorism-related information sharing initiatives—such as the Nationwide Suspicious Activity Reporting Initiative and fusion center programs—has been one of the major successes of the ISE. We recognize that attaining accurate and reliable incremental cost estimates for the ISE is a difficult undertaking, complicated further by the fact that the Program Manager and agencies are still defining what the ISE is, is to include, and is to attain. However, new ISE requirements will need additional investments, regardless of whether they are funded through existing agency budgets, a separate program budget, or another mechanism. Our best practices on cost estimation note that the ability of agencies to generate reliable cost estimates is a critical function for effective program management. 
In addition, our prior work shows that cost information can help agencies allocate resources and investments according to priorities and constraints, track costs and performance, and shift such investments and resources as appropriate. After the ISE end state vision is defined and needed activities and initiatives are identified, developing incremental cost estimates would help agencies plan and budget for these activities and initiatives and allow Congress and other decision makers to prioritize future investments and demonstrate a continued commitment to supporting the ISE. The Intelligence Reform Act requires the Program Manager to, among other things, monitor implementation of the ISE by federal departments and agencies to ensure that adequate progress is being made and regularly report the findings to Congress. In June 2008, we reported that the Office of the Program Manager was monitoring ISE implementation—as demonstrated through its September 2007 annual report to Congress—but that such monitoring did not include an overall assessment of progress in implementing the ISE and how much work remained. Thus, we recommended, among other things, that the Program Manager (1) develop a way to measure and demonstrate results to ensure that the ISE was on a measurable track to success and to show the extent to which the ISE had been implemented and what work remained and (2) more fully define the key milestones needed to achieve ISE results. The Program Manager generally agreed and has taken some steps to address these recommendations but has not yet fully addressed them. These practices are critical to an effective monitoring system and would help to provide an accurate accounting for progress to Congress and other stakeholders. Further, our prior work on high-risk issues shows that agencies must have a way to monitor and demonstrate progress against baseline requirements—in this case, the activities, milestones, and results to be achieved for the ISE.
The Program Manager has taken steps to address our recommendations by instituting a “maturity model” to monitor and track progress. For example, the maturity model tracks each of the 14 initiatives in the ISE framework from their early stages of development until they are considered to be institutionalized into agency operations. The model contains four levels:

 Ad-hoc: Information sharing occurs among functions or groups with few repeatable processes.
 Defined: Information sharing sources and products are identified and processes are followed.
 Managed: Information sharing is well characterized and consistently performed across organizational boundaries.
 Institutionalized: Information sharing is quantitatively managed and business processes are aligned, seeking continuous improvement.

In the July 2010 annual report to Congress, the Program Manager noted that 9 of the 14 initiatives were at the second level and had been “defined,” and the remaining 5 were at the third level and being “managed.” The maturity model and related reporting provide useful information on the status of ISE initiatives and provide a general indicator of the overall progress of the ISE. Nevertheless, these actions do not fully address our recommendations because the annual reports do not specifically address what work remains in completing the 14 initiatives or related milestones for completion, which are important elements in determining overall progress in implementing the ISE and establishing accountability for future efforts. The Program Manager’s ongoing efforts to define the ISE end state vision and implementing road map—to the extent that they include associated time frames and milestones for achieving both individual projects or activities as we recommended in June 2008 as well as the capabilities of a fully implemented ISE as envisioned—would help to provide a baseline for decision makers and investors to measure ISE progress.
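Because the maturity model assigns each initiative to one of four ordered levels, its reporting logic can be sketched as a simple ordered enumeration. The class and variable names below are illustrative only; they are not part of any actual ISE system. The snapshot reflects the July 2010 figures cited above (9 initiatives at “Defined,” 5 at “Managed”):

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The four ordered levels of the Program Manager's maturity model."""
    AD_HOC = 1            # few repeatable processes
    DEFINED = 2           # sources and products identified, processes followed
    MANAGED = 3           # consistently performed across organizational boundaries
    INSTITUTIONALIZED = 4 # quantitatively managed, continuous improvement

# Hypothetical snapshot matching the July 2010 annual report:
# 9 of the 14 framework initiatives "defined," the remaining 5 "managed."
initiative_levels = [MaturityLevel.DEFINED] * 9 + [MaturityLevel.MANAGED] * 5
assert len(initiative_levels) == 14

# Count initiatives at or above a given level, as an annual report might summarize.
at_or_above_managed = sum(
    1 for lvl in initiative_levels if lvl >= MaturityLevel.MANAGED
)
print(at_or_above_managed)  # → 5
```

Using an `IntEnum` keeps the levels comparable (`DEFINED < MANAGED`), which is the property a maturity model depends on when tracking an initiative's progression over time.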
This baseline could be used to determine what work has been achieved and remains and whether additional efforts to accelerate progress are needed, among other things. While the framework did not establish time frames or milestones, the Office of the Program Manager uses an annual performance questionnaire to collect information on the agencies’ progress in implementing 10 of the 14 initiatives to inform the maturity model. According to officials from the Office of the Program Manager, the survey does not include data on the other 4 initiatives—the Nationwide Suspicious Activity Reporting Initiative, fusion centers, efforts to standardize controlled unclassified information, and the Interagency Threat Assessment and Coordination Group. Instead, the officials said that each of the agencies with responsibility for leading these efforts monitors its own performance to ensure progress and provides a summary of progress highlights to the Office of the Program Manager, which is incorporated into the annual report. For example, the 2010 annual report highlighted the successful integration of a Federal Bureau of Investigation system into the Nationwide Suspicious Activity Reporting Initiative. These summaries provide information that shows what agencies are doing and demonstrate recent accomplishments, but they do not provide a gauge to measure progress achieved versus what work remains or milestones for completing remaining work regarding fully developing and implementing the ISE. In January 2011, the ISA IPC and the Office of the Program Manager initiated an effort to make ISE priority programs and related goals more transparent and to better monitor progress. Specifically, according to the Deputy Program Manager, agencies that are responsible for implementing ISE priority programs are leading efforts to establish 3-, 6-, and 12-month goals for these programs. 
He noted that once the goal-setting process is established, information on progress made in reaching these goals may be included in future ISE annual reports. This process should help to provide accountability over ISE priority programs on a yearly basis. The 2008, 2009, and 2010 annual reports to Congress include some performance measures, such as the number of departments and agencies that have conducted ISE-related awareness training or have developed and implemented ISE privacy policies. Including these measures in annual reports is an important step in providing accountability for results, but it does not fully address our recommendation because the measures generally focus on counting activities (i.e., output measures) accomplished rather than results achieved (i.e., outcome measures), such as how and to what extent sharing has been improved and ultimately, to the extent possible, what difference these improvements are making in helping to prevent terrorist attacks. The Deputy Program Manager stated that the Office of the Program Manager recognizes the need to develop performance measures that show how and to what extent sharing has been improved and that the goal-setting process should assist in transitioning from output to outcome-oriented performance measures. We recognize and have reported that it is difficult to develop performance measures that show how certain information sharing efforts have affected homeland security. Nevertheless, we have recommended that agencies take steps toward establishing such measures to hold them accountable for the investments they make.
We also recognize that agencies may need to evolve from relatively easier output measures—that, for example, count the number of agencies that have conducted ISE-related awareness training—to more meaningful measures that gauge agencies' satisfaction with the timeliness, usefulness, and accuracy of information shared until the agencies can establish outcome measures that determine what difference the information made to federal, state, local, and other homeland security efforts. Thus, we continue to believe that our June 2008 recommendation to the Program Manager and key agencies to develop performance measures that show the extent to which the ISE has been implemented and sharing improved has merit and should be fully implemented. Our prior work on high-risk issues shows that a strong commitment from top leadership to addressing problems and barriers to sharing terrorism-related information is important to reducing related risks. In July 2009, the White House established the ISA IPC within the Executive Office of the President to subsume the role of its predecessor interagency body—the Information Sharing Council. The Assistant to the President for Homeland Security and Counterterrorism designated the White House Senior Director for Information Sharing Policy to chair the new committee. These changes were intended to bring high-level policy decision making and oversight to the development of the ISE. The Intelligence Reform Act requires the Program Manager to plan for, manage, and oversee implementation of the ISE, including assisting in the development of policies to guide implementation and ensure progress. In a July 2009 testimony, the Program Manager at that time cited concerns about the Program Manager's authority and provided recommendations intended to help strengthen the ISE effort. For example, among other things, he recommended having a presidential appointee serve as Program Manager and having the Program Manager co-chair the ISA IPC.
Following this Program Manager's resignation, an acting Program Manager assumed responsibility for implementing the ISE until June 2010, at which time the President appointed the current Program Manager. Also, in June 2010, the Assistant to the President for Homeland Security and Counterterrorism designated the Program Manager as a co-chair of the ISA IPC—along with the White House Senior Director for Information Sharing Policy—which was consistent with the prior Program Manager's recommendations. According to the Office of the Program Manager, having the Program Manager for the ISE also co-chair the ISA IPC was intended to acknowledge that policies, business practices, architectures, standards, and systems developed for the ISE can be applicable to other types of national security information beyond terrorism and vice versa. In this role, the Program Manager is to ensure the close alignment of the ISE and broader national security information sharing activities. The new Program Manager stated that he would have one of four levels of involvement in implementing the specific activities listed in the 2010 annual progress report to Congress:

• Monitoring: For certain information sharing activities that agencies are generally implementing on their own initiative, the Office of the Program Manager is to stay informed of ongoing developments to determine whether the activity might be a potential best practice that is applicable to other ISE mission partners. The Program Manager also monitors activities to stay abreast of issues that might eventually surface through the ISE process. For example, the Program Manager said he monitors the intelligence community's efforts to better share classified information among intelligence agencies.

• Advising: For some agency initiatives, the Program Manager said that the Office of the Program Manager may be called on to provide specialized information sharing expertise, even though the office is not responsible for actual implementation. For example, the Program Manager said his office has an advisory role in supporting the Nationwide Suspicious Activity Reporting Initiative.

• Supporting: For selected activities with significant implications for the ISE, the Program Manager said that the Office of the Program Manager is to play a more active support role, that this support could take many forms, and that it may include co-investment of seed capital in the early stages of specific high-priority efforts. For example, the Program Manager said the office supports agencies' efforts to designate and share controlled unclassified information.

• Leading: The Program Manager also said there are several activities for the ISE as a whole where the Office of the Program Manager is to take the lead role, providing the financial and personnel resources necessary to carry them out. For example, the Program Manager said the office has the lead role in providing communications and outreach related to the ISE.

The Program Manager also noted that his role could evolve as activities mature, as it did for the Nationwide Suspicious Activity Reporting Initiative. The administration's steps to strengthen the ISE governance structure address concerns the prior Program Manager identified and our criteria for committed leadership. However, it is too early to tell how the new structure will affect the continued development and implementation of the ISE and whether the Program Manager's new role will provide him sufficient leverage and authority to ensure that agencies consistently implement information sharing improvements governmentwide.
The Program Manager’s 2010 annual report to Congress states that the office’s architecture program for the ISE describes the rules and practices needed for planning and operating ISE systems consistent with EA best practices. According to relevant guidance, an EA, or modernization blueprint, should include descriptions (i.e., “architecture views”) of an enterprise’s current and future environment for business processes, data and information, applications and services, technology, and security in meaningful models, diagrams, and narrative. In addition, our Enterprise Architecture Management Maturity Framework (EAMMF) recognizes that various approaches for structuring an EA exist and can be applied to the extent that they are relevant and appropriate for a given enterprise. These approaches generally provide for breaking down an enterprise into its logical parts and allowing various components of an enterprise (e.g., ISE mission partners) to develop their respective parts of the EA in relation to enterprisewide needs and the inherent relationships and dependencies that exist among the parts. Accordingly, our EAMMF provides flexibility for how such an EA should be developed and does not prescribe a specific approach by which organizations should develop EA content. In addition to providing descriptions of an enterprise’s current and future environment, relevant guidance states that an EA should include an enterprise sequencing plan for transitioning from the current environment to the future environment. Specifically, the enterprise sequencing plan should describe an incremental strategy that includes scheduling multiple, concurrent, interdependent activities and incremental implementation to evolve the enterprise. We have previously reported that successfully managing the development and implementation of an EA depends in large part on the extent to which effective management controls (e.g., policies, structures, processes, and practices) are employed. 
Our EAMMF provides a benchmark against which to measure the extent to which a given enterprise is effectively managing its architecture program. It defines various stages of maturity for an EA and the management controls expected to be in place for each stage. Stages 1 and 2 of this framework can be viewed as providing for the institutional leadership and foundational management capabilities for the later stages to build upon and thereby achieve program success. For example, in stage 1 an enterprise commits to developing an EA and defines the purpose of its EA, and in stage 2 it defines the methodology and plans by which EA products are to be developed and maintained. An EA program that has not satisfied key stage 1 and 2 core elements can be considered ad hoc, unstructured, and unlikely to succeed. It is important to note that the EAMMF should not be viewed as either a rigidly applied checklist or as the only relevant benchmark for managing and assessing an EA program. Instead, it is intended to be applied flexibly with discretion in light of each enterprise’s unique facts and circumstances. The Program Manager has developed architecture guidance to assist in the implementation of the ISE. For example, in August 2007, Version 1.0 of the ISE EAF was released and in September 2008 it was revised. The framework is to provide strategic guidance to enable long-term business and technology standardization and information systems planning, investing, and integration in the ISE by documenting and organizing the ISE mission business goals and processes, services, data, and technologies and other operational capabilities necessary to facilitate information sharing. In addition, in May 2008 the Office of the Program Manager issued its Profile and Architecture Implementation Strategy (PAIS) to augment its ISE EAF and in June 2009 it was revised. 
Among other things, the PAIS describes a series of steps that the ISE agencies are to follow when developing their information sharing segment architectures to support the implementation of ISE capability. These steps are generally consistent with federal guidance, such as the federal Chief Information Officers Council's Federal Segment Architecture Methodology. The Program Manager and ISE agencies have also begun to develop products that describe several components of an ISE EA. For example, the Program Manager has worked with ISE agencies to establish cross-agency ISE segment architectures, such as the ISE Suspicious Activity Reporting evaluation environment segment architecture, which is intended to assess selected architectural concepts supporting the business processes, procedures, and policies associated with a nationwide Suspicious Activity Reporting capability, among other things. In addition, as described subsequently in this report, three ISE agencies have developed information sharing segment architectures, which are intended to identify common ISE services, standards, and other ISE tools to allow for opportunities to reuse and leverage services among ISE departments and agencies. Although the ISE architectural guidance and products provide some information to guide information sharing activities at the five key ISE implementing agencies, they do not fully describe the ISE's current and future environment for business processes, data and information, applications and services, technology, and security consistent with relevant guidance. For example, the EAF identifies 24 current ISE business processes and describes activities and information flows for 3 current business processes. However, it does not describe business activities and information flows for the remaining 21 current business processes, such as the business process that supports responding to a terrorism-related threat.
These information flows are important for identifying specific terrorism data needed to be shared among the ISE business processes and establishing mutually understood data definitions and structures to facilitate data integration across the ISE. Without such common definitions and structures, ISE agencies risk needing to invest significant time and resources to interpret and restructure data received from multiple systems supporting different ISE business processes. Moreover, the ISE EAF describes some aspects of the future technology environment, such as a set of technical standards that has been identified for use in planning, implementing, and deploying ISE information technology infrastructure, but it does not describe the ISE’s current technology environment (e.g., the existing databases and communications networks that support the Alerts, Warnings, and Notifications business process). In addition, an ISE enterprise sequencing plan that describes the interdependent activities to be undertaken by the Program Manager and ISE agencies to incrementally achieve the target ISE does not exist. As a result, ISE agencies and the Program Manager risk not synchronizing or integrating their interdependent ISE activities to inform timely initiation of ISE projects or development of ISE policies and procedures. Appendix III provides a detailed analysis and descriptions of the ISE architectural content reflected in the EAF and associated architectural documents. If managed properly, an EA program can help simplify, streamline, and clarify the interdependencies and relationships among an enterprise’s diverse mission and mission-support operations and information needs, including its associated information technology environment. However, the Office of the Program Manager’s approach to managing ISE architecture-related activities does not fully satisfy the core elements described in our EAMMF for establishing institutional commitment and creating the EA management foundation. 
Of the 13 core elements spanning these two stages that we reviewed, 1 was fully satisfied, 9 were partially satisfied, and 3 were not satisfied. (See app. IV for a detailed description of each core element and our analysis of the extent to which each has been satisfied.) For example, in consultation with the ISA IPC, proactive steps have been taken to address EA-related cultural barriers, such as parochialism and cultural resistance among ISE agencies. However, an EA program management plan that, among other things, reflects ISE EA program work activities, events, and time frames and defines accountability mechanisms does not exist. As a result, ISE agencies risk not budgeting and allocating adequate resources for ISE work activities, and risk delaying the start or completion of their ISE work activities because of a lack of information about the activities and events associated with the ISE EA program. Regardless of the architectural approach used for the ISE, establishing the EA management foundation is important for guiding the development of ISE architecture products to effectively support ISE implementation efforts. Finally, agency-specific information sharing segment architectures, which according to ISE guidance are to be developed to identify common ISE services, standards, and other ISE tools to allow for opportunities to reuse and leverage services among ISE departments and agencies, have not been fully defined. According to the Program Manager’s July 2010 annual report to Congress, ODNI and State have not developed such segment architectures. In its technical comments on a draft of this report, ODNI acknowledged that it does not have an information sharing segment architecture, and is working to make data sharable through Intelligence Community policies. 
For example, Intelligence Community Directive 501 states that all information collected and analysis produced by a member of the intelligence community shall be made available for automated discovery by authorized Intelligence Community personnel, consistent with applicable law and in a manner that protects fully the privacy rights and civil liberties of all U.S. persons. Also according to the Program Manager’s July 2010 annual report to Congress, DOJ, DHS, and DOD have taken steps to develop their respective segment architectures. However, the DOJ, DHS, and DOD information sharing segment architectures are all missing important content. For example, none of these three departments has fully defined the needed business and information requirements. (The extent to which these three departments have developed their information sharing segment architectures is described in app. V.) As a result, there may be an insufficient basis for identifying opportunities to avoid duplication of effort and launch initiatives to establish and implement common, reusable, and interoperable solutions and services across the ISE to achieve cost savings. The ISE EAF is intended to establish a strategic road map that enables ISE departments and agencies to further develop their respective EAs in order to implement information sharing capabilities. However, as we have previously reported, high-level EA frameworks and guidance, such as OMB’s federal EA, do not necessarily provide sufficient content for guiding the implementation of systems. The ISE EAF and associated architectural documentation also do not (1) provide sufficient architectural content (e.g., descriptions of ISE business processes and interagency information exchange requirements) necessary for ISE agencies to develop their information sharing architectures or (2) include an ISE enterprise sequencing plan that would serve as an effective road map for ISE departments and agencies. 
In addition, officials from the key ISE implementing agencies indicated that the lack of detailed and implementable ISE guidance was one limiting factor in developing agency information sharing segment architectures. Improved ISE architecture content and an ISE enterprise sequencing plan could enable better planning for the distributed ISE and allow for implementation of ISE capabilities in manageable pieces. The Program Manager stated that his office and OMB are using a standardized EA framework and method for the ISE to identify critical business processes and interfaces, establish standards for data formats, identity management and credentialing, and exchange protocols for information sharing between enterprises in a manner that permits each department and agency to satisfy ISE requirements while also optimizing its own EAs for its specific missions. The Program Manager added that this approach is based on (1) OMB decisions to establish a standardized EA framework that departments and agencies that own their respective information systems and architectures could use to develop, modify, and integrate those systems into the ISE; (2) the Office of the Program Manager’s interpretation of the Intelligence Reform Act; and (3) the Office of the Program Manager’s understanding that a full EA must be organization based and tied to budget authority. Nevertheless, the Intelligence Reform Act calls for the Program Manager to plan for and oversee the implementation of the ISE and to assist in the development of policies, as appropriate, to foster the development and proper operation of the ISE. It further calls for the Program Manager to issue governmentwide procedures, guidelines, instructions, and functional standards, as appropriate, for the management, development, and operation of the ISE, consistent with the direction and policies issued by the President, the Director of National Intelligence, and the Director of OMB. 
In addition, the Chief Information Officers Council has previously reported that a well-defined EA can promote better planning and facilitate management of an extensive, complex environment. Moreover, as described previously in this report, our EAMMF recognizes that EAs can be developed in a distributed manner and accordingly does not prescribe a specific approach by which organizations should develop needed EA content. By not ensuring that an improved EA management foundation for the ISE exists, the federal government, as a whole, is not well positioned to realize the significant benefits that well-defined ISE EA guidance and products can provide. Such benefits include better planning for ISE implementation; improved decision making regarding capability enhancement and resource allocation across the ISE enterprise; increased collaboration on interdependent ISE work activities; and effective sharing of critical terrorism information among appropriate ISE agencies and state, local, and tribal governments and private sector entities. The ISE is to fulfill a critical purpose in a time when acts of terrorism on U.S. soil have recently been attempted or planned. The Program Manager and key agencies have taken actions to define and implement the ISE, such as developing a framework to advance an initial set of goals, activities, and metrics. However, they also recognize that these actions do not yet go far enough to define and implement a fully functioning ISE and that there is more work to do. In addition, our work has identified actions that are needed after the end state vision for the ISE is defined, such as ensuring that all relevant information sharing initiatives across the five communities are fully leveraged by the ISE, consistent with the Intelligence Reform Act. 
This could help to ensure that all critical information with a possible nexus to terrorism is being shared as needed, and that relevant agency initiatives are considered to determine how they could be leveraged by the ISE for the benefit of all stakeholders, thereby helping to improve information sharing governmentwide. Also, to the extent possible, defining incremental costs necessary to implement the ISE, consistent with the Intelligence Reform Act, could help decision makers plan for and prioritize future investments. Further, while the Program Manager has taken steps to measure and demonstrate results of ISE efforts, additional actions are needed to address our prior recommendations to ensure that the ISE is on a measurable track to success and to show the extent to which the ISE has been implemented, what work remains, and milestones for completing remaining work. The Program Manager and ISE agencies have developed architecture guidance and products—such as the EAF—to assist in implementing the ISE, but crucial work remains. The guidance and products provide some foundational information about the ISE, but they do not fully define the suite of ISE architecture products that describe the ISE current and future operational and technical environment to support ISE implementation. Further, ISE EA management practices do not fully address the core elements described in our EAMMF, such as establishing an EA program management plan that, among other things, reflects ISE EA program work activities, events, and time frames and defines accountability mechanisms. Moreover, it is unclear when, how, and by whom these core elements will be satisfied and missing architecture content—such as business activities and information flows, the ISE technology environment, and an enterprise sequencing plan—will be developed. 
Establishing an improved EA management foundation, including well-defined EA guidance for the ISE, would better position the government to realize significant benefits, such as better planning for implementation, improved decision making, and ultimately more effective sharing of critical terrorism-related information among all ISE agencies. To help ensure effective implementation of the ISE, we recommend that the Program Manager, with full participation from relevant stakeholders, take the following three actions.

To support future progress in developing and implementing the ISE, we recommend that after the end state is defined, the Program Manager

• in consultation with the ISA IPC and key ISE agencies, determine to what extent relevant agency initiatives across all five communities could be better leveraged by the ISE so that their benefits can be realized governmentwide and

• in coordination with the ISA IPC and OMB, task the key ISE agencies to define, to the extent possible, the incremental costs needed to help ensure successful implementation of the ISE and prioritize investments.

To better define ISE EA guidance and effectively manage EA activities to support ISE implementation efforts, we recommend that the Program Manager, in consultation with the ISA IPC and key ISE agencies, establish an ISE EA program management plan that (1) reflects ISE EA program work activities, events, and time frames for improving ISE EA management practices and addressing missing architecture content and (2) defines accountability mechanisms to help ensure that this program management plan is implemented. We provided a draft of this report for comment to the Program Manager for the ISE, OMB, DHS, DOJ, State, DOD, and ODNI.
Based on subsequent discussions with officials from the Office of the Program Manager, we revised portions of the draft that discuss the ISE EA and the related recommendation to clarify that our focus is primarily on architectural management practices and that various approaches can be used for structuring an EA. We received written responses from the Program Manager and DHS, which are summarized below and reprinted in appendix VI and appendix VII, respectively. Also, on June 17, 2011, the Federal Chief Enterprise Architect and other OMB officials provided oral comments. The Program Manager and Federal Chief Enterprise Architect generally agreed with the three recommendations in this report, while DHS did not address them. The Program Manager, DHS, DOJ, and ODNI provided technical comments, which we have incorporated in this report where appropriate. State and DOD informed us that they had no comments. The Program Manager’s written comments did not specifically mention whether he agreed with the three recommendations in this report, but the Office of the Program Manager subsequently confirmed via e-mail on July 7, 2011, that the Program Manager generally agreed with all of them, with elaboration as follows. The Program Manager generally agreed with the first recommendation related to the need to determine to what extent relevant agency initiatives across all five communities are being leveraged by the ISE. He noted that the Program Manager and the ISA IPC have already leveraged a great number of initiatives that support the realization of the ISE and that they will continue to identify and leverage agency initiatives to improve information sharing. The Program Manager provided numerous examples of activities that he said have been leveraged by the ISE and referred us to the annual reports to Congress for more examples. We recognize that the examples provided illustrate agency initiatives to share information and several of them are discussed in this report. 
In general, however, the Program Manager has not demonstrated how these initiatives are being leveraged by the ISE for the benefit of all stakeholders and to help improve information sharing governmentwide. The Program Manager expects the updated National Strategy for Information Sharing—complemented by follow-on implementation policy, programmatic and budgetary guidance, and performance metrics—to address this recommendation. The updated strategy and follow-on guidance and metrics could address the intent of the recommendation if they appropriately discuss how initiatives are being leveraged by the ISE. The Program Manager generally agreed with the second recommendation related to the need to define incremental costs for the ISE. However, he noted that OMB has the role of providing programmatic guidance and collecting budgetary requirements, and ensuring that they are integrated into the budget for each federal department and agency. The Program Manager also said that it is critical to note that federal departments and agencies own, plan for, and manage their programs, systems, and architectures, while the Office of the Program Manager provides the integrating guidance through the ISA IPC. Further, he noted that the individual departments and agencies are responsible for identifying costs over and above their program baselines to extend the benefits of information sharing throughout the ISE. We recognize that OMB and agencies play key roles in defining incremental costs for the ISE. Nevertheless, the Program Manager is responsible for leading and coordinating these efforts, as envisioned by the Intelligence Reform Act. Thus, we believe that the Program Manager is the appropriate party to task key ISE agencies to define, to the extent possible, the incremental costs needed to help ensure successful implementation of the ISE.
The Program Manager expects the updated National Strategy for Information Sharing and other activities—including programmatic and budgetary guidance—to address this recommendation. The updated strategy and follow-on guidance could address the intent of the recommendation if they support defining incremental costs needed to help ensure successful implementation of the ISE. The Program Manager generally agreed with the third recommendation related to the need to more fully define ISE EA plans. He stated that the ISE needs an integrated plan with an established vision, goals, policy framework, performance management framework, and guidelines. From a planning perspective, the Program Manager noted that the National Strategy on Information Sharing—to be updated in the near future—followed by an integrated suite of implementation guidance and practices (e.g., the ISE EAF and the PAIS) provide the tools to effectively manage the ISE. He added that through these and other documents, the Office of the Program Manager will establish the vision, a program management plan, and an executable road map for the ISE. Further, he noted that the office will work with ISE departments and agencies to identify and prioritize their projects in support of the ISE. These actions could address the intent of the recommendation if the strategy and suite of implementation guidance and practices establish an ISE EA program management plan that (1) reflects ISE EA program work activities, events, and time frames for improving ISE EA management practices and addressing missing architecture content and (2) defines accountability mechanisms to help ensure that this program management plan is implemented. The Program Manager also provided comments indicating that much of this report treats the ISE as a centrally designed and defined information system enterprise and stated that our analysis looks for the tools and processes applicable to such an enterprise.
This report and the EAMMF that comprises the basis for much of our analysis recognize that various approaches for structuring an EA exist and can be applied to the extent that they are relevant and appropriate for a given enterprise. As stated in this report and our EAMMF, these approaches generally provide for breaking down an enterprise into its logical parts and allowing various components of an enterprise (e.g., ISE mission partners) to develop their respective parts of the EA in relation to enterprisewide needs and the inherent relationships and dependencies that exist among the parts. For example, this report acknowledges agency-developed information sharing segment architectures—which can represent a portion of an ISE EA—and states that improved ISE architecture content and an ISE enterprise sequencing plan could enable better planning for the distributed ISE. In addition, the Program Manager stated that he consulted with the key ISE agencies and they agreed that they do not need or want the Program Manager to establish additional ISE EA guidance or an ISE EA. As we previously noted, various approaches can be used for structuring an EA. However, our work showed that agency information sharing architectures were either not developed or incomplete, and that pertinent officials from ISE agencies cited the lack of detailed and implementable ISE guidance as one factor limiting their efforts to develop agency information sharing architectures. Thus, we believe that an ISE EA program management plan is needed that (1) reflects ISE EA program work activities, events, and time frames for improving ISE EA management practices and addressing missing architecture content and (2) defines accountability mechanisms to help ensure that this program management plan is implemented. 
In addition to providing comments on each of the three recommendations, the Program Manager noted that the draft report did not fully address the roles and responsibilities of OMB and the departments and agencies that support the ISE, and that recognizing the key roles played by these entities is pivotal to assessing progress in the ISE. He explained that OMB plays a key role in the planning, budgeting, and oversight of the federal agencies and their contributions to the ISE. He also noted that it is primarily through the partnership between OMB and the Office of the Program Manager that program direction, funding, and performance measurement can be effectively achieved. He added that departments and agencies (1) are responsible for developing, deploying, modifying, and maintaining their respective information system investments and associated EAs and (2) play an active role in determining the policies, priorities, and direction of the ISE—originally through the Information Sharing Council—and are now an integral part of the ISA IPC. Further, the Program Manager noted that the information they share and the tools used to share it are by their nature a part of the ISE, regardless of whether the process is identified by the Program Manager. We recognize that OMB and agencies play important roles in defining and building the ISE. Nevertheless, the Program Manager is responsible for leading and coordinating these efforts, in accordance with the Intelligence Reform Act. Thus, we directed the recommendations to him, in consultation with the ISA IPC, key ISE agencies, and OMB as appropriate. In oral comments provided on June 17, 2011, the Federal Chief Enterprise Architect and other OMB officials generally agreed with all three recommendations in this report.
Regarding the first recommendation to ensure that agency initiatives are leveraged, the Federal Chief Enterprise Architect noted that all five ISE primary areas of focus (homeland security, law enforcement, foreign affairs, defense, and intelligence) are important and that the Program Manager should continue to ensure effective coordination of these communities. He added that such coordination should occur in consultation with OMB and appropriate agencies at the federal, state, local, and tribal levels. Regarding the second recommendation to identify incremental costs, the Federal Chief Enterprise Architect noted that the Program Manager should work in collaboration with OMB and federal agencies to identify investments that are related to the ISE, and ensure that waste and duplication are not occurring and that the execution of the program is consistent with legal mandates and administration policies and priorities. Regarding the third recommendation to more fully define ISE EA plans, the Federal Chief Enterprise Architect agreed that our EAMMF was appropriate for evaluating the ISE EA and that the Office of the Program Manager should issue an EA program management plan that contains milestones, time frames, and accountability mechanisms. He noted that the Program Manager and ISE agencies each have a role in developing ISE architecture products. In its written comments, DHS noted that the department remains committed to continuing its work with the Program Manager and relevant stakeholders to further define and implement a fully functioning ISE. DHS added that the department is engaged with the Program Manager on a number of key initiatives at the ISA IPC to ensure the realization of information sharing benefits governmentwide. 
We are sending copies of this report to the Program Manager for the Information Sharing Environment; the Director of National Intelligence; the Director of the Office of Management and Budget; the Secretaries of the Departments of Defense, Homeland Security, Justice, and State; and appropriate congressional committees. This report also is available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact Eileen R. Larence at (202) 512-6510 or larencee@gao.gov or David A. Powner at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VIII. Our reporting objectives were to review to what extent the Program Manager for the Information Sharing Environment (ISE) and key stakeholder agencies have (1) made progress in developing and implementing the ISE, and what work remains, and (2) defined an enterprise architecture (EA) to support ISE implementation efforts. The stakeholder agencies we reviewed are the five agencies that the Program Manager identified as critical to developing and implementing the ISE—the Departments of Homeland Security (DHS), Justice (DOJ), State (State), and Defense (DOD) as well as the Office of the Director of National Intelligence (ODNI). These agencies represent the five information sharing communities that collect the homeland security, law enforcement, foreign affairs, defense, and intelligence information deemed critical for sharing in order to provide for homeland security.
To determine the extent to which the Program Manager and stakeholder agencies have made progress in developing and implementing the ISE, we reviewed key statutes and policies, including the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act) and the Implementing Recommendations of the 9/11 Commission Act of 2007. We also reviewed our prior reports and best practices identifying effective program management, federal coordination, and cost estimation. Through our review of these laws, guidance, and reports, we identified standards and best practices for program and project management and used them to inform our assessment of efforts to develop and implement the ISE and related efforts. We used semistructured interviews to gather information from the key agencies and facilitate analysis of their perspectives on the development of and remaining challenges impeding implementation of the ISE. We also used interviews to obtain information from these agencies on the status of key activities the Program Manager identified as accomplishments in the 2009 and 2010 ISE annual reports to Congress, among other things. In addition, we reviewed and analyzed agency documentation on guidance and plans and conducted interviews with agency officials to assess actions taken by the Program Manager to address recommendations in our 2008 report related to defining the purpose and scope of the ISE and the results to be achieved. 
To determine to what extent the Program Manager for the ISE and key stakeholder agencies have defined an EA to support ISE implementation efforts, we examined the extent to which (1) key current, or “as-is,” and future, or “to-be,” EA content and a plan for transitioning from the current to the future environment have been established; (2) the Office of the Program Manager has established a structure for effectively managing ISE architecture development and implementation; and (3) key federal agencies have defined their information sharing segment architectures (ISSA) to support ISE implementation. To determine the extent to which key current and future EA content, and a plan for transitioning from the current to the future environment has been established, we compared ISE architecture guidance, such as the ISE Enterprise Architecture Framework (EAF) and associated documents, to relevant EA content guidance. We also interviewed officials from the Office of the Program Manager, including the Program Manager and the Executive for Programs and Technology, as well as officials from the key federal agencies, to determine, among other things, their perspectives on ISE architecture content. In addition, we met with Office of the Program Manager officials to discuss variances between ISE EA content reflected in the ISE EAF and associated documents and EA content expectations established in relevant federal guidance. To determine the extent to which the Office of the Program Manager has established a structure for effectively managing ISE architecture development and implementation, we used our Enterprise Architecture Management Maturity Framework (EAMMF), and determined the extent to which the Office of the Program Manager has satisfied key elements associated with providing institutional leadership and foundational management capabilities. 
To make this determination, we reviewed relevant ISE documentation, including Executive Order 13,388 (October 25, 2005); the December 16, 2005, presidential memorandum regarding Guidelines and Requirements in Support of the Information Sharing Environment; the Intelligence Reform Act; Program Manager guidance; and Chief Architects Roundtable and Common Information Sharing Standards working groups’ meeting minutes. We also interviewed officials from the Office of the Program Manager and compared documentation collected and information provided during interviews to determine the extent to which the office and the Information Sharing and Access Interagency Policy Committee addressed EAMMF elements associated with establishing institutional commitment and direction and creating the management foundation for EA development and use. We did not evaluate the extent to which the ISE architecture program had adequate staff and budget resources because of the lack of a stand-alone budget for the ISE program and the classified nature of the ODNI budget. To determine the extent to which key federal agencies have defined their ISSAs to support ISE implementation, we determined the extent to which agency-developed ISSAs have addressed ISE architecture guidance established by the Office of the Program Manager. Specifically, we determined key ISSA development steps defined in the Program Manager’s Profile and Architecture Implementation Strategy that are consistent with best practices documented in the Federal Segment Architecture Methodology. We then reviewed the agency-developed ISSAs and relevant supporting documentation, such as information sharing strategies and information sharing implementation plans, against these key ISSA development steps. 
We also interviewed officials from DOD (the Office of the Assistant Secretary for Defense, Networks and Information Integration/DOD Chief Information Officer (CIO)), DHS (Office of the CIO), DOJ (Justice Management Division/Office of the CIO), and State (Office of Management Policy, Rightsizing, and Innovation) to understand the reasons why the agency-developed ISSAs have yet to fully address the key ISSA development steps. We conducted this performance audit from October 2009 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To better define and manage ISE implementation, the Program Manager adopted the ISE framework to guide development of the ISE going forward. Specifically, the framework identified four goals and 14 specific subgoals or activities agencies were to pursue. The goals and subgoals follow.

Subgoal 1.1: Information sharing is exhibited across departments and agencies as a routine part of doing business and recognized as an imperative to success.

Subgoal 1.2: All personnel charged with sharing terrorism-related information are trained to carry out information sharing responsibilities.

Subgoal 1.3: Employees are routinely recognized and rewarded for effective information sharing, as well as expertise and competency development.

Subgoal 2.1: Federal departments and agencies practice security reciprocity among federal, state, local, and private sector entities, including people, facilities, and systems.

Subgoal 2.2: Consistent marking and handling of controlled unclassified information is practiced across the U.S. government; practices are also adopted by state, local, tribal, and private sector entities.

Subgoal 2.3: ISE participants build trusted distributed infrastructure for sharing information with all other participants, and are able to leverage repeatable processes from each other's architecture programs to maximize availability of common ISE shared services.

Subgoal 2.4: ISE departments and agencies; state, local, and tribal governments; and the private sector protect privacy in a consistent manner.

Subgoal 3.1: All federal, state, local, tribal, and law enforcement entities operating domestically participate in a standardized, integrated approach to gathering, documenting, processing, analyzing, and sharing terrorism-related suspicious activity information.

Subgoal 3.2: A national, integrated network of state and major urban area fusion centers that enables federal, state, local, tribal, and private sector organizations to gather, document, process, analyze, and share relevant information in order to protect our communities.

Subgoal 3.3: Federal agencies produce, share, and disseminate both time-sensitive and strategic information and intelligence products that meet state, local, tribal, and private sector needs.

Subgoal 3.4: Federal departments and agencies have implemented appropriate policies and processes to coordinate and facilitate the sharing of information with foreign governments and allies.

Subgoal 4.1: Integrated performance and investment processes monitor progress toward performance goals and successfully use investments to support activities that maintain or enhance information sharing.

Subgoal 4.2: ISE participants sustain their investments in information systems that support a trusted, distributed infrastructure for sharing information.

Subgoal 4.3: ISE participants use common practices and policies for producing, handling, and using information.
According to relevant guidance, an enterprise architecture (EA) should describe architectural views of the business processes, data, applications and services, technology, and security for the enterprise’s current and future environments. An EA should also include a sequencing plan for transitioning from the current environment to the future environment. Table 1 describes the extent to which the Information Sharing Environment (ISE) architecture documents address such relevant EA guidance. Table 2 describes the Information Sharing Environment’s (ISE) satisfaction of selected core elements in stages 1 and 2 of our Enterprise Architecture Management Maturity Framework (EAMMF). Tables 3, 4, and 5 provide a summary of Department of Defense (DOD), Department of Homeland Security (DHS), and Department of Justice (DOJ) efforts to address the key segment architecture development steps. In addition to the contacts named above, Eric Erdman, Assistant Director; Anh Le, Assistant Director; David Alexander; Justin Booth; R.E. Canjar; Katherine Davis; R. Denton Herring; Michael Holland; Ashfaq Huda; Thomas Lombardi; Linda Miller; Victoria Miller; Krzysztof Pasternak; Karl Seifert; Adam Vodraska; and Michelle Woods made key contributions to this report.
Recent planned and attempted acts of terrorism on U.S. soil underscore the importance of the government's continued need to ensure that terrorism-related information is shared in an effective and timely manner. The Intelligence Reform and Terrorism Prevention Act of 2004, as amended, mandated the creation of the Information Sharing Environment (ISE), which is described as an approach for sharing terrorism-related information that may include any method determined necessary and appropriate. GAO was asked to assess to what extent the Program Manager for the ISE and agencies have (1) made progress in developing and implementing the ISE and (2) defined an enterprise architecture (EA) to support ISE implementation efforts. In general, an EA provides a modernization blueprint to guide an entity's transition to its future operational and technological environment. To do this work, GAO (1) reviewed key statutes, policies, and guidance; ISE annual reports; and EA and other best practices and (2) interviewed relevant agency officials. Since GAO last reported on the ISE in June 2008, the Program Manager for the ISE and agencies have made progress in implementing a discrete set of goals and activities and are working to establish an "end state vision" that could help better define what the ISE is intended to achieve and include. However, these actions have not yet resulted in a fully functioning ISE. Consistent with the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act), the ISE is to provide the means for sharing terrorism-related information across five communities--homeland security, law enforcement, defense, foreign affairs, and intelligence--in a manner that, among other things, leverages ongoing efforts. 
To date, the ISE has primarily focused on the homeland security and law enforcement communities and related sharing between the federal government and state and local partners, to align with priorities the White House established for the ISE. It will be important that all relevant agency initiatives--such as those involving the foreign affairs and intelligence communities--are leveraged by the ISE to enhance information sharing governmentwide. The Program Manager and agencies also have not yet identified the incremental costs necessary to implement the ISE--which would allow decision makers to plan for and prioritize future investments--or addressed GAO's 2008 recommendation to develop procedures for determining what work remains. Completing these activities would help to provide a road map for the ISE moving forward. The administration has taken steps to strengthen the ISE governance structure, but it is too early to gauge the structure's effectiveness. The Program Manager and ISE agencies have developed architecture guidance and products to support ISE implementation, such as the "ISE Enterprise Architecture Framework," which is intended to enable long-term business and technology standardization and information systems planning, investing, and integration. However, the architecture guidance and products do not fully describe the current and future information sharing environment or include a plan for transitioning to the future ISE. For example, the EA framework describes information flows for only 3 of the 24 current business processes. In addition, the Program Manager's approach to managing its ISE EA program does not fully satisfy the core elements described in EA management best practices. For example, an EA program management plan for the ISE does not exist. The Program Manager stated that his office's approach to developing ISE architecture guidance is based on, among other things, the office's interpretation of the Intelligence Reform Act. 
Nevertheless, the act calls for the Program Manager to, among other things, plan for and oversee the implementation of the ISE, and officials from the key agencies said that the lack of detailed and implementable ISE guidance was one limiting factor in developing agency information sharing architectures. Without establishing an improved EA management foundation, including an ISE EA program management plan, the federal government risks limiting the ability of ISE agencies to effectively plan for and implement the ISE and more effectively share critical terrorism-related information. GAO recommends that in defining a road map for the ISE, the Program Manager ensure that relevant initiatives are leveraged, incremental costs are defined, and an EA program management plan is established that defines how EA management practices and content will be addressed. The Program Manager generally agreed with these recommendations.
PPACA included a number of provisions that changed requirements for small group health plans. For example, PPACA required that, beginning January 1, 2014, plans offer a set of minimum essential health benefits. PPACA also set standards for the percentage of total average costs that plans must cover for such benefits. The average costs covered by each plan are reflected in different plan levels, or tiers, and each tier is designated as bronze, silver, gold, or platinum. In addition, beginning on January 1, 2014, issuers are no longer able to consider the average health status of a particular group when setting premium rates and can only adjust premiums based on enrollment type (individual or family enrollment), geographic area, age, and tobacco use. Plans meeting these and other federal requirements, as well as other standards set by states, may be certified to be offered in an exchange; these plans are referred to as qualified health plans (QHPs). PPACA required all small group health plans to comply with these requirements as of January 1, 2014. However, in response to concerns regarding some issuers terminating plans that did not comply with PPACA requirements, CMS announced in November 2013 that it would provide transitional relief under which states could elect to permit issuers in their states to offer renewals of their noncompliant plans for a plan year beginning between January 1, 2014, and October 1, 2014, provided the plans met certain conditions. In March 2014, CMS extended this transitional policy through October 1, 2016, and noted that the agency may grant an additional 1-year extension, if necessary. PPACA also mandated the establishment of SHOPs in each state to allow small employers to compare available health insurance options in their states and facilitate the enrollment of their employees in coverage.
Until 2016, states have the option to define small employers either as employers with 100 or fewer employees or employers with 50 or fewer employees. To be eligible for SHOP coverage, a small employer must offer coverage to all full-time employees in a QHP through a SHOP. To be eligible to enroll in a QHP through a SHOP, an individual must have been offered health insurance coverage by a qualified employer through a SHOP. Under PPACA, beginning in 2016, small employers will be defined in all states as those with 100 or fewer full-time equivalent employees. Beginning in 2017, states may allow issuers of health insurance coverage in the large group market—issuers offering coverage to groups of 101 or more full-time equivalent employees—to offer QHPs through the SHOP and, in turn, will allow large employers to obtain coverage through the SHOP. Each SHOP must display the QHPs offered in the state by the participating issuers of health coverage. In addition, the benefits, cost-sharing features, and premiums of each QHP must be presented in a manner that facilitates comparison shopping of plans by small employers and their employees. Each SHOP must accept employer and employee applications through the SHOP website and may also accept applications over the phone, in person, or by mail. This application should collect the information necessary to screen an employer's eligibility for SHOP participation and identify employees eligible to enroll in a QHP. Employers and employees may receive assistance to compare coverage options and complete applications through a qualified insurance agent or broker. In general, when offering coverage through a SHOP, employers select a plan, which becomes the employer's reference plan. Employers also decide the percentage they will contribute to the premiums for employees who select that plan, referred to as a defined contribution.
Employees who have been offered a choice of plans are typically able to use the amount of the defined contribution for the reference plan when paying their premiums for a different SHOP plan. SHOPs are thus intended to facilitate broader employee choices among multiple plans across different tiers. According to CMS, employee choice is intended to be a fundamental new benefit of SHOPs, in that small employers would be able to offer multiple plans from more than one issuer of health coverage, whereas traditionally most small employers have offered only one or a few plans from a single issuer. SHOPs were initially required to have the capacity to allow employers to provide employee choice beginning in 2014. However, under a final rule issued in June 2013, the requirement that SHOPs offer employee choice was first postponed to 2015, although SB-SHOPs retained the option of providing employee choice in 2014. The requirement was further postponed under a May 2014 final rule to 2016 in states that could demonstrate that postponing employee choice would be in the best interest of small employers and their employees and dependents, given the likelihood that implementing employee choice could cause issuers to price their products and plans higher than they would otherwise due to issuers' beliefs about adverse selection. To provide an incentive for small employers to provide health insurance, and to make insurance more affordable, PPACA established a small business tax credit for certain eligible small employers offering coverage to their employees. The tax credit was available beginning in 2010, prior to the establishment of SHOPs. However, beginning in 2014, employers must offer coverage to their employees through the SHOP to be eligible for the credit. Beginning in 2014, employers are eligible for the credit for a maximum of 2 years.
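As a rough illustration of the defined-contribution mechanics described above, the arithmetic can be sketched as follows. The premiums, contribution percentage, and plan choices are hypothetical examples, not figures from this report; actual SHOP premium calculations also reflect the rating factors described earlier (enrollment type, geographic area, age, and tobacco use).

```python
# Hypothetical sketch of SHOP defined-contribution arithmetic: the employer
# sets a contribution as a percentage of the reference plan's premium, and an
# employee applies that dollar amount toward whichever SHOP plan they select.

def employee_monthly_cost(plan_premium, reference_premium, contribution_pct):
    """Return the employee's monthly share of the premium for a chosen plan.

    The employer's dollar contribution is fixed by the reference plan, so an
    employee picking a pricier plan pays the extra cost, and one picking a
    cheaper plan pays less (never below zero).
    """
    contribution = reference_premium * contribution_pct
    return max(plan_premium - contribution, 0.0)

# Example: the employer selects a reference plan at $400/month and agrees to
# contribute 50 percent of its premium ($200) for each enrolling employee.
reference_premium = 400.00
contribution_pct = 0.50

# An employee choosing a higher-tier plan at $450/month pays the remainder.
print(employee_monthly_cost(450.00, reference_premium, contribution_pct))  # 250.0
# An employee staying on the reference plan pays half its premium.
print(employee_monthly_cost(400.00, reference_premium, contribution_pct))  # 200.0
# An employee choosing a lower-tier plan at $350/month pays less.
print(employee_monthly_cost(350.00, reference_premium, contribution_pct))  # 150.0
```

The key design point is that the employer's cost is predictable (a fixed dollar amount per employee, pegged to the reference plan) even when employees spread across plans and tiers.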
CMS officials stated that as of September 2014, Nevada was planning to begin using the FF-SHOP platform in 2015, while Idaho was planning to begin using its own state-based platform that year. Though SHOPs were operational in all states as of June 1, 2014, many expected features were not yet available for a number of SHOPs. Enrollment for the SB-SHOPs, as of June 1, 2014, for most states, has been lower than expected, and CMS officials said they do not expect the enrollment trends for the FF-SHOPs to be significantly different, although they are still in the process of collecting enrollment data. Most SHOPs had multiple plans available in each county, though in some states there were a few counties with no plans available. Premiums varied across states, though were generally comparable to premiums for small group plans within the same state offered outside of the SHOPs. All of the FF-SHOPs and most of the SB-SHOPs were operational as required—that is, accepting enrollment applications—as of October 1, 2013. According to CMS, four of the SB-SHOPs—Hawaii, Maryland, Mississippi, and Oregon—were not operational as of the October 1, 2013, deadline, although all have since become operational. Hawaii became operational on October 15, 2013; Maryland became operational on April 1, 2014; and Oregon and Mississippi became operational on May 1, 2014. Websites where employers could review plan information, including premiums and benefits, were available on October 1, 2013, for all FF-SHOPs and most SB-SHOPs. This information allows employers and employees to make meaningful comparisons about available SHOP plans in their state. Plan information for the FF-SHOPs was provided by CMS through its website. According to CMS, on October 1, 2013, the SB-SHOPs in Maryland, Oregon, and Mississippi lacked websites where employers could review plan and premium information. Mississippi has since added plan and premium information to its website.
Oregon and Maryland have directed employers to contact agents and brokers or issuers to review plan options. According to CMS, most SB-SHOPs created online enrollment portals by October 1, 2013, though a handful of states—Maryland, Oregon, California, and Mississippi—did not have online enrollment portals available or had to take them offline, requiring employers to enroll directly through issuers. For example, the California SHOP initially offered online enrollment but took its enrollment portal down in February 2014 due to technical challenges, leaving small businesses in California able to enroll in SHOP plans only through direct enrollment. Online enrollment for the Mississippi SHOP began when it became operational in May 2014, while Maryland and Oregon have yet to implement online enrollment for their SHOPs. CMS did not implement online enrollment in the FF-SHOPs in 2014. As a result, employers enrolling in any of the FF-SHOPs, starting in October 2013, had to enroll in SHOP coverage either through agents and brokers or directly through issuers. CMS is currently preparing to implement online enrollment for the FF-SHOPs for 2015, and expects to launch online enrollment fully in all FF-SHOPs by November 15, 2014, when SHOP enrollment begins for 2015. The online enrollment system will provide, among other functions, enhanced features for agents and brokers, notification to employees of their employers' annual open enrollment period, online employer payments, transmission of enrollment and payment transactions to issuers, and the processing of coverage changes. According to CMS, 15 SB-SHOPs offered employee choice in 2014 through a variety of approaches, though employee choice was delayed for the FF-SHOPs until 2015. These approaches included enabling employers to offer a choice of plans across all metal tiers and all issuers; a choice of plans across one metal tier but for multiple issuers; or a choice of plans from one issuer but across multiple metal tiers.
Some states allowed the employer to choose which employee choice model to use, while other states only offered one approach. Four states, including California, required that employers offer their employees a choice of plans, while others, including Rhode Island and Kentucky, gave employers the option of choosing one plan or offering wider plan choice to their employees. The majority of enrolled employers in SB-SHOP states where data was available took advantage of the employee choice feature. For example, exchange officials in Kentucky and Rhode Island said that approximately 65 and 61 percent of enrolled employers, respectively, decided to offer their employees a choice of plans. In Rhode Island, in cases where employers offered the choice of any plans through the SHOP, just over 50 percent of their employees chose the reference plan the employer had selected, 14 percent selected a different plan within the same tier, 13 percent purchased a lower metal tier—or less expensive—plan than the reference plan, and 21 percent purchased a higher metal tier—or more expensive—plan than the reference plan. CMS officials said that two additional SB-SHOPs, Colorado and New York, reported that a majority of employers decided to offer their employees a choice of plans. However, three SB-SHOPs did not offer employee choice in 2014: Maryland, Massachusetts, and Oregon. Massachusetts was unable to offer employee choice because its online system for employee choice is still in development, according to CMS officials. Maryland and Oregon were unable to offer employee choice because SHOP enrollment was only available through direct enrollment, according to CMS and state officials, respectively. SB-SHOP plans had enrolled approximately 76,000 individuals—including employees, spouses, and dependent children—into plans purchased through 11,742 small employers, as of June 1, 2014 for most states, with end dates ranging from May to September 2014. 
Enrollment varied widely among the 18 states with SB-SHOPs, from 33,696 individuals (purchased through 3,580 small employers) in Vermont, to 1 individual (purchased through 1 small employer) in Mississippi. (See fig. 1.) Based on the average number of employees who enrolled in each state per small employer, it appears that the employer groups that enrolled in the SHOPs generally had few employees, particularly given that states allowed employers with as many as 50 employees to enroll in the SHOPs in 2014. Overall, the average number of employees per employer was 3.7, although the average number of employees enrolled per employer in each state varied. Employers that enrolled in Utah had the largest average number of employees enrolled per employer, 8.3, while New York had the smallest average number of employees enrolled per employer, 1.6. (See app. II for additional details on SB-SHOP enrollment.) Based on official estimates and stakeholders' expectations, SB-SHOP enrollment—as of June 1, 2014, for most states, with end dates ranging from May to September 2014—was significantly lower than anticipated and, at its current pace, is unlikely to reach expectations by the end of 2014. For example, official estimates projected that 2 million employees would enroll in coverage through the SB-SHOPs and FF-SHOPs in 2014, with the number of enrollees rising to 3 million in 2015 and leveling off at 4 million enrollees by 2017. In general, stakeholders we spoke with said that SHOP enrollment has been low, often lower than anticipated. For example, officials from the three SB-SHOPs we spoke to all said that enrollment has been low, with officials from two of the states indicating enrollment was lower than expected. Officials from the third state said challenges related to implementation and the lack of resources for marketing the SHOP had already lowered their expectations for enrollment, though they acknowledged that enrollment was generally low.
Further, other stakeholders, including issuer, employer, and agent and broker representatives, also said that enrollment to date has been lower than anticipated. CMS officials cautioned against inferring future enrollment trends from the partial-year enrollment data for 2014. Officials said that employers may enroll in the SHOPs at any point in the year, unlike individuals pursuing coverage in the individual exchanges, which have limited open enrollment periods. According to the officials, more employers will likely become eligible for SHOP coverage in later months, when their existing non-SHOP plans end. Enrollment data for the FF-SHOPs was not yet available, though CMS officials reported that the agency was in the process of collecting the data from issuers. According to officials, because CMS was not ultimately prepared to implement online enrollment, it has had to require each issuer involved in an FF-SHOP to manually report enrollment data. This data reporting role for issuers had not originally been anticipated, so CMS has had to work with issuers to develop protocols to submit the data. CMS officials said that they are working on a system through which issuers can report their 2014 SHOP enrollment data, and that they expect to have initial data by fall of 2014 but will not have complete data for 2014 until early 2015. However, CMS officials said they do not have reason to expect major differences in enrollment trends for 2014 between the SB-SHOPs and the FF-SHOPs. Beginning in 2015, CMS officials said they plan to have online enrollment that will likely facilitate the more timely and accurate collection of enrollment data. In nearly all states, multiple issuers offered multiple plans in the SHOPs in 2014. The total number of participating issuers and plans in each state varied widely, from 1 to 13 issuers and 3 to 320 plans. 
Forty-five states had more than one issuer participating in their SHOP, 31 states had 3 or more participating issuers, and each issuer offered, on average, 12 plans in each rating area. In looking at silver-tier plans specifically, we found that most states had at least one silver-tier plan available in each county, though New York, Washington, and Wisconsin had counties where no silver-tier plans were available. Further, most states had at least two silver-tier plans available in each county. When looking at the total number of silver-tier plans, Washington, D.C., offered the most, with 89, while Arkansas, New Hampshire, and West Virginia each only had one silver-tier plan available. Regarding issuer participation in the SHOPs, we found that just over half of states offered silver-tier plans from two or more issuers in each county. Maryland had the most issuers offering silver-tier plans in its SHOP, with 13, though six states—Arkansas, Mississippi, New Hampshire, North Carolina, Washington, and West Virginia—had only one issuer offering silver-tier plans. The type of plan with the highest enrollment also varied across SB-SHOP states, as did the proportion of employees enrolling in these plans. The highest enrollment plans in each state were most often gold-tier. The plan with the highest enrollment was a gold-tier plan in seven states, a silver-tier plan in five states, and a platinum-tier plan in five states. The proportion of individuals enrolled in the highest enrollment plan ranged from approximately one-fourth of enrollees in California, Connecticut, Kentucky, and Vermont to less than 10 percent in Minnesota, New Mexico, New York, and Utah. (See app. II for additional details on the highest enrollment SB-SHOP plans.) Premiums for silver-tier plans varied within and across states, though no clear patterns emerged. 
Monthly silver-tier plan premiums for enrollees aged 21 ranged widely from $138 for the least expensive Hawaii plan to $523 for the most expensive Alaska plan, with the median plan costing $262. For enrollees aged 40, the monthly premiums varied from $176 for the least expensive Hawaii plan to $669 for the most expensive Alaska plan, with the median plan costing $335. Finally, for enrollees aged 60, the monthly premiums varied from $375 for the least expensive Hawaii plan to $1,421 for the most expensive Alaska plan, with the median plan costing $711. The differences between the premiums of the most expensive and least expensive silver-tier plans within a given state also varied widely. Arizona had the largest difference, with the most expensive plan costing almost three times as much as the least expensive plan. North Carolina had the least disparate silver-tier plan premiums, with the most expensive plan costing only approximately 5 percent more than the least expensive plan. (See app. III for additional details about SHOP premium variation across states.) We focused our analysis of plan premiums on silver-tier plans. In the three states where we compared SHOP premiums to non-SHOP small group market premiums in the state, we found that premiums for silver-tier plans were generally comparable. PPACA requires that prices for identical plans within a given state be the same, regardless of whether plans are offered on or off the SHOP. Stakeholders we interviewed also said that SHOP premiums were generally comparable to premiums for plans outside of the SHOPs. As previously noted, PPACA requires that small group plans meet a number of requirements. These requirements limit the overall variability between PPACA-compliant plans offered within or outside of the SHOPs, including variation in premiums. Stakeholders we interviewed reported that the primary incentive for employers to use the SHOPs has been the small business tax credit. 
However, stakeholders identified several factors that may have hindered enrollment, thus leading to current low SHOP enrollment. Stakeholders also described factors that may help stimulate or detract from SHOP enrollment in the future. Many stakeholders, including issuer, employer, and agent and broker representatives we interviewed, reported that the primary incentive for employers to use SHOPs has been the small business tax credit. Employers must generally purchase coverage through a SHOP and meet certain other criteria, including having fewer than 25 employees, to be eligible for the credit, which they may receive for a maximum of 2 years beginning in 2014. Most employer group representatives reported that those small employers that were interested in and taking steps to enroll in the SHOPs were largely doing so in order to be eligible for the tax credit. Exchange officials in Kentucky also reported that the tax credit has likely been an important incentive for small employers enrolling in the state’s SHOP, and that most employers that had enrolled as of April 2014 had fewer than 25 employees, indicating that they may have been pursuing the credit. Similarly, as discussed previously, we found that the average number of enrolled employees in SB-SHOPs ranged from 1.6 to 8.3 employees, suggesting that many enrolled employers may have been eligible for the credit. However, several stakeholders noted that the tax credit is too small and administratively complex to motivate many small employers to enroll. CMS officials and one employer group representative noted that the temporary nature of the tax credit—that is, the fact that employers may receive the credit for only 2 years beginning in 2014—may deter some employers from offering coverage for the first time through the SHOP to obtain the credit. This is consistent with our prior work, which revealed low use of the credit even prior to the establishment of the SHOPs. 
In 2012, we reported that the take-up of the small business tax credit in tax year 2010, the first year the credit was offered, was much lower than the estimated number of eligible employers. According to tax preparers and other stakeholders we interviewed for that work, small employers likely did not view the credit as a sufficient incentive to begin offering health insurance, particularly given the complexity of, and time required to claim, the credit. Although the small business tax credit may have led some employers to enroll in the SHOPs, stakeholders identified several other factors that may have hindered enrollment, thus leading to current low SHOP enrollment. Delays in key SHOP features. Stakeholders, including representatives of national employer, agent and broker, and insurance commissioner groups, said that the delays in implementation of online enrollment and employee choice in the FF-SHOPs may have hindered SHOP enrollment. These key features, which have also been delayed in certain SB-SHOPs, are not typically available to small employers purchasing coverage through other means. According to stakeholders, until these key features are implemented, employers may not have as much incentive to enroll in coverage through the SHOP. Limited awareness of and misconceptions about SHOP availability. Many stakeholders, including state exchange officials and national- and state-level agent, broker, and employer representatives, reported a lack of employer awareness of the ability to enroll in SHOP plans beginning October 1, 2013, largely due to misconceptions about whether the SHOPs were open for enrollment and a lack of outreach by states and CMS. Stakeholders said that media reports announcing delays in certain SHOP features—in particular, the delays in FF-SHOP online enrollment and employee choice—led many employers to assume that the overall implementation of SHOPs was delayed and that enrolling in plans was not yet possible. 
In addition, stakeholders said that low awareness stemmed from a federal and state emphasis on highlighting the availability of the individual exchanges. For example, exchange officials from one SB-SHOP state noted that employer awareness of the SHOP remained low in part because the state initially focused outreach and marketing efforts on the individual exchange. However, exchange officials from this and another SB-SHOP state reported that with the end of their individual exchanges’ open enrollment periods, the states are now focusing outreach and marketing efforts on their SHOPs, which must provide for rolling enrollment. Renewal of existing, noncompliant plans. The majority of stakeholders—including national-level groups as well as stakeholders representing four of the five states included in our study—said that the ability for employers to renew their existing, non-PPACA compliant plans may have limited SHOP enrollment. National-level employer and agent and broker groups we interviewed said that most small employers chose to renew their existing plans in states where this was permitted, in part due to a general preference for the status quo, as well as other factors, such as concerns about potential premium increases associated with new plans. Kentucky, California, and Illinois exchange officials, as well as CMS officials who participated in the implementation of the SHOP in Illinois and Texas, said that the renewal of these plans may have limited SHOP enrollment in these states. Technical challenges and administrative burden. Some stakeholders said that the ongoing technical challenges and administrative burden associated with many of the SHOPs have served as a barrier to entry for employers, in part by discouraging some agents and brokers from recommending the SHOP to employers. 
National- and state-level employer group representatives reported hearing from small employers that they have avoided SHOPs due to technical challenges with SHOP websites, as well as administrative burdens, such as difficulty reaching customer service and, in some cases, the need to send application paperwork by mail. State exchange officials reported that they worked closely with agents and brokers to establish their SB-SHOPs, particularly given that most small employers have traditionally relied on agents and brokers when purchasing coverage for their employees. However, state exchange officials and other stakeholders, including CMS officials, noted that agents and brokers still faced challenges associated with using SHOPs. These challenges included, in some cases, poor or inaccessible customer service for brokers; poor training for brokers on SHOP requirements; the extra time required to explain SHOP requirements to clients; challenges receiving compensation; and the lack of a dedicated broker “portal” on some SHOP websites that would allow brokers to set up and help manage accounts for their clients. According to stakeholders, challenges such as these have led some agents and brokers to avoid recommending that their small employer clients use the SHOP. Despite the various factors that may have restrained SHOP enrollment to date, many stakeholders noted that certain other factors suggest that the SHOPs have the potential to experience future enrollment growth. According to some stakeholders, central to enrollment growth will be the phasing out of noncompliant plans, the resolution of the technical challenges and reduction of the administrative burden cited as hampering current enrollment, and the demonstration of a “value proposition” that gives employers a reason for preferring SHOP-based coverage to coverage available outside the SHOP. Stakeholders suggested several additional factors that could help stimulate future SHOP enrollment growth. Improved coordination with agents and brokers. 
Stakeholders, including CMS officials, state exchange officials, and issuer, employer, and agent and broker representatives, emphasized the importance of coordinating with and providing improved web- or phone-based tools to agents and brokers in order to facilitate SHOP enrollment. States and CMS reported taking steps to resolve certain challenges faced by agents and brokers when using the SHOP. For example, Kentucky exchange officials said that they are working to develop a tool that will allow agents and brokers to easily provide price quotes across multiple SHOP plans to their clients, and are considering allowing SHOP-certified agents and brokers to initiate applications on behalf of employers. Illinois exchange officials reported developing a dedicated section for agents and brokers on the state’s SHOP website through which agents and brokers can obtain updated information on the SHOP, in response to feedback from agent and broker community leaders. CMS officials said that establishing a broker portal for the FF-SHOPs is a key agency priority, and that the agency plans to have a broker portal in place when FF- SHOP online enrollment becomes available in fall 2014. According to CMS officials, the portal will, among other functions, allow agents and brokers to search for and communicate with employer clients; monitor employees’ enrollment progress; make changes to employee rosters; and receive messages regarding employers’ monthly invoices, including any late payment warnings. Availability of employee choice. Some stakeholders stated that the employee choice feature, when fully implemented in all states, will be a key value proposition for the SHOPs. For instance, CMS officials said that employers will likely value being able to offer employees a choice from among multiple plan and issuer options—an ability that small employers typically have not been able to offer. 
Employer group representatives reported that their members consider employee choice to be an important benefit of the SHOPs, as employees will be able to decide on their own the coverage that best suits their needs and, if necessary, will have the option to spend more to purchase more comprehensive plans. Evidence from Kentucky and Rhode Island, whose SHOPs offer, but do not require, the use of employee choice, further suggests that employers may value this feature. As discussed previously, according to state exchange officials, the majority of employers in Kentucky and Rhode Island that enrolled in the SHOPs chose to offer their employees the choice of multiple plans. However, some issuer representatives and other stakeholders were uncertain about the value of employee choice, noting that it is challenging for issuers to implement and that too many choices may be overwhelming for employers and employees. Representatives from national issuer and insurance commissioner groups reported that it is time consuming and expensive for issuers to build the information technology systems required for premium aggregation and other issuer-specific functions necessary for employee choice. In addition, issuers and other stakeholders have reported concerns regarding whether employee choice would lead to adverse selection among plans in the SHOP—a concern that has, in part, led some states to delay their implementation of employee choice until 2016. Increased marketing to employers. Some stakeholders said that states need to better market the SHOP to small employers to increase SHOP awareness. Some employer group representatives said states need to improve outreach by more aggressively targeting small employers and highlighting the value of the SHOPs in their marketing. 
Exchange officials in one state emphasized the importance of marketing the SHOPs to small employers as a product that offers value, rather than performing traditional outreach, which is more characteristic of public programs. The officials also noted that marketing must be conducted continuously throughout the year, given that employers renew their coverage at different points in the year. Robust issuer participation. Although stakeholders representing four of the five states included in our study reported that issuer participation has not been a challenge, CMS officials said that robust issuer participation will be important in ensuring the SHOPs’ long-term viability. Issuer representatives noted that, due to the requirement that certain issuers must participate in a state’s FF-SHOP if they wish to participate in its federally facilitated individual exchange, some issuers will be required to participate in the SHOPs. However, according to the representatives, other issuers may be hesitant to participate given factors such as uncertainties regarding delays in SHOP functionality and technical readiness; potential new requirements, such as those related to the adequacy of provider networks; the expense and complexity of implementing employee choice and other SHOP features; and the ability for employers to renew noncompliant plans. CMS officials said that although some issuers may be reluctant to participate in the FF-SHOPs in the early years of implementation, once information technology systems have been fully developed and refined, issuers may be more eager to participate. Expansion of the SHOPs to larger employers. Exchange officials in one state noted that as eligibility for SHOP enrollment expands to employers with up to 100 employees—which must occur no later than January 1, 2016—and, eventually, to larger employers in some states, additional employers may consider the SHOP as an option for purchasing coverage. 
However, based on the small average number of employees per employer enrolled in SB-SHOPs, it remains to be seen whether larger employers will enroll when given the opportunity. Financial sustainability of the SHOPs. Exchange officials in one state noted that an essential element of SHOP viability will be ensuring the SHOPs are financially sustainable. They and other SB-SHOP exchange officials we interviewed said that their states have proposed or finalized funding mechanisms. In two of the states, these funding mechanisms will draw from either all exchange plans—both individual and SHOP—or all health plans in the state; therefore, low initial SHOP enrollment is not likely to significantly affect SHOP operating revenues in those states. However, one stakeholder expressed concern regarding whether SHOPs will be sustainable in the long run if enrollment remains low, particularly given the expense required to maintain the SHOPs’ information technology systems. Stakeholders also described factors whose future effects on SHOP enrollment are more uncertain or have the potential to detract from SHOP enrollment growth in the long term. Loss of the tax credit. As noted previously, stakeholders and our analysis of SB-SHOP enrollment data suggested that many employers currently enrolling in the SHOP may be eligible for the tax credit. However, employers may only receive the tax credit for a maximum of 2 years. It therefore remains to be seen whether employers will continue to purchase coverage through the SHOP once they have exhausted their ability to receive the tax credit, and how this will affect overall SHOP enrollment. One employer representative suggested that an extension of the credit, or a redesign of the credit such that larger businesses are eligible, will help ensure SHOP enrollment growth moving forward. Comparability of prices on and off the SHOP. 
Issuer representatives and other stakeholders noted that prices for SHOP plans are likely to remain similar to prices for non-SHOP small group plans, which may limit the incentive for small employers to enroll in coverage through the SHOP. According to stakeholders, small employers’ coverage decisions are largely driven by price. However, as we noted previously, premiums are currently similar for plans offered on and off the SHOP in part due to the requirement that prices for identical plans in a given state be the same, regardless of whether they are offered on or off the SHOP. In order for issuers to offer more competitive prices through the SHOPs, they must offer unique, SHOP-only plans that are lower in price when compared to non-SHOP options. However, according to issuer representatives, there are limited mechanisms by which issuers could do so. As PPACA requires that all plans offer a set of minimum essential health benefits, issuers are limited in the extent to which they can lower prices by restricting the benefits they offer in SHOP plans. In addition, though issuers could lower prices for SHOP plans by offering narrower provider networks, issuer representatives cited the high administrative costs of creating and maintaining new networks as a deterrent. Competition with private exchanges. Some stakeholders, including issuer, agent, and broker representatives, as well as exchange officials from one state, reported that private exchanges for small group coverage—or online health coverage marketplaces managed by private companies, such as issuers or benefits consulting firms—are becoming more prevalent and may compete with SHOPs for employer enrollment. According to some agent and broker representatives, private exchanges may appeal to employers because, in some cases, they offer employee choice—a key value proposition of the SHOP that has not yet been implemented in all states—without many of the requirements associated with the SHOP. 
Exchange officials from one state said that recently created private exchanges in that state have been able to spend more on advertising, which has made it difficult for the SHOP to compete. However, the officials noted that the SHOPs provide increased value in that they offer full transparency and choice of SHOP plans—whereas some private exchanges, despite claiming to offer full choice of plans, simply offer plans from the carrier operating the exchange. Possibility of sending employees to the individual exchanges. Exchange officials from two states and some agent and broker representatives reported that some small employers have chosen, or may choose in upcoming years, to drop coverage for their employees altogether, particularly in light of the availability of premium and cost- sharing assistance for eligible low- and moderate-income individuals obtaining coverage through an individual exchange. CMS officials said they have heard anecdotal information that this may be occurring, but have no data. Several recent employer surveys have found that, while the majority of surveyed small employers were not considering dropping coverage for their employees, a minority were intending to or considering whether to drop coverage and, in some cases, direct their employees to the individual exchanges in 2014 or 2015. Potential for adverse selection. If it were to occur, adverse selection between SHOP and non-SHOP plans could lead to increased SHOP premiums and thus inhibit SHOP enrollment in the long term. However, issuer representatives said that the similarity in premiums seen thus far has diminished concerns about such adverse selection. The issuer representatives said a greater concern is the risk for adverse selection between PPACA-compliant plans—that is, both the SHOP and non-SHOP plans that comply with PPACA’s insurance reforms—and the existing, noncompliant plans that have been renewed. 
This risk may be temporary, as CMS’s current transitional relief policy, under which noncompliant plans may continue to be offered, permits issuers of such plans to offer renewals of these plans only for plan years beginning on or before October 1, 2016. In addition, PPACA established mechanisms to mitigate adverse selection if it does occur among PPACA-compliant plans, and stakeholders said it will be at least 1 year, if not several years, before it becomes clear if adverse selection is occurring in the SHOPs, as well as how well these mechanisms will mitigate its effect. The SHOPs are an important element of PPACA, intended to provide a new mechanism by which small employers can shop for and purchase health insurance coverage for their employees and to offer features not typically available to the employees of small employers, such as the ability to choose among multiple health plans. While much progress has been made by CMS and states to ensure all SHOPs are now operational, early evidence suggests enrollment is significantly lower than anticipated amid the delayed availability of key functions among many SHOPs and misconceptions by employers about the availability of SHOPs. CMS officials and other stakeholders point to structural factors to help explain the current low enrollment—such as the temporary ability for employers in many states to renew their existing plans even if they do not comply with PPACA insurance reforms—and suggest reasons for optimism about future SHOP enrollment trends. They point to such factors as the phase- out of the noncompliant plans, the expected availability of online enrollment and employee choice functions in many more SHOPs, and intended CMS or state efforts to improve SHOP awareness and coordination with agents and brokers. 
Nevertheless, other factors may temper such optimism, such as the loss of the small business tax credit for some employers, the potential for adverse selection, and the challenge SHOPs may face in competing with plans offered to small employers outside of the SHOPs. These collective factors will vary across states and continue to evolve, suggesting that a determination of the long-term impact of the SHOPs remains premature at this time. We received comments from HHS on a draft of this report (see app. V). HHS described steps it is taking to improve the SHOP program based on lessons learned from the first year of operation and emphasized the future role SHOPs could play to produce more competition in the small group health insurance markets as the SHOPs improve and mature. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact John E. Dicken at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Vermont does not allow variation in plan premiums based on age. The West Virginia SHOP had only one plan available. In addition to the contact name above, Randy DiRosa, Assistant Director; Priyanka Sethi Bansal; Sandra George; Eagan Kemp; Laurie Pachter; and Kate Tussey made key contributions to this report.
The Patient Protection and Affordable Care Act required SHOPs—exchanges, or marketplaces, where small employers can shop for health coverage for their employees—to be established in all states. States may elect to establish and operate SHOPs themselves or allow CMS to do so within the state. Enrollment in SHOPs was to begin in October 2013, with coverage effective as early as January 2014. GAO was asked to examine the early implementation experiences of the SHOPs. In this report GAO describes (1) SHOP functionality, enrollment, plan availability, and premiums and (2) stakeholders' views on key factors that have affected current SHOP enrollment or may affect future enrollment growth. GAO reviewed relevant information from CMS and states, including data on employer and employee enrollment, plan availability, and premiums generally through June 1, 2014. GAO also interviewed representatives of key stakeholders that operate SHOPs (CMS and states), offer coverage in SHOPs (health insurance issuers), obtain coverage through SHOPs (small employers), or assist in obtaining coverage through SHOPs (agents and brokers) on a national basis and, for certain stakeholders, in five states—California, Illinois, Kentucky, Rhode Island, and Texas. The five states were selected based on factors including varied issuer participation levels and SHOP functionality. The experiences of these stakeholders cannot be generalized to other states or stakeholders. GAO incorporated HHS comments on a draft of this report as appropriate. Though all of the Small Business Health Options Programs (SHOPs) required by the Patient Protection and Affordable Care Act were operational, many features were not yet available and enrollment was low as of June 2014. 
According to the Centers for Medicare & Medicaid Services (CMS), the agency that oversees the SHOPs, all 33 of the SHOPs run by CMS (federally facilitated, or FF-SHOPs) and 14 of the 18 SHOPs run by states (state-based, or SB-SHOPs) were accepting enrollment applications as of the October 1, 2013, deadline. The remaining 4 SB-SHOPs became operational by the following May. Websites where employers could review plan information such as premiums and benefits were available on October 1, 2013, for all FF-SHOPs and most SB-SHOPs. Other key SHOP features—online enrollment and employee choice, the ability for employees to choose among multiple plans—were delayed for all FF-SHOPs, but available for most of the SB-SHOPs. CMS is currently preparing to implement online enrollment for all FF-SHOPs and employee choice for many of the FF-SHOPs for 2015. Based on official estimates and stakeholders' expectations, enrollment for the SB-SHOPs has been significantly lower than expected. The 18 SB-SHOPs had enrolled about 76,000 individuals—including employees, their spouses, and dependent children—in plans purchased through nearly 12,000 small employers, as of June 1, 2014, for most states. Enrollment data for the FF-SHOPs was not yet available, although CMS was in the process of collecting the data from issuers and expected to have complete data by early 2015. However, CMS officials said they do not expect major differences in enrollment trends for 2014 between SB-SHOPs and FF-SHOPs. Finally, most SHOPs had multiple plans available in each county, although a small number of states had counties with no plans available. Premiums for SHOP plans varied across states and were generally comparable to premiums for other small group plans offered within a state but outside of the SHOP. Stakeholders identified several factors that may have led to current low SHOP enrollment and that may affect future enrollment growth. 
Many stakeholders reported that the primary incentive for employers to use the SHOPs has been the small business tax credit available to eligible employers who offer coverage through a SHOP, although some noted that the credit may be too small and administratively complex to motivate many employers to enroll. Other factors identified that may have hindered current enrollment include the ability of employers to renew plans that existed before the SHOPs—which, depending on state requirements, is permitted until October 1, 2016—and employer misconceptions about SHOP availability. Stakeholders also described factors that may help stimulate or detract from future SHOP enrollment growth. For example, the phase-out of existing pre-SHOP plans, the implementation of employee choice by an increasing number of SHOPs, improved coordination with agents and brokers, and increased marketing to small employers may help stimulate enrollment growth. Conversely, other factors, such as the 2-year limit on the availability of the small business tax credit and the likelihood, according to stakeholders, that SHOP premiums will not be lower than non-SHOP premiums, may hinder future enrollment growth. The evolving and localized nature of these factors suggests that a determination of the SHOPs' long-term impact remains premature at this time.
As required by law, each reserve component is to make qualified personnel available for active duty in the armed forces in time of war or national emergency and at such other times as national security requires. With this requirement comes the responsibility of each reserve component to provide personnel who are medically and physically fit for active duty. As noted in DOD guidance, fitness specifically includes the ability to accomplish the tasks and duties unique to a particular operation and the ability to tolerate the environmental and operational conditions of the deployed location, including the wear of protective equipment. DOD reserve components include the Army Reserve, the Army National Guard, the Air Force Reserve, the Air National Guard, the Navy Reserve, and the Marine Corps Reserve. Reserve forces consist of three categories: the Ready Reserve, the Standby Reserve, and the Retired Reserve. The Ready Reserve had approximately 1.1 million National Guard and Reserve members at the end of fiscal year 2004, and its members were the only reservists subject to involuntary mobilization under the partial mobilization authorized by President Bush following the attacks of September 11, 2001. Within the Ready Reserve, there are three subcategories: the Selected Reserve, the Individual Ready Reserve, and the Inactive National Guard. Members of all three subcategories are subject to involuntary mobilization under a partial mobilization, but routine medical and physical fitness policies apply primarily to the Selected Reserve, which consisted of about 850,000 members at the end of fiscal year 2004. DOD administers medical examinations to military personnel for various reasons and at different intervals. These include examinations at accession, at mobilization, for special duty assignments, and at separation and retirement.
The examinations that are routinely required for Selected Reserve members to ensure ongoing medical and physical fitness include two prescribed by federal statute and two prescribed by DOD regulations and policy. Compliance with these routine requirements is the first step toward determining who is fit for duty. Federal statute prescribes that each member of the Selected Reserve who is not on active duty is required to: be examined as to the member’s physical (medical) fitness every 5 years, or more often as the respective Secretary considers necessary; and complete an annual certificate of medical condition. DOD policy prescribes that each member of the Selected Reserve: receive an annual dental examination; and be evaluated annually for physical fitness for duty, to include an assessment of aerobic capacity, muscular strength, muscular endurance, and desirable fat composition. Within the constraints of the existing mobilization authorities and DOD guidance, the services have flexibility as to how, where, and when they conduct mobilization processing. As a result, the services differ in how they mobilize—and consequently how they medically screen—members upon notification that a unit or individual will be called to active duty. The Army and Navy use centralized approaches, mobilizing their reserve component forces at a limited number of locations. The Army uses 15 primary sites that it labels “power projection platforms” and 12 secondary sites called “power support platforms.” The Navy has 15 geographically dispersed Navy Mobilization Processing Sites but is currently using only 5 of these sites because of the relatively small numbers of personnel who are mobilizing. By contrast, the Air Force uses a decentralized approach, mobilizing its reserve component members at their home stations—135 for the Air Force Reserve and 90 for the Air National Guard—where all medical screening is performed. The Marine Corps uses a hybrid approach.
It has five Mobilization Processing Centers to centrally mobilize individual reservists and is currently using three of these centers. However, the Marine Corps uses a decentralized approach to mobilize its units. Selected Marine Corps Reserve units do most of their mobilization processing at their home stations, including medical screening, and then report to their gaining commands. Within the Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Assistant Secretary of Defense for Health Affairs is responsible for developing medical policies and processes; the Principal Deputy to the Under Secretary oversees the Office of Morale, Welfare, and Recreation, which develops physical fitness policies; and the Office of the Assistant Secretary for Reserve Affairs serves in an advisory capacity to the Under Secretary to determine how the reserve components can better implement these requirements. Each service’s Assistant Secretary for Manpower and Reserve Affairs provides force management policy for both the active and reserve components. It is then the responsibility of each National Guard and Reserve command—the Chief, Army Reserve; the Director of the Army National Guard; the Chief of the Navy Reserve (Commander of Navy Reserve Forces); the Commander of Marine Corps Reserve Forces; the Chief of the Air Force Reserve; and the Director of the Air National Guard—to ensure that the policies for medical and physical fitness examinations are properly implemented within their respective commands. Each National Guard and Reserve unit commander is responsible for ensuring that the members under his or her command are provided routine medical and physical examinations in a timely manner, and for identifying and processing members who are not medically qualified or physically fit for active duty.
The reserve component member is responsible for meeting scheduled medical examination requirements, obtaining any recommended follow-up medical and dental care from his or her personal (civilian) medical provider, and truthfully reporting any changes in his or her medical or dental condition to military unit commanders and military medical personnel. Upon mobilization, responsibility for the medical and physical fitness of reserve component members transfers to their active duty counterparts. Several studies identified medical issues with the reserve component members called to duty for Operations Desert Storm and Desert Shield. A 1991 Army Inspector General report estimated that as many as 8,000 reserve component personnel were found to be medically nondeployable upon arrival at mobilization stations. Even though all but 1,100 eventually deployed, the nondeployable soldiers disrupted the mobilization process because units had to undergo extensive efforts to replace them with members who could be deployed. The report also noted that some soldiers who had had coronary bypass surgery, cancer, or amputations had not been identified at their home stations before they reported to their mobilization stations. In 1991, we reported that medical screenings conducted at mobilization stations identified numerous problems that impaired soldiers’ ability to deploy, including ulcers, chronic asthma, spinal arthritis, hepatitis, seizures, and diabetes. In 1992, we reported that because many medical personnel were found nondeployable for various reasons, including medical reasons, many units deployed with medical personnel shortages and were not fully mission capable upon arrival in the Persian Gulf. For example, two reserve component surgeons—one who was unable to stand for more than 30 minutes and another who had Parkinson’s disease—reported for duty but were unable to deploy because of their conditions. A 1992 Sixth U.S.
Army Inspector General report stated that many soldiers deployed to Southwest Asia had to return to the United States because of medical conditions that had not been previously diagnosed. This report noted that home unit commanders were not identifying soldiers with severe medical problems, some permanent, to determine whether they were medically fit to perform their duties and job assignments before deploying. In 1994, we conducted a comprehensive review of the medical and physical fitness policies for reserve component members serving in Operations Desert Storm and Desert Shield and found that at one Army mobilization station nearly 4 percent of the reserve component members reporting for duty had serious medical conditions, including cancer and heart disease. One soldier had double kidney failure, one had muscular dystrophy, and another had a gunshot wound to the head. We found that DOD medical policy, which permits the services to retain nondeployable reservists, was inconsistent with a military strategy that requires forces to be capable of responding quickly to unexpected military contingencies anywhere in the world. We recommended that DOD revise the policy that allows members not to be worldwide deployable, but DOD disagreed and did not take action. We also found that DOD was not aware of the physical fitness problems because the services were not reporting fitness information as DOD required, and we recommended that DOD revise its directive to require the services to report on their members’ physical fitness status. DOD concurred with this recommendation and agreed to take action. Other related GAO products are found at the end of this report.
Section 1074f of Title 10, United States Code, requires that the Secretary of Defense establish a system to assess the medical condition of members of the armed forces (including members of the reserve components) who are deployed outside of the United States or its territories or possessions as part of a contingency operation or combat operation. It further requires that records be maintained in a centralized location to improve future access to records, and that the Secretary establish a quality assurance program to evaluate the success of the system in ensuring that members receive pre- and postdeployment medical examinations and that recordkeeping requirements are met. DOD policy requires that the services collect pre- and postdeployment health information from their members and submit copies of the forms that are used to collect this information to the Army Medical Surveillance Activity (AMSA). Initially, deployment health assessments were required for all active and reserve component personnel who were on troop movements resulting from deployment orders of 30 continuous days or greater to land-based locations outside the United States that did not have permanent U.S. military treatment facilities. However, on October 25, 2001, the Assistant Secretary of Defense for Health Affairs updated DOD’s policy and required deployment-related health assessments for all reserve component personnel called to active duty for 30 days or more. The policy specifically stated that the assessments were to be done “whether or not the personnel were deploying outside the United States.” Both assessments use a questionnaire designed to help military health care providers identify health problems and provide needed medical care. The predeployment health assessment is generally administered at the service mobilization site or unit home station before deployment.
On February 1, 2002, the Chairman of the Joint Chiefs of Staff issued updated deployment health surveillance procedures. Among other things, these procedures specified that active and reserve component personnel must complete or revalidate the health assessment within 30 days prior to deployment. The procedures also stated that the original completed health assessment forms were to be placed in the military member’s permanent medical record and a copy “immediately forwarded to AMSA.” Both forms include demographic information about the servicemember, member-provided information about the member’s general health, and information about referrals that are issued when service medical providers review the health assessments. The predeployment assessment also includes a final medical disposition that shows whether the member was deployable. In September 2003, we reported that DOD did not maintain a complete, centralized database of health assessments and immunizations for the Army and Air Force active components. Following our 2003 review, DOD established a deployment health quality assurance program to improve data collection and accuracy. The department’s first annual report documenting issues relating to deployment health assessments was issued in May 2005. In September 2004, we reported similar findings for the reserve components. We reported that DOD’s ability to effectively manage the health status of its reserve component members is limited because its centralized database has missing and incomplete health records and because DOD has not maintained full visibility over reserve component members with medical problems. For example, the Marine Corps did not send predeployment health assessments to DOD’s database as required because of unclear guidance and a lack of compliance monitoring. The Air Force had visibility of involuntarily mobilized members with health problems but lacked visibility of members with health problems who were on voluntary orders.
As a result, some Air Force reserve component personnel had medical problems that had gone unresolved for up to 18 months, but the full extent of this problem was unknown because the Air Force did not have a mechanism for tracking members on voluntary duty orders with medical problems. We made several recommendations for improvements in this area; DOD generally concurred with our recommendations and agreed to take actions. Section 731 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA) requires the Secretary of Defense to develop and implement a comprehensive plan to improve the medical readiness of members of the armed forces by focusing on areas such as health status, health surveillance, and accountability for medical readiness. The act also required that the Secretary of Defense establish a Joint Medical Readiness Oversight Committee (JMROC) with a specified membership to oversee the development and implementation of a comprehensive medical readiness plan. The JMROC held its first meeting in February 2005, during our review. The committee is chaired by the Under Secretary of Defense for Personnel and Readiness. Its membership includes the Assistant Secretaries of Defense for Reserve Affairs and Health Affairs; the Joint Staff Surgeon; the Chief of the National Guard Bureau; the chiefs of the Army Reserve, Navy Reserve, and Air Force Reserve and the Commander of the Marine Corps Reserve; the Vice Chiefs of Staff of the Army and the Air Force, the Vice Chief of Naval Operations, and the Assistant Commandant of the Marine Corps, together with their respective surgeons general and assistant secretaries for manpower and reserve affairs; and a representative of the Department of Veterans Affairs.
A draft copy of the Comprehensive Medical Readiness Plan, which addresses all defense medical issues identified in the act, was signed by the Under Secretary of Defense for Personnel and Readiness on June 23, 2005. Officials from the Force Health Protection Directorate in the OSD Office of Health Affairs—which is providing the staff for drafting and overseeing this effort—stated that financial and legislative constraints that may limit the implementation of the plan will have to be identified and addressed, and indicators for measuring progress will have to be developed, before the plan is finalized. Among other things, the draft plan specifies that DOD: (1) institutionalize the Individual Medical Readiness (IMR) reporting process by developing a DOD instruction for IMR and require that this information be provided to commanders to assist them in improving the health status of members of their units; (2) expand and improve the pre- and postdeployment assessment process by refining the predeployment survey to improve consistency with the postdeployment survey and developing periodic postdeployment health reassessments; (3) develop a policy defining the circumstances under which treatment for medical conditions may be provided in theater and the circumstances under which medical conditions are to be resolved prior to deployment; and (4) review the results of this GAO study. DOD is unable to determine the extent to which the reserve components are in compliance with routine medical and physical fitness examination requirements, primarily because of a lack of OSD guidance and oversight and because of incomplete or unreliable compliance data supplied by the components. Although the Office of the Under Secretary of Defense for Personnel and Readiness (OSD/P&R) has the responsibility for overseeing medical and physical fitness policy and processes, this office has not established a management control framework and executed a plan to oversee compliance with routine examinations.
For example, OSD/P&R has not provided guidance to the reserve components regarding requirements for the 5-year medical examination and the annual medical certificate. In the absence of OSD guidance, each reserve component has developed its own implementing policies, resulting in differences in scope, frequency, and administration; measuring compliance is difficult because uniform criteria do not exist. OSD has, however, provided consistent guidance for dental and physical fitness examinations. DOD’s ability to determine the extent of compliance has also been hindered because OSD does not oversee reserve component members’ compliance with the routine physical fitness or medical examination requirements. Furthermore, the data reported at the reserve component level have been incomplete and unreliable for purposes of determining compliance with routine medical and physical fitness examinations, and responsibility for compliance has not been enforced. We found indications of noncompliance during our site visits and in our reviews of existing audit reports and investigations. OSD’s lack of oversight could negatively affect operational readiness for future deployments because the needed personnel may not be medically and physically fit for active duty. Although OSD/P&R has the responsibility for overseeing medical and physical fitness policy and processes, this office has not established a management control framework and executed a plan that includes issuing guidance to the reserve components on compliance with the requirements for the 5-year medical examination and the annual medical certificate. For example, the statutory requirement for the 5-year medical examination has not been defined by OSD, leaving each reserve component to develop its own implementing guidance, resulting in differences in the scope, frequency, and administration of the examination among the components.
In addition, there has been no OSD implementing guidance regarding the statutory requirement for an annual medical certificate, so different guidance has been developed by the offices of the surgeons general responsible for each of the six reserve components. This lack of OSD guidance makes compliance difficult to determine because uniform criteria against which to measure the components’ compliance do not exist. OSD, through the Office of the Assistant Secretary of Defense for Health Affairs, has established a consistent requirement and implementation policy for an annual dental examination. OSD has also established a consistent requirement for a physical fitness examination, although the specific content of the physical fitness examination varies among the components and is not coordinated with the medical examinations. The requirement for a routine medical examination has been in effect for all active and reserve components since at least 1960. Yet, as of September 2005, OSD had not developed a plan or provided direction to the components on how to implement this requirement. In the absence of OSD guidance, the surgeons general responsible for the four services and six reserve components have each developed their own implementing guidance for the current requirement for a 5-year medical examination, resulting in differences in scope, frequency, and administration among the components, as illustrated below. Routine medical examinations include assessments in six areas: physical capacity or stamina, upper extremities, hearing and ears, lower extremities, eyes/vision, and psychiatric. For Army active and reserve component members older than age 40, there are additional age-specific screenings, such as a prostate examination, a prostate-specific antigen test, and a fasting lipid profile that includes testing for total cholesterol, low-density lipoproteins, and high-density lipoproteins.
The Department of the Navy conducts routine medical examinations of all Navy and Marine Corps active component and reserve members that include height and weight measurements, blood pressure testing, urinalysis, serology, and screening for mental health issues. Those being examined are also questioned about their past and present medical history, including serious illnesses, injuries, chronic conditions, and operations. The Air Force reserve components’ medical examination for nonflyers has been significantly reduced to minimize the training time lost to annual medical requirements. The scope of the current examination requirement is essentially limited to brief skin examinations for scars and cancer and limited laboratory blood work; it excludes EKGs, cholesterol and lipid panels, depth perception and glaucoma testing, and mammograms. One question on the questionnaire addresses mental status and whether the member has a history of anxiety or depression. In addition to the differing scope, the different implementing guidance across the services has resulted in variations in the frequency and administration of the 5-year medical examinations. For example, Army guidelines require that Selected Reserve members complete a medical examination once every 5 years. During our review, Navy and Marine Corps personnel were examined at slightly different intervals: every 5 years through age 50, every 2 years through age 60, and annually after age 60. The Air Force differs even more, in that it no longer requires that a traditional medical examination be completed every 5 years for nonflyers. Instead, members are required to complete an annual Preventive Health Assessment (PHA), the answers to which—combined with the member’s age, gender, health risk factors, medical history, and occupation—determine the types of screening and laboratory tests required and whether the member needs to be seen by a military health care provider.
At a minimum, however, Air Force Reserve members are required to have a visit with a military health care provider, or Periodic Health Assessment Monitor (PHAM), at least once every 3 years, while Air National Guard members are required to visit a health care provider (HCP) at least once every 5 years. Thus, differences exist even between the two Air Force reserve components. In the absence of any implementing guidance from OSD, guidance for the annual certification of medical condition has been developed by the surgeon general’s office responsible for each of the six reserve components. Like the 5-year medical examination, the annual certificate of medical condition is prescribed by statute, which states that “each member of the Selected Reserve who is not on active duty shall execute and submit annually to the Secretary concerned a certificate of physical condition.” This requirement has been in law since at least 1960 and is especially important for the reserve components because their members are not seen by military health care providers as often as active duty members are. The differing guidance from each of the services has resulted in differing definitions of what the annual medical certificate involves. For example, Department of the Army regulations require that all members of the Army Reserve and Army National Guard certify their medical condition annually on a two-page certification form, on which members report physician and dentist visits since their last examination, describe current medical or dental problems, and disclose any medications they are currently taking. Navy and Marine Corps Selected Reserve members complete an Annual Certificate of Physical Condition that provides information including the location of their health and dental records; the dates and purpose or type of their last complete physical and dental examinations; and the date of their last HIV blood test, among other information.
Reservists are also expected to disclose any injury, illness, or disease that occurred within the last 12 months and resulted in hospitalization or caused them to be absent from work, school, or duty for more than 3 consecutive days; whether they have been under a physician’s care or taken prescription medications during the past 12 months; and any physical defects, family issues, or mental problems that would prevent them from being mobilized. The Air Force has combined this annual requirement into its PHA screening process. Within the Air Force Reserve, the PHA process involves all members initially completing a Reserve Component Health Risk Assessment, formerly known as the Annual Medical Certificate. In the Air National Guard, the PHA involves all members initially completing an annual Health History Questions/Interval History form, formerly known as the Annual Medical Certificate. The annual dental examination is a consistent requirement across the reserve components; it was established by DOD policy, which provided consistent standards for active duty and Selected Reserve members to improve dental readiness. In 1998, the Office of the Assistant Secretary of Defense for Health Affairs, under the Under Secretary of Defense for Personnel and Readiness, directed that all active duty and Selected Reserve members obtain an annual dental examination so that DOD would have a clear picture of members’ dental readiness and fitness for duty. Although the 1998 directive required all services to provide implementation plans for completing all dental examinations by 2001, Health Affairs recognized that the services were having difficulty identifying both the mechanisms for compliance and the tracking system for documentation, and it extended the goal of 90 percent compliance until February 2004. A year and a half later, DOD still does not have complete and reliable information on all reserve components’ compliance.
According to Army regulation, all soldiers within the Army National Guard are required to have a dental examination annually. The current annual dental examination requires an assessment of the current state of oral health; an assessment of the risk for future dental disease, including a periodontal assessment; and an oral cancer screening. Prior to early 2004, the Army reserve components were still conducting only a dental screening. In March 2000, the Navy issued instructions requiring Navy and Marine Corps reservists to undergo an annual dental examination. Currently, both the Air Force Reserve and the Air National Guard require annual dental examinations in line with DOD’s requirement. The Air Force Reserve made this a requirement in January 2003, but the Air National Guard did not do so until September 2004. Prior to these times, the required dental examination interval was once every 3 years for the Air Force Reserve and once every 5 years for the Air National Guard. Although the specific content of the physical fitness examination varies among the components, the requirement for at least an annual physical fitness examination is consistent across the components because it was established by DOD policy, which is monitored by the Principal Deputy Under Secretary of Defense for Personnel and Readiness through the Office of Morale, Welfare, and Recreation. Specifically, the policy requires that all military services and reserve components develop and use physical fitness tests that evaluate aerobic capacity (e.g., a timed run), muscular strength, and muscular endurance (e.g., push-ups, pull-ups, sit-ups), and that all service members be formally evaluated and tested for the record at least annually (unless they are under a medical waiver). The specific content of the physical fitness examination varies among the components because different physical abilities are needed to meet the services’ different missions.
The Army Physical Fitness Test (APFT) is a performance test that indicates a member’s ability to perform physically and handle his or her own body weight. The APFT is required annually for the Army National Guard. As of October 2004, the Chief of the Army Reserve required Army reservists to be tested twice a year, as are their active component counterparts. The APFT consists of 2 minutes of push-ups, 2 minutes of sit-ups, and a 2-mile run (the same test is administered to both the active and reserve components). The number of push-ups and sit-ups required and the allowed 2-mile run time are based on the soldier’s age range and sex (the physical fitness test required to enter the Army has the same requirements for all ages, but different requirements by gender). All Navy personnel, regardless of age and component (active or reserve), are required to participate semiannually in a Physical Fitness Assessment that includes a Body Composition Assessment and a Physical Readiness Test, unless medically prohibited from doing so. Body composition is assessed by an initial weight and height screening or an approved circumference technique to estimate body fat percentage. Testing includes a series of physical events designed to evaluate an individual’s flexibility through a sit-reach activity, muscular strength and endurance through curl-ups and push-ups, and aerobic capacity through a 1.5-mile run/walk or a 500-yard or 450-meter swim. Individuals who fail either the Body Composition Assessment or the Physical Readiness Test, or both, are considered to have failed the entire assessment. The Marine Corps has also developed a Body Composition Program and Physical Fitness Test to assess each Marine’s fitness level. Active component Marines are tested semiannually, while Marine Corps reservists are tested annually. Body composition standards are health- and performance-based limits for body weight and body fat.
Physical fitness testing includes pull-ups for males, flexed-arm hang for females, a timed abdominal crunch event, and a timed 3-mile run. These events are designed to test the strength and stamina of the upper body, midsection, and lower body, as well as the cardiovascular system. The Air Force fitness program requires an annual physical assessment to motivate all members to participate in a year-round physical conditioning program, including proper aerobic conditioning, strength/flexibility training, and healthy eating. Fitness assessment results are based on a composite score calculated from results of an aerobic assessment (1.5-mile run), muscular fitness assessment (push-ups and crunches), and body composition measurement (abdominal circumference measurement). Although DOD has directed the military physical fitness programs to complement the health promotion program within OSD’s Office of Health Affairs and senior medical officials have told us that medical and physical fitness go “hand-in-hand,” physical fitness policies are not coordinated with medical fitness policies at the OSD, service, reserve component, or unit levels. Furthermore, DOD did not consider physical fitness a factor for determining the medical deployability of reserve component members prior to deployment to Iraq and Afghanistan, even though we reported in 1994 that several Army reports on Operations Desert Shield and Desert Storm noted fitness-related problems that hindered wartime operations. For example, one report noted that poor fitness contributed to the deaths by heart attack of eight reserve component personnel deployed to the Persian Gulf. OSD does not have a plan to oversee reserve components’ compliance with the routine medical or physical fitness examinations, which hinders DOD’s ability to determine the extent of compliance. For example, OSD does not track reserve component members’ compliance with routine medical examinations. 
In addition, OSD does not enforce its own directive requiring the services to report on their members' compliance with physical fitness examinations. Although OSD's Office of Health Affairs has begun to track medical readiness indicators, it does not have a plan to track compliance with routine medical examinations and does not attempt to track compliance with physical fitness examinations. OSD's Office of Health Affairs has initiated a process requiring that all reserve components report quarterly the percentage of their members who are in compliance with the following six indicators of medical readiness: dental class I or II; immunizations; medical readiness laboratory tests, such as providing a blood sample; no deployment-limiting conditions; periodic health assessment; and medical equipment, such as eyeglass inserts for face masks. This process continues to evolve as the Office of Health Affairs wrestles with inconsistencies in requirements among the reserve components, especially in regard to the periodic health assessment, since each reserve component implements the requirement for a periodic 5-year medical examination differently. Without centralized oversight and management for tracking compliance, DOD's ability to determine the extent of compliance with routine medical examinations may be impeded.

OSD has not enforced its own directive requiring the reserve and active components to report, by March 2005, on their members' compliance with physical fitness examinations. Although DOD policy states that physical fitness is a vital element of combat readiness and is essential to the general health and well-being of military personnel, OSD and the reserve components have been lax in reporting compliance with physical fitness examination requirements and do not fully utilize available systems that could report physical fitness status on a servicewide basis.
DOD established a reporting requirement for physical fitness in November 2002, in response to recommendations from our prior reports; however, it has not enforced compliance with this new requirement. The new physical fitness policy requires that each military service establish and maintain a data repository that provides baseline statistics and a tracking mechanism that monitors physical fitness and body fat for both the active and reserve components. The policy was developed over the course of many years. In response to a recommendation in our 1994 report, the Under Secretary of Defense for Personnel and Readiness stated that revised DOD guidance would "require the services to provide an annual report assessing their physical fitness and health promotion programs, to include a brief summary on how physically fit and healthy they view their military members, both active and reserve components." Not only did the original directive fail to require the services to submit an annual report on the status of servicemembers' physical fitness, but senior military officials in the office responsible for developing these directives told us that no service ever submitted a status report on its physical fitness program as required by the revised directive. In 1998, we again reported that DOD's oversight of the physical fitness program was inadequate and that DOD had not enforced the annual reporting requirement. Officials in the Office of Morale, Welfare, and Recreation stated that, in response to our report, DOD guidance was again revised in November 2002 to require the services to report annually to the Principal Deputy Under Secretary of Defense for Personnel and Readiness on a number of very specific physical fitness statistics, including the number of personnel tested, the number of personnel who failed the test, and the number placed in remedial training programs.
The military services' first reports were due to the Principal Deputy Under Secretary of Defense for Personnel and Readiness by March 31, 2005. However, officials in the Office of Morale, Welfare, and Recreation told us that none of the reports had been submitted to the Principal Deputy as required. The Air Force, Navy, and Marine Corps were still developing their information at the time of our review. The Army had until March 2007 to report because, according to a signed memorandum by the Principal Deputy Under Secretary of Defense for Personnel and Readiness, the Army was taking steps to report this information as part of the Defense Integrated Military Human Resources System (DIMHRS). Until this reporting requirement is enforced, DOD's ability to determine compliance with the physical fitness examinations may continue to be hindered.

Incomplete and unreliable data at the reserve component level regarding compliance with routine medical and physical fitness examinations have hindered DOD's ability to determine the extent of the reserve components' compliance with the examination requirements. Each reserve component employs a tracking system capable of monitoring compliance with medical examinations, but only one reserve component—the Navy Reserve—has data that are reliable for determining compliance with routine medical examinations. Furthermore, even though DOD policy calls for each military service to establish and maintain a physical fitness data repository, no reserve component has demonstrated that its tracking system can report complete and reliable compliance data on physical fitness. Although the reserve components place the responsibility for tracking compliance with medical and physical fitness examinations on the unit commander, the reserve components do not always hold the unit commanders accountable and the unit commanders do not always enforce the compliance of their members.
No centralized oversight exists to hold all levels accountable for ensuring that all requirements are met. All of the reserve components now employ systems that can track compliance with medical examinations, but only one reserve component—the Navy Reserve—has taken the necessary quality assurance steps to ensure the reliability of its data on compliance with routine medical examinations. In contrast, we found that the data captured by the systems used by the Army and the Air Force were unreliable for determining compliance with routine medical examinations. We did not assess the reliability of the data used by the Marine Corps because it is in the process of implementing and testing the use of the Navy's system.

Assessing data for their reliability includes quality assurance steps to consider the completeness and currency of the data, that is, determining whether there are assurances that all members are included and the information is up to date; quality control measures, such as conducting periodic testing of the data against medical records, to ensure the accuracy and reliability of the data; and examining who is using the data, for what purposes, and how reliable the users think the data are. We found that the Navy Reserve had taken such quality assurance steps. For example, the Navy has directed its Readiness Commands to conduct routine inspections to verify medical data accuracy in the Navy Reserve's Medical Readiness Reporting System (MRRS) and has required reserve units to review 10 percent of their medical records for accuracy after each drill weekend. In addition, Navy Reserve units are required to keep the Commander, Navy Reserve Forces Command informed about medical and dental compliance on a biweekly basis. In contrast, we found that the routine medical examination compliance data captured by the Army Medical Protection System (MEDPROS) were unreliable for that purpose.
MEDPROS was developed in 1998 to track anthrax compliance and has since matured to meet current mobilization requirements. All Army components—active, reserve, and guard—are required to enter members' medical compliance data into MEDPROS. We found that the data captured by this system are unreliable for monitoring compliance with routine requirements for several reasons, including missing data, failure to include data for all Army units, and a lack of quality assurance assessments of data content to test the data's reliability. Until quality control measures are instituted, the Army will not be able to reliably use MEDPROS to track compliance with the requirements for the 5-year medical examination, the annual medical certificate, and the annual dental examination.

We also found that the Air National Guard's Preventive Health Assessment and Individual Medical Readiness (PIMR) system and the Air Force Reserve's Reserve Component Periodic Health Assessment (RCPHA) system were unreliable for the purposes of determining compliance with routine medical examinations. Neither system produces data that are reliable for this purpose because (1) both the Air Force Audit Agency and the Air Force Inspection Agency have reported discrepancies in their reviews of medical records and the data from these systems, and (2) there is a high reliance on unit commands to test and verify the reliability of the data. In addition, during our site reviews, we found medical staff at several commands having difficulty entering large backlogs of medical data, which raised concerns about the timeliness of the data. Often, this backlog took several weeks to resolve and required the assistance of full-time reservists.
However, according to program managers and database administrators, the quality of the data, in terms of their completeness and accuracy, ranges from quite good to exceptional when subjected to internal system software checks. Until the resources necessary to input and verify the data in a timely manner are provided, the Air Force will not be able to rely on PIMR and RCPHA data to determine compliance with routine medical examination requirements. We did not assess the reliability of the data used by the Marine Corps because it is in the process of implementing and testing the use of the Navy's system. According to a Marine Corps official, once the new system is fully implemented, the Marine Corps will have the same oversight capability over medical compliance that the Navy Reserve currently has.

Even though DOD policy calls for each military service to establish and maintain a physical fitness data repository, no reserve component has a tracking system that can report complete and reliable data on compliance with physical fitness examinations on a componentwide basis. In fact, the Army Reserve, the Army National Guard, and the Marine Corps Reserve do not have systems that are designed to track compliance with physical fitness examinations on a componentwide basis. The Navy Reserve, the Air National Guard, and the Air Force Reserve each have systems that can track such compliance on a componentwide basis. The Navy Reserve system, however, may not be producing reliable data at this time. Further, we have concerns regarding the reliability of the data produced by the Air National Guard and the Air Force Reserve because such data are not reviewed or validated on a regular basis. The Army does not report physical fitness on a componentwide basis.
According to a Department of the Army memo dated April 19, 2004, and confirmed through our discussions with Army and OSD officials, physical fitness and body composition data will eventually be tracked in DIMHRS, in which the Army is the first component to participate. Until DIMHRS is in use, the Army will be unable to report complete and reliable data on componentwide compliance with the physical fitness examination requirements. According to Army Reserve officials, physical fitness data can be tracked in the regional level application software database, but the information may not be updated by the units in a timely or consistent manner. This information is then updated in the Total Army Personnel database, which updates the Individual Training and Readiness System. In the Army National Guard, the states may use the personnel database to record the scores and dates of physical fitness examinations, but they do not do so consistently. The Army's first report on the status of its physical fitness compliance for all its components will be due March 31, 2007, because the Office of the Under Secretary of Defense for Personnel and Readiness granted the Army a 2-year extension of its requirement to report on the physical fitness status of all members (active, reserve, and guard). The data in this report, if complete and reliable, could enable DOD to determine the Army's compliance with the physical fitness examination requirement. According to the 2004 Department of the Army memo, if DIMHRS is not online by September 2006, the Army will report these data manually.

Compliance with physical fitness examination requirements is tracked at the headquarters level for the Navy Reserve, but we found that the Navy is unable to report complete and reliable compliance data. The Navy requires all commands to report their physical fitness assessment data, including physical readiness test results, through the Physical Readiness Information Management System (PRIMS).
However, we found the data generated by this system to be unreliable because, according to a Navy official, there are about 2,000 duplicate records that need to be purged and about 25 percent of the Body Composition Assessment scores have not been reported by unit commanders. Until internal controls are established to eliminate duplication and ensure completeness of data, the Navy will be unable to report complete and reliable data on componentwide compliance with the physical fitness examination requirement. The Navy submitted its annual physical fitness report, due March 31, 2005, to DOD more than 3 months late, on July 8, 2005. According to a DOD official, the Navy did not request an extension or provide an explanation for the late submission. Because the data in this report came from PRIMS, which we found to be unreliable, we do not believe that DOD can reliably use the information in the report to determine the Navy's compliance with the physical fitness examination requirement.

The Marine Corps is unable to report complete and reliable data on compliance with the physical fitness examination because, in contrast to the Navy, the Marine Corps does not have a dedicated physical fitness reporting system. Instead, the Marine Corps requires unit commanding officers to record physical fitness scores in unit diaries, personnel records, and the Marine Corps Total Force System, a Marine Corps-wide personnel system. Units that input data into this system are responsible for reviewing the data and certifying that they are correct. However, a Marine Corps official indicated that the data are assumed to be correct when transmitted to higher commands and that no steps are taken to verify their accuracy. As of August 2005, the Marine Corps had provided DOD with a draft report addressing calendar year 2004 physical fitness scores. According to a DOD official, the Marine Corps did not request an extension or provide an explanation for the late draft submission.
Further, as of September 2005, the Marine Corps had not responded to our official request for the annual physical fitness report. Without an ongoing quality assurance program to consistently and continuously ensure the completeness and reliability of the data in the Marine Corps Total Force System, we did not rely on the data in the draft Marine Corps physical fitness report provided to DOD.

Although the Air Force Reserve and the Air National Guard each have a dedicated system to track the physical fitness status of their members, we found quality assurance procedures lacking, possibly leading to incomplete and unreliable data with which to track physical fitness compliance. The Air Force Reserve's system—the Air Force Fitness Management System (AFFMS)—tracks fitness program results only on a current basis and retains only data entered from 2004 forward. Moreover, quality assurance procedures are not followed: there are delays in entering data, the compliance of individual units is reviewed only if a question arises, and headquarters does not routinely assess whether members' data are current. The program relies on a fitness program manager within each unit command to monitor program metrics. According to an AFFMS system official, the only true way of determining the reliability of the data in this system is to compare them with the data in the respective members' personnel files, and this has not been done.

The Air National Guard (ANG) tracks compliance with physical fitness examinations through its Fitness Age database, which was first implemented in late 2003, although many ANG units lagged in their use of Fitness Age until after April 2004. The Fitness Age database reflects only calendar year information as of a specific point in time and does not track or measure performance on a running 12-month basis. The ANG fitness program requires an assessment of all ANG members once per calendar year.
According to ANG officials, most physical fitness testing is performed within the last few months of the calendar year. Because the data are cumulative, the only time that physical fitness information can be assessed for all members taking the test is at the end of the calendar year. In other words, most reservists would appear out of compliance until they take their annual exam, even though they are probably still within their 1-year window for testing. Furthermore, information on the number of reservists who were not tested at all or who are overdue is not captured by the ANG Fitness Age database. According to an ANG official, the responsibility for managing the physical fitness program rests with the respective ANG installation's command. The respective ANG installations (unit commands) have visibility over their own "overdue" members. However, ANG headquarters lacks sufficient oversight to assess compliance. Without ongoing quality assurance programs to consistently and continuously ensure the completeness and reliability of the data in the Air National Guard and Air Force Reserve systems, we did not rely on the data in these systems.

In general, throughout the reserve components, individual members are responsible for maintaining their physical and medical fitness and unit commanders are responsible for ensuring members' compliance with medical and physical fitness examinations; however, the reserve components do not always hold the unit commanders accountable and the unit commanders do not always enforce the members' compliance. Accountability for compliance is fragmented at various levels of command. No centralized oversight exists to hold all levels accountable for ensuring that all requirements are met.
Individual members are responsible for attending all scheduled examinations and assessments, seeking timely medical advice when necessary, reporting changes in their medical health on the annual medical certificate, and successfully completing the requirements of the physical fitness examinations. False statements may result in reassignment, discharge, or other disciplinary action. Unit commanders are responsible for implementing any administrative and command provisions for examinations, informing members of the examination requirements, establishing training programs for physical fitness, taking actions against reserve members who fail to comply with the requirements, and reporting the current medical and dental status of reservists through the applicable tracking systems. They are ultimately responsible for the accuracy of the medical and physical fitness information relied on by higher commands.

However, the reserve components do not always hold the unit commanders accountable for these responsibilities, and the unit commanders we interviewed expressed concern about their many competing responsibilities, such as meeting training requirements, and about how they must prioritize the use of their limited resources. One unit commander also expressed concern about enforcing medical and physical fitness policies if it meant losing a "good soldier" who otherwise performs his duties well. Without oversight and accountability at the OSD and respective service and reserve component levels, unit commanders may not have the incentive or resources to fully enforce the medical and physical fitness examination requirements, and compliance may suffer.

Although DOD cannot determine the extent of the reserve components' compliance with routine medical and physical fitness examinations, we found indications of noncompliance during our site visits and in our reviews of existing audit reports and investigations.
For example, a limited review of medical files at one Army National Guard and one Army Reserve location, data from a Navy report, test results of two units in a Marine Corps battalion, and data from a review conducted by the Air Force Audit Agency indicate some noncompliance at all components with the routine medical examination, annual medical certificate, annual dental examination, and annual physical fitness examination.

With respect to the routine medical examination and the annual medical certificate, these sources indicate some noncompliance at all components. For example, in April 2005 we conducted a review of 39 medical files at an Army National Guard unit that was deployed to Iraq in 2003 for 1 year. We found that 13 members were not in compliance with the routine medical examination at the time of our review. Further, while 36 members were in compliance with the annual medical certificate at the time of our review, only 3 members were in compliance with the annual medical certificate before the unit was alerted of its most recent mobilization date for deploying to Iraq. According to the commander of this unit, a number of actions need to be accomplished during weekend drills, and with limited time and resources available, completing routine medical requirements is low on the long list of priorities. In addition, during June 2005, we reviewed 175 medical files of an Army Reserve unit that deployed to Afghanistan in 2003 for a 10-month deployment. We found that all but 2 members were in compliance with the 5-year medical examination.
While 150 members were in compliance with the annual medical certificate at the time of our review, not a single member was in compliance with the annual medical certificate before the unit received alert orders for its mobilization. Furthermore, many of the soldiers we spoke with during our review stated that they were unfamiliar with the annual medical certificate. In addition, a February 2005 Army Inspector General report noted that virtually all reserve component leaders contacted during that review expressed frustration with their inability to maintain the medical deployability status of their soldiers using the annual medical certificate process. Leaders noted that the certificate reflects only what a soldier is willing to share. Often, the only medical personnel available to review and sign the certificate are unit medics, who can do little more than ask whether the data are correct.

In July 2005, the Navy reported that 96.8 percent of reserve members had completed the routine 5-year medical examination and 94 percent of reserve members had completed the annual medical certificate. These high rates are due, in part, to the high priority placed on medical and dental compliance throughout the Navy Reserve. Although the Marine Corps Reserve does not currently have componentwide automated information on medical compliance, it does conduct a periodic site inspection called the Mobilization Operational Readiness Deployment Test (MORDT). We reviewed the results of the MORDT at two units of a Selected Reserve battalion that had been mobilized. The first unit's test results indicated that 98 percent of the reservists had completed a routine physical examination within 5 years and 90 percent had submitted annual health certifications. The second unit's test results also indicated that 98 percent of the reservists had completed a routine physical examination within 5 years and 88 percent had submitted annual health certifications.
According to Marine Corps Reserve officials, all Marine Corps Selected Reserve units are subjected to an unannounced test prior to mobilization to ensure the unit can deploy.

The Air Force Audit Agency (AFAA) recently concluded its review of the service's individual deployment process, during which it found significant problems with the Guard's and Reserve's medical records. The AFAA's sample of 20 installations, which included 10 Air National Guard and Air Force Reserve installations, was designed to produce estimates for all Air Force personnel eligible to deploy during the 90-day window between June 1, 2004, and August 31, 2004, and assessed compliance with medical requirements such as, but not limited to, annual medical assessments and dental examinations. The AFAA reviewed the medical records and associated documentation for accuracy and completeness. Based on AFAA's review and analysis of 14,121 eligible Guard and Reserve members combined, about 13 percent were found to have discrepancies in their medical records. At 2 of the unit commands included in AFAA's review that we also visited, command officials said that they agreed with the AFAA's findings and were taking corrective action.

Indications of noncompliance with the dental examination requirement were also present at all the reserve components. For example, as previously noted, in April 2005 we conducted a review of 39 medical files of an Army National Guard unit; of these, 33 were not in compliance with the annual dental examination at the time of our review. Furthermore, 32 members were not in compliance with the annual dental examination prior to alert. In June 2005, we visited an Army Reserve unit to review 175 medical files. Although only 13 members were not in compliance with the annual dental examination at the time of our review, over 130 members were not in compliance with the dental examination prior to alert.
Other evidence indicates that compliance with dental requirements has been a particular matter of concern for the Army reserve components. According to a February 2005 Army Inspector General report, there are examples of reserve component servicemembers with multiple tooth extractions at nearly every mobilization station. Furthermore, in cases where members presented dental records during mobilization, often the only entries dated to the members' basic training and initial exams and procedures. We found a stark example of what happens during mobilization when a member's dental status is allowed to remain below Class I or II. In one unit we visited, we interviewed a member who had 30 teeth extracted prior to deployment. According to the member, although dental screenings were conducted annually, indicating that he was in dental class III, he took no follow-up action to correct his dental problems because he had no dental insurance and correcting the problem was not a priority. At the time this servicemember was being mobilized, a Department of the Army memo dated December 6, 2002, stated that soldiers assigned to designated units scheduled to deploy within 75 days of mobilization and identified as being within dental class III or IV are to have the necessary dental treatment initiated to bring them up to dental classification II, the deployment standard.

Although we did not review individual medical and dental records at the Navy and Marine Corps Reserve sites we visited, we did review specific reports to assess whether these components monitored members' dental status. We found that Navy Reserve compliance appears to be improving. For example, in early July 2005, the Navy reported that 88.6 percent of selected reservists were in a Dental Class I or II category, an increase over the 69 percent reported in the Dental Class I or II category in December 2002.
We also reviewed MORDT results for two Marine Corps units during a site visit to a Marine Corps Reserve battalion that had been mobilized. We found that test results for the first unit indicated that 85 percent were categorized as Dental Class I or II, while 77 percent in the second unit were categorized as Dental Class I or II. Analysis provided by the AFAA from its review, mentioned earlier, indicated that about 13 percent of the Air National Guard and Air Force Reserve members who were eligible to be deployed between June 1, 2004, and August 31, 2004, were found to have discrepancies in their dental records. In addition to the AFAA review, in 2004 the Air Force Inspection Agency conducted health services inspections and found discrepancies in dental readiness classifications at 49 percent of the 37 installations reviewed.

As with the other examination requirements, we also found indications of noncompliance with the physical fitness examination requirement at all six components. During our April 2005 review, we also examined 29 physical fitness files of the Army National Guard unit that deployed to Iraq. Of the 29 physical fitness files we reviewed, only 18 members showed compliance with the physical fitness examination requirement during 2004; of these 18 members, 11 passed the physical fitness test and 7 failed. According to the unit commander, some soldiers possess skills that are greatly needed for unit continuity and strength, and these needs usually outweigh the ramifications of having to separate a member for physical fitness test failures. In June 2005, we also reviewed 227 physical fitness files of the Army Reserve unit that deployed to Afghanistan. Of the 227 physical fitness files we reviewed, only 117 members showed evidence of compliance with the physical fitness examination requirement during 2005. Of these 117 members, 89 passed the physical fitness test and 16 failed.
In group discussions held at this time, members stated that there were no repercussions for failing the physical fitness test. Similarly, in our 1994 report we found that physical fitness scores had been inappropriately changed and that servicemembers were not discharged even after repeated test failures, primarily because commanders placed more emphasis on maintaining unit strength.

While visiting a Navy Reserve activity, we obtained a single unit's physical fitness test results to verify that data were properly maintained in the Physical Readiness Information Management System. However, when we asked the Navy Personnel Command to provide a copy of the required physical fitness report, we learned the report would be submitted to OSD late. According to a Navy official, the Navy had identified over 2,000 duplicate record entries and estimated that nearly 25 percent of the body fat scores were missing from the data totals. In its report to OSD, the Navy reported that it had not mandated separation processing for individuals who failed the physical fitness test since May 2001. During a visit to a Marine Corps Reserve center, we also obtained information indicating that individual Marine Corps reservists' physical fitness scores were recorded in the Marine Corps Total Force System. Subsequent to our visit, however, we learned that the Marine Corps also provided an unofficial "draft" physical fitness report to OSD after the deadline. To review Marine Corps physical fitness statistical data, we requested a copy of the report on April 6, 2005. As of October 2005, the Marine Corps had not responded to our request.

The Air Force did not meet OSD's required due date for its first annual report assessing the physical fitness, body fat, and health promotion program for the active service, the Air National Guard, and the Air Force Reserve. The Air Force did not submit its annual report until May 4, 2005.
Based on the data provided by the Air Force for the Air National Guard and the Air Force Reserve, only 83 percent of force members were tested, with 13.2 percent of those tested falling into the poor category. However, the Air Force’s assessment of one of its reserve components’ statistical data may not be entirely correct. The numbers of members tested and of members testing in the poor category that the Air Force reported are higher than the numbers directly reported by the Air National Guard to the Air Force Medical Support Agency, which consolidated the respective components’ data and in turn submitted the overall report to the Assistant Secretary of Defense for Force Management Policy. In addition, as discussed earlier, we were unable to determine whether the Air National Guard and Air Force Reserve databases that generated these data are reliable. DOD does not have complete visibility over the health status of reserve component members after they are called to duty and is unable to determine the extent of care provided to those members deployed with preexisting medical conditions. Despite the existence of various sources of medical information, DOD has incomplete visibility over members’ health status when they are called to active duty, primarily because the reserve components vary in their ability to systematically identify, track, and report members’ medical deployability and because the DOD-wide centralized database cannot provide complete information—both of which hinder DOD’s ability to accurately determine what forces remain for future deployments. In addition, DOD is unable to determine the extent to which reserve component members received care for preexisting medical conditions while deployed; however, evidence suggests that reserve component members did deploy with preexisting medical conditions that could not be adequately addressed in theater and that some of these conditions may have stressed in-theater medical capabilities.
Although DOD has some visibility over reserve component members after they are called to active duty or mobilized, this visibility is limited despite several potential sources of information. For example, the reserve components vary in their ability to systematically identify, track, and report information about members’ medical deployability, which limits DOD’s visibility over the health status of members. In addition, although medical information is captured on predeployment forms for all reserve component members and entered into a DOD-wide centralized database during mobilization, some data are still missing, and information regarding the reasons why members were found nondeployable is not captured in a way that can be easily searched in the database. Moreover, medical referral data captured on the predeployment forms provide some insight into the care that members may have required during mobilization, but this care is not always related to why a member was determined to be medically nondeployable. Some data on the medical reasons why Army Guard and Reserve members were not deployed after being activated can be obtained from an analysis of the Army’s medical holdover database, but this information is insufficient to provide DOD with visibility over members’ health status because it is gathered only on the Army reserve component members held prior to deployment, a population that is diminishing due to positive changes in the Army’s medical holdover policy. DOD’s limited visibility over reserve component members’ health status when they are called to active duty could affect planning for future deployments because the pool of available Guard and Reserve members from which to fill requirements for certain skills and grades is dwindling, and members’ health status is deteriorating following deployments.
The reserve components vary in their ability to systematically identify, track, and report members’ medical deployability, and only three reserve components—the Navy Reserve, the Air Force Reserve, and the Air National Guard—can currently identify and track members with both temporary and permanent conditions that limit medical deployability. This limited visibility over reserve component members’ medical deployability status hinders DOD’s ability to identify the pool of Guard and Reserve members who are available for deployment. The Navy Reserve uses the Medical Readiness Reporting System (MRRS) to track and report the status of reservists classified as Temporarily Not Physically Qualified for duty because of an illness, injury, or other medical condition that should be resolved within 6 months. This system is also used to track and report the status of reservists, classified as Not Physically Qualified for duty, with more serious medical conditions, such as cancer or heart disease, that will not be resolved in 6 months and may lead to a medical review or board retention decision. As the Marine Corps Reserve continues to fully implement the Navy’s Medical Readiness Reporting System, it too will have these capabilities. Both the Air National Guard’s and the Air Force Reserve’s medical tracking systems—PIMR and RCPHA, respectively—can identify and track members with specific medical conditions that limit deployment; however, neither system can distinguish between temporary and permanent limitations. In addition, the Air Force has a system, the Military Personnel Data System, that captures information on all medical profiles and can report on specific categories, such as temporary and permanent conditions.
Although the Army tracks active, guard, and reserve members with medical profiles that limit deployment through its medical tracking system, MEDPROS, the active Army and Army Reserve do not presently track members with temporary medical conditions that render them nondeployable. However, the Army National Guard is in the process of implementing a system, called the Medical Non-Deployable Tracking Module (MND-TM), that will track its members who have a temporary or permanent medical condition that renders them nondeployable. Army National Guard officials expect all states to use this system by July 2007. Until all six reserve components are able to systematically identify and track members’ medical deployability status, DOD will not have the accurate information it needs to centrally estimate the remaining pool of guard and reserve members available for future deployments. DOD has some visibility over reserve component members’ medical status during mobilization through the centralized DOD-wide database operated by the Army Medical Surveillance Activity (AMSA). All active and reserve component members are required to complete a medical predeployment form to document their medical deployability status, which is then forwarded to AMSA for entry into the database. Thus, information can be obtained from the centralized database on reserve and active component members who were determined nondeployable during mobilization for medical reasons. Members also complete a health assessment form after deployment. However, we have noted in previous reports that the centralized database has missing and incomplete forms. In our last report, issued in September 2004, we found that for the required forms from reserve component members (1) not all of the forms had reached AMSA, (2) only some of the forms that had reached AMSA had been entered into the database, and (3) not all of the forms contained complete information, thus limiting analysis.
We also noted that while the components were not in complete compliance with the requirement to submit pre- and postdeployment assessments, the number of assessments had grown significantly. During this review, we found that DOD has continued to make progress in collecting the pre- and postdeployment forms. According to AMSA officials, the database contained about 140,000 assessments at the end of 1999, grew to about 1 million assessments by May 2003, almost doubled to 1,960,125 by June 2004, and reached 2,241,177 by June 2005. Further, DOD has established a centralized deployment health quality assurance program to improve data collection and accuracy, and each service has also developed a deployment health quality assurance program. The department’s first annual report, documenting, among other things, issues relating to predeployment health assessments, was issued in May 2005. The DOD quality assurance program includes (1) periodic site visits conducted jointly by staff from the Office of the Assistant Secretary of Defense for Health Affairs and staff from the military services to assess compliance with the deployment health requirements, (2) periodic reports from the services on their quality assurance programs, and (3) periodic reports from AMSA on health assessment data maintained in the centralized database. The report noted that centralized management of quality assurance had improved the services’ accountability for the predeployment assessment forms. For this review, we obtained predeployment information from AMSA officials based on over 1 million active and reserve component predeployment health assessment forms collected between November 2001 and June 2005. More than 5 percent of the reserve component and more than 6 percent of the active component predeployment health assessment forms did not record the servicemember’s deployability status.
Of the approximately 94 percent of forms that were complete, nearly the same percentage of reserve component and active component members were found medically deployable: 94 percent of reserve component members compared with 96 percent of active component members. However, the forms do not always capture information regarding the reasons why members were found medically nondeployable, or do not capture that information in a systematic way. For example, although the form has an entry for a narrative explanation of why a member is medically nondeployable, an AMSA official informed us that these explanations are often incomplete or not decipherable and cannot be easily categorized. Furthermore, although the forms do provide space for the member’s deployment destination, this information is not always filled in because, according to AMSA officials, the deployment destination is often not known by the member or is classified. Therefore, the data presented here are for all worldwide deployments, including the United States, and could change after the initial deployment, thus preventing an analysis by operation. As seen in table 1, the total nondeployable rate for all six reserve components was more than 5 percent, while table 2 shows that the total nondeployable rate for the active component was almost 4 percent. While the Army Reserve had the highest percentage of nondeployable servicemembers among the reserve components, at about 9 percent, the active Army had the highest percentage among the active components, at almost 6 percent. According to medical officials, some of these nondeployable personnel, such as those who had suffered multiple heart attacks, should have been discharged before they received their mobilization orders. Others had temporary conditions, such as broken bones and pregnancies, that did not warrant medical discharges but made the servicemember nondeployable at the time of the assessment.
The predeployment health assessment forms capture information on specific medical referrals given to members by the reviewing health care official during mobilization, which provides some insight into the care that members may have required during mobilization. These data are less helpful in determining why a member was found medically nondeployable, however, because referrals are not always related to that determination. According to a senior OSD official, although any indicated referral may be related to a disposition of nondeployable, this is not always the case. Three common scenarios illustrate this relationship: (1) a member is found to be clearly nondeployable from a medical standpoint, and no referral is made; (2) a member is referred for further evaluation of a condition for which deployability is questionable, in which case there is a direct relation between the referral and the determination of deployable or nondeployable; or (3) a member is found to be deployable but has a minor medical issue for which the health provider makes a referral for treatment. According to a senior OSD official, the last scenario is a fairly uncommon reason for a referral. Examples might include a referral for a routine preventive test, such as a Pap test in a gynecological clinic. The Pap test is a desired preventive medical test, but depending on the date and result of the last Pap exam and the individual’s personal history and risk factors, it is not always necessary to perform one prior to deployment. More than 50,000 referrals were made on the predeployment health assessments from November 2001 through June 2005 for the active and reserve components combined. As shown in table 3, reserve component members received referrals on 21,000 forms, a referral rate averaging more than 5 percent; as shown in table 4, their active duty counterparts received referrals on 24,633 forms, a referral rate of about 4 percent.
Within the reserve components, the Army Reserve had the highest referral rate, at nearly 8 percent, while the Air National Guard and Air Force Reserve had the lowest rates, both at less than 1 percent. There are 18 categories of referrals that can be checked on the predeployment form, of which 1 is “other” and does not provide any further detail. As seen in figure 1, the top 3 medical referrals for the reserve components were “other,” “dental,” and “eye,” whereas the top 3 referrals for the active components were “other,” “dental,” and “orthopedics.” The rate of “other” medical referrals was almost 40 percent for the reserve components and almost 50 percent for the active components. Although the AMSA referral data provide some insight into the medical care required during mobilization, they are not detailed enough to determine the specific type of medical referral or the reason for nondeployment. The Army’s medical holdover database, a module within the Medical Operational Data System (MODS), does provide DOD with a snapshot of data about the number of Army National Guard and Army Reserve members who were not deployed after being called to active duty because of medical problems and the medical reasons why they were not deployed. Although all of the services may keep reserve component members on active duty if they incur an injury in the line of duty following deployment, only the Army has held reserve component members in need of medical care at military treatment facilities prior to deployment. These servicemembers are referred to as the medical holdover population. Because of the large numbers of activated Army National Guard and Army Reserve members placed in medical holdover by the Army in the early part of Operation Iraqi Freedom, the Army Office of the Surgeon General created a module in an existing database to track them.
We examined the Army medical holdover data to obtain information about the possible reasons why servicemembers were found to be medically nondeployable. However, the data cannot provide complete visibility over members’ health status because the population receiving medical care from the Army prior to deployment is diminishing due to changes in the Army’s medical holdover policy. Further, until January 2005, MODS was not used consistently by all case managers responsible for servicemembers in medical holdover. Between December 2002 and October 2003, 4,850 activated Army reserve component members were found medically nondeployable and kept on active duty until their medical problems had been resolved and they were returned to full duty, or until they had been referred to a medical board and discharged from the Army. In October 2003, the Army changed its policy to allow the demobilization of personnel who were found to be nondeployable within the first 25 days of activation; under this policy, reserve component servicemembers identified in the first 25 days as having a medical condition that renders them nondeployable may be released from active duty immediately. This change reduced the inflow of reserve component members on active duty with medical problems identified during the predeployment health assessment process. As of August 11, 2005, only 860 reserve component members were in a medical holdover status as a result of a medical condition found prior to deployment.
As shown in figure 2, the most common medical condition that has prevented a reserve component member from deploying is orthopedic in nature—accounting for 56 percent of the 860 Army National Guard and Army Reserve members who were found medically nondeployable and placed in a medical holdover status—followed by internal medicine conditions at 16 percent and neurological problems at 8 percent. Despite the more specific information about medical status that can be obtained by reviewing these medical holdover data, the data are fairly new and limited to those members held at medical treatment facilities. Although senior military officials at various levels of command told us that the health status of reserve component members did not affect deployment schedules, the extent to which unit commanders have had to find replacements for members who were medically unqualified upon alert, the reasons why, and how, or whether, this affected planning of operations in Iraq and Afghanistan are unknown. However, DOD’s lack of visibility over reserve component members’ health status when they are called to active duty could affect planning for future deployments as the demand for troops for the Global War on Terrorism continues. The Army has had to transfer reserve component personnel from nonmobilized units to mobilized units to meet mission requirements. For example, the Army Inspector General reported in February 2005 that, with increasing frequency, Army units identified for alert and mobilization had previously provided members to other units. The report noted that frequently more than half of a deploying unit’s personnel had been transferred into the unit to meet personnel requirements. This “ripple effect” is occurring across the Army reserve force, and each subsequent mobilization requires more and more personnel transfers to meet personnel requirements.
The need for these personnel transfers is largely due to an outdated Cold War strategy that planned to use the reserve forces as a later-deploying force and therefore did not give them full resources. As more units are used for this “cross-leveling,” it becomes even more important that the Army have good visibility over the health status of the remaining reserve component members. In addition, as shown in table 5, data from the pre- and postdeployment health assessments indicate that the health status of both active and reserve component members has declined after returning from deployment. The Army National Guard and Army Reserve had the highest percentages of servicemembers rating their health as fair to poor on the postdeployment health assessment. As the pace of operations for the reserve forces continues to be high and the health status of returning members is diminished, it becomes even more important that DOD have good visibility over the availability of remaining units. Improved visibility and tracking of the health status and medical deployability of these members is a key component in calculating the members available for planning future deployments. DOD cannot determine the extent to which reserve component members received care for preexisting medical conditions while deployed in theater because DOD has not determined what preexisting medical conditions may be allowed into specific theaters of operations. The purpose of examining members and properly screening them at the mobilization stations is to help ensure that members are medically and physically fit to deploy and do not have any condition that would adversely affect the mission. As noted in DOD guidance, fitness specifically includes the ability to accomplish the tasks and duties unique to a particular operation and the ability to tolerate the environmental and operational conditions of the deployed location.
Specific medical deployment criteria for proper screening are essential for identifying preexisting medical conditions that cannot be adequately addressed in theater and could stress in-theater medical capabilities. While evidence suggests that members did deploy with preexisting conditions, the total impact is unknown. Developing and updating medical criteria for a specific theater of operations is the responsibility of the combatant command—for Operation Enduring Freedom and Operation Iraqi Freedom, this is U.S. Central Command (CENTCOM). The CENTCOM medical deployment criteria have been evolving over the course of these operations. CENTCOM has updated its guidance six times during these operations to provide more specific direction for the theater of operations; the last update was issued in January 2005. During the initial mobilizations for these operations, the services were dependent on CENTCOM general deployment criteria issued in May 2001, which did not identify medical conditions that would render a member medically unfit for these operations. In the absence of specific guidance early in the operations, the services relied upon their own medical deployment criteria; for the Army, specific criteria did not exist until February 2005. The original CENTCOM deployment criteria made a general statement that all personnel must be assessed and determined to be medically and psychologically fit for worldwide deployment to a combat theater and that the in-theater health infrastructure provides only limited medical care. Not until May 2004 did CENTCOM update its deployment criteria to include more specific guidance.
This updated guidance stated that servicemembers who have existing medical conditions may deploy if all of the following conditions are met: (1) an unexpected worsening of the condition is not likely to have a medically grave outcome; (2) the condition is stable; and (3) any required ongoing health care or medications are immediately available in theater within the military health system and have no special handling, storage, or other requirements, such as electrical power. The criteria provided a list of conditions that may preclude medical clearance for DOD civilians and contractors (including current heart failure, history of heat stroke, and uncontrolled hypertension); however, according to CENTCOM officials, this list of conditions did not apply to servicemembers because they were already covered by service-specific guidelines. The most recent CENTCOM deployment criteria applicable to all servicemembers and DOD civilians and contractors were issued in January 2005; they update theater-specific immunization requirements and provide more detailed guidance on contact lens wear, among other things. As these policies are developed, the combatant command is to provide them to the services, which are then responsible for determining how to implement the requirements in screening their deploying forces, including activated reservists. Because DOD has not determined what preexisting conditions may be allowed into a specific theater of operations, it has not known what preexisting conditions to track. As noted, the medical deployment criteria for the current theater of operations have been evolving, but specific medical deployment criteria have not been developed for other potential theaters of operations. Some preexisting medical conditions, however, may be common to all theaters of operations; DOD has not determined which conditions these are.
Further, although DOD has a number of systems for tracking medical conditions in theater, the current databases have not been modified to capture data on known preexisting conditions for this specific operation. For example, the Joint Medical Workstation (JMeWS) provides medical treatment status and medical surveillance information, and tracks and reports patient location within a theater of operations and during evacuation from frontline medical units to stateside medical treatment facilities. The U.S. Transportation Command (TRANSCOM) uses the TRANSCOM Regulating Command and Control Evacuation System (TRAC2ES) to document patient movements, such as medical evacuations. The Joint Patient Tracking Application (JPTA) was initially designed for use within Landstuhl Regional Medical Center in Germany as a way to manage Operations Enduring Freedom and Iraqi Freedom patients. In 2004, the Assistant Secretary of Defense for Health Affairs directed the services to implement JPTA at military treatment facilities in theater and in the continental United States to improve patient tracking and management. The Disease Nonbattle Injury (DNBI) rates for the services in Operations Enduring Freedom and Iraqi Freedom are tracked in the DNBI database by the Air Force Institute for Operational Health. We did not evaluate these systems because they do not distinguish care provided for preexisting medical conditions. Although DOD does not systematically develop or report information about the extent of care provided in theater to reserve component members for preexisting medical conditions, senior military medical officials who served in theater provided examples of reserve component members who were deployed with preexisting medical conditions that could not be adequately addressed in theater. Some officials told us that such treatments strained in-theater medical capabilities and infrastructure.
According to a senior military official in the surgeon’s office of the commander in chief of U.S. Central Command, there were many instances of individuals, from all services, who deployed into the Iraq and Afghanistan theater of operations with conditions for which they should have been considered nondeployable. Medical officials from both the Army and the Navy also cited examples of conditions seen in theater that should have rendered members nondeployable. Among the examples cited were members with a history of heart attack, severe asthmatics (the desert conditions were not suitable for these members), members with severe hypertension, a woman 4 months into chemotherapy for breast cancer, and a man who had received a kidney transplant 2 weeks prior to deploying. Other examples included members deployed with sleep apnea requiring machines that run on electricity, even though electricity was either unavailable or unpredictable. We were also told of a soldier who arrived in theater with diabetes and required an insulin pump for treatment, and of a number of psychiatric patients suffering from conditions such as bipolar disorder who should not have been in the desert because the medications they were taking caused them to sweat profusely. One Air Force Reserve medical official who served in theater preparing members to be medically evacuated estimated that of the approximately 2,000 reservists she helped to evacuate, 10 percent were evacuated due to preexisting conditions such as diabetes and heart problems, with the most common condition being diabetes. The commander of an Army Guard unit deployed to Iraq told us about a member who had deployed with a preexisting knee problem and had to be returned to the United States for corrective treatment. The issue was eventually resolved, and the member was allowed to redeploy with his unit.
According to a September 2004 Air National Guard Surgeon General memorandum, unacceptable dental health should preclude a member from deploying under any circumstances because dental resources do not exist in theater. However, the Air National Guard’s Surgeon General has noted that dental emergencies are historically and currently the most common preventable reason for loss of manpower in the wartime theater. In addition, the Air Force’s Air Surgeon Chief of Medical Services Directorate commented on January 17, 2003, in response to a case involving an Air National Guard member who had been sent into theater with an obvious major preexisting dental condition, that it is unreasonable to expect deployed doctors and dentists to perform remedial procedures and provide care that should have been accomplished at home. Such care, the official noted, takes too much time away from treating the injured and ill in theater and results in lost man-hours that the gaining unit needs to accomplish its war-fighting requirements. In our small group discussions with Army National Guard members, one servicemember said that he was told that he would receive dental care in theater, although this care was never provided. At one Air National Guard unit command we visited, officials informed us of a member who was mobilized and subsequently deployed with preexisting dental problems in late 2003 because (1) the dental condition was not disclosed by the member and (2) the unit command did not have a current dental exam in his medical records to prove otherwise. The member would not have been deployed had his true dental condition been initially identified, but he received substantial dental work while deployed. According to a unit command official, the member was subsequently returned to his unit command because his dental costs and related work downtime were excessive.
In addition to the lack of specific guidance from CENTCOM to the services early in the operations, military medical officials told us of other reasons why members may have arrived in theater with preexisting medical conditions. First, military officials stated that in some cases members did not disclose their preexisting medical conditions because they wanted to serve their country. A Navy official, for example, stated that a Navy officer with hypertension did not disclose his medical condition in order to deploy to Iraq in support of Operation Iraqi Freedom. Because the officer’s medical condition worsened in Iraq, the Navy had to return him to his home unit and find a replacement to fill his position. We were also told of members who arrived in theater with preexisting conditions with the expectation that they would be taken care of while they were there. For example, a senior medical official stated that one servicemember arrived in theater with one kidney and in need of dialysis, which was not available in theater. Early in the operations, several servicemembers with hernias were deployed with the expectation that the surgery would be conducted in theater. It is important to have up-to-date medical criteria specific to a theater of operations to alert members to changing conditions in theater or to new information on vaccinations, for example. Developing and updating medical criteria for a specific theater of operations is the responsibility of the commander in chief of the combatant command—in this case, CENTCOM. As these policies are completed and updated, the combatant command is to provide them to the services, which are then responsible for determining how to implement the requirements in screening their deploying forces, including activated reservists. The findings we present in this report are not new.
In the aftermath of the first Persian Gulf War, a number of DOD and GAO studies identified problems with the medical and physical fitness of guard and reserve personnel. DOD agreed with many of the studies’ findings and recommendations but never developed a plan with goals, time frames, and measurable results to improve visibility over reserve component members’ health status. At times, Congress has stepped in and directed DOD to make a number of improvements, especially for quality assurance and tracking of health assessment data collected before and after a member’s deployment. Congress recently directed OSD to develop and implement a comprehensive plan to improve management of the health status of the reserve components. Such a plan has become even more important in the current environment, where the pool of guard and reserve members with the right skills from which to fill requirements for DOD’s overseas and domestic commitments is dwindling. Further, many of DOD’s personnel policies, including its medical policies, are outdated, as they are based on a Cold War strategy that allowed the reserve force more time to mobilize before deployment. Now the reserve force deploys with the active force and is expected to be medically and physically fit when called to duty. The lack of oversight of reserve members’ health status, however, does not appear to be unique to the reserve component. Oversight has also been lacking in enforcing DOD’s reporting requirement on the physical fitness status of both the active and reserve components: no repercussions exist if a service does not provide this report on time, nor are there any deadlines for submitting the annual report to OSD. OUSD/P&R has the authority to set medical and physical fitness policy and processes to oversee this area; however, it has not exercised that authority to address these long-standing problems.
As DOD proceeds to develop a comprehensive plan for improving management over the health status of the reserve components in response to the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, we recommend six actions. To have visibility over reserve components' compliance with routine medical and physical fitness examinations, we recommend that the Secretary of Defense direct the Under Secretary for Personnel and Readiness, in concert with the Assistant Secretary for Health Affairs and the Principal Deputy to the Under Secretary, to establish a management control framework and execute a plan for issuing guidance, establishing quality assurance for data reliability, and tracking compliance with routine medical and physical fitness examinations; and direct the Under Secretary for Personnel and Readiness, in concert with the Principal Deputy who oversees the Office of Morale, Welfare, and Recreation, to take steps to enforce the service reporting requirement on the status of members' physical fitness in conjunction with the actions taken in the first recommendation. To improve DOD's visibility over reserve components' health status after they are called to duty, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in concert with the Assistant Secretary of Health Affairs, to also oversee the development of the reserve components' tracking systems to identify and track members' temporary and permanent medical conditions that limit deployability; and direct the Under Secretary of Defense for Personnel and Readiness, in concert with the Assistant Secretary of Health Affairs, to modify the medical predeployment forms to better capture reasons for nondeployment and medical referrals. 
To help prevent the deployment of reserve component members with preexisting medical conditions that could adversely affect the mission and strain resources in theater, and to provide visibility over those members deployed with preexisting conditions for which treatment can be provided in theater, we recommend that the Secretary of Defense: direct the Chairman of the Joint Chiefs of Staff to determine what preexisting medical conditions should not be allowed into specific theaters of operations, especially during the initial stages of the operation, and to take steps to ensure that each service component consistently utilizes these as criteria for determining the medical deployability of its reserve component members during mobilization; and direct the Chairman of the Joint Chiefs of Staff, in concert with the service secretaries, to explore using existing tracking systems to track those who have treatable preexisting medical conditions in theater. In written comments on a draft of this report, DOD did not concur with our first and fourth recommendations, partially concurred with our fifth recommendation, and concurred with our second, third, and sixth recommendations. DOD did not concur with our recommendation that it establish a management control framework and execute a plan for issuing guidance, establishing quality assurance for data reliability, and tracking compliance with routine medical examinations. DOD did not state that it disagreed with our findings; however, DOD stated that it had initiatives underway that addressed our recommendation. DOD further noted that because policies, programs, and instructions are already in place or in process, it did not see the need for any additional action. We disagree with DOD's conclusion because, based on our review, we do not believe that DOD's initiatives are far enough along to dismiss further action, and we continue to believe that our recommendation has merit. 
We agree that the initiatives DOD cited in its written comments are positive steps toward correcting the identified problems, but management and planning remain a concern. We have not seen enough evidence to agree that DOD has put in place a management control framework that holds all responsible levels accountable and ensures that all routine medical requirements are being met and that complete and reliable data are being entered into the appropriate tracking systems. As noted in our report, the problems with determining the health status of the reserve force were revealed during Operations Desert Shield and Desert Storm, and in the decade that has passed since then DOD has made little progress to correct the identified problems. As a result, in 2004, Congress directed DOD to establish a Joint Medical Readiness Oversight Committee to oversee the development and implementation of a comprehensive medical readiness plan. As also noted in our report, the committee held its first meeting in February 2005, and a plan to improve medical readiness was being developed during this review. We do not believe that a committee can be held accountable for ensuring that such actions take place. Ultimately, the Under Secretary of Defense for Personnel and Readiness, in concert with the Assistant Secretary for Health Affairs, is accountable for enforcing the requirements for routine medical examinations. Moreover, DOD stated that it has established a new quality assurance program that monitors electronic data with validation through medical record reviews of a wide range of force health protection measures. We did not find this to be true during our review. With the exception of the Navy Reserve, the reserve components do not monitor electronic data of routine medical examinations with validation through medical record reviews. 
Further, we found the data in the reserve components' tracking systems to be unreliable for purposes of determining compliance with routine medical examinations. As noted in our report, compliance with these routine medical examinations is the first step toward determining who is medically fit or ready for duty. DOD stated that its compliance-monitoring Individual Medical Readiness program regularly reports the overall medical readiness status for each servicemember. However, the Individual Medical Readiness program's outcomes are derived from data in the reserve components' tracking systems, which, with the exception of the Navy Reserve's, we found to be unreliable for the purposes of determining compliance with routine medical examinations. DOD stated that its Individual Medical Readiness program's data are being incorporated into overall unit readiness status reports, providing visibility of reserve component medical readiness throughout the line command structure. We believe that until top management at DOD ensures that complete and reliable data on routine medical examinations are being entered into its tracking systems, DOD and Congress will continue to have a false picture of medical readiness for the reserve components. We believe that our first recommendation still has merit. DOD concurred with our recommendation that DOD take steps to enforce the services' reporting requirement on the status of their members' physical fitness. DOD stated that DOD Instruction 1308.3, dated November 5, 2002, among other things, requires the active and reserve components to provide an annual report to the Principal Deputy of the OUSD/P&R not later than March 31. DOD stated that the Air Force, the Navy, and the Marine Corps have submitted their reports. DOD noted that exceptions to the reporting requirement for the Air Force and the Army had been approved. 
However, during our review we were told that none of the reports had been submitted to the Principal Deputy as required. We raised concerns in this report about the data reliability of the tracking systems for physical fitness. We found that the reserve components are unable to report compliance with routine physical fitness examinations on a componentwide basis because their data are incomplete and unreliable. Just as we found with routine medical examinations, we also found that DOD lacked quality assurance of the data on compliance with physical fitness examinations in its tracking systems. We do not know what data reliability issues DOD will cite in its annual reports on physical fitness. We note that the responsible office for physical fitness oversight, the Office of Morale, Welfare, and Recreation, does not participate in the Joint Medical Readiness Oversight Committee that is directed to oversee improvements in medical readiness, nor are we aware of any DOD plans to include improvements in the oversight of physical fitness in its comprehensive medical readiness plan. Therefore, we have expanded our first recommendation to include routine physical fitness examinations in the actions to be addressed. DOD concurred with our recommendation that DOD oversee the development of the reserve components' tracking systems to identify and track members' temporary and permanent medical conditions that limit deployability. DOD stated that it is already actively adapting existing systems, and in some cases creating new ones, that can be used to track the medical status of active and reserve members, to include those known conditions that could limit an individual's deployability. 
DOD noted that it continues to pursue better integration between medical and personnel data systems to improve visibility regarding deployment-limiting medical conditions, whether temporary or permanent, but the overall effectiveness will continue to be limited by lack of access to civilian medical records of reserve component members. DOD did not concur with our recommendation that DOD modify the medical predeployment form to better capture reasons for nondeployment and medical referrals. DOD stated that the best sources of accurate information about what medical reasons kept servicemembers from deploying are the permanent medical records. This may be the case, but we continue to believe our recommendation has merit because DOD has no way to systematically analyze the information to determine why servicemembers are medically nondeployable. Because the predeployment form is used to document whether a servicemember is deployable, this existing form could be modified to better capture the reasons why a servicemember is determined nondeployable. Although the form has an entry for a narrative explanation to state why a member is medically nondeployable, AMSA officials informed us that these explanations are often indecipherable, incomplete, and cannot be easily categorized. DOD also stated that the existing predeployment form already includes a list of the most common referral categories to simplify the documentation process for the health care provider. In addition, DOD also noted that data from the forms are captured electronically and are readily available to monitor for trends in referral patterns, among other things. We do not believe that any meaningful analysis of referrals can be derived from these forms because we found that the top medical referral category for both the reserve and active components was "other." 
This heavy use of the category "other" does not provide any insight into what medical care a member is receiving after being called to duty. Given that the rate of medical referrals for the reserve components was almost 40 percent and for the active components was almost 50 percent, we continue to believe that DOD should modify the predeployment form to better capture reasons for nondeployment and medical referrals. DOD partially concurred with our recommendation that DOD determine what preexisting medical conditions should be allowed into a specific theater of operations, especially during the initial stages of operations, and take steps to consistently utilize these criteria for determining medical deployability. DOD stated that certain conditions clearly should render a member nondeployable, and the services have made strides in defining these conditions and incorporating them into their applicable policies and procedures. But DOD also noted that due to the ever-changing nature of a theater of operations and the inexact nature of medicine, a list of nondeployable preexisting conditions will never be fully comprehensive or fully enforceable. We agree that such a list can never be fully comprehensive. However, we still believe DOD could establish a list of what preexisting medical conditions should be allowed into specific theaters of operations, especially during the initial stages of operations, so that in future deployments DOD would not repeat situations such as those in Iraq, where members were deployed with preexisting conditions that clearly should have prevented their deployment. DOD concurred with our recommendation that DOD explore using existing tracking systems to track those who have treatable preexisting medical conditions in theater. DOD noted that refinements to medical tracking systems are ongoing. 
We wish to note that before DOD's tracking systems can be used to track those who have treatable preexisting medical conditions in theater, DOD must determine what preexisting medical conditions should be allowed into a specific theater of operations, as called for in our fifth recommendation. DOD noted in its overall comments that the reserve and active forces use many of the same reporting tools within each service and face the same basic challenges in ensuring data quality. DOD stated that where tracking systems are shared, the reserve components depend on the active components to develop and fund those systems, and that priority for deployment of large systems has historically been given to the active component. DOD also pointed out that our report indicates that the health status of members deteriorates with multiple deployments and that the data we used are self-reported and should be taken with great caution and in the proper context. We used the self-reported data from postdeployment health assessments to help demonstrate the importance of good visibility over the reserve forces. We noted that the demand for reserve personnel, especially within the Army components, continues, and the pool of reserve members used to fill requirements is dwindling. Further, the health status of returning reserve and guard members is not as good as it was before deployment, as our analysis of the pre- and postdeployment health assessments showed. Therefore, it becomes even more important that DOD have good visibility over the health status of the remaining reserve force to help determine what is left for future deployments. DOD's comments are reprinted in their entirety in appendix II. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Chairman of the Joint Chiefs of Staff; and the Director, Office of Management and Budget. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions concerning this report, please contact me at (202) 512-5559 or stewartd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To assess the Department of Defense’s (DOD) ability to determine the reserve components’ compliance with routine medical and physical fitness examinations, we reviewed federal statutes and Office of the Secretary of Defense (OSD) applicable directives and instructions to identify and understand the roles and responsibilities of the offices within DOD for management of the health status of the reserve components. We discussed these statutes and guidance with senior officials in the Office of the Under Secretary of Defense for Personnel and Readiness. We reviewed and discussed service policies and regulations for medical and physical fitness with military officials within the service surgeons’ general offices and other service headquarters’ officials responsible for physical fitness in the service personnel and operations functions. We also reviewed and discussed reserve component policies and guidance for medical and physical fitness examinations with officials within the reserve component surgeons’ general offices and other reserve component officials responsible for physical fitness in the respective reserve component personnel and operations functions. 
We interviewed cognizant officials involved with policy development, administration, tracking, and reporting on compliance with medical and physical fitness examinations from the following offices or commands:

Office of the Secretary of Defense: Assistant Secretary of Defense for Health Affairs, Deployment Health; Assistant Secretary of Defense for Reserve Affairs; and Principal Deputy Under Secretary of Defense for Personnel and Readiness, the Office of Morale, Welfare, and Recreation.

Army: Assistant Secretary of the Army, Manpower and Reserve Affairs; U.S. Army Office of the Surgeon General and Commanding General, Army U.S. Army Reserve Command, Fort McPherson, Georgia; National Guard Bureau; Army National Guard; First U.S. Army, Fort Gillem, Georgia; U.S. Army Forces Command, Fort McPherson, Georgia; Army Fitness School, Fort Benning, Georgia; Fifth U.S. Army, Fort Sam Houston, Texas; U.S. Army Medical Command, Fort Sam Houston, Texas; U.S. Army Dental Command, Fort Sam Houston, Texas; Army Audit Agency; and MEDPROS Program Office.

Navy: Assistant Secretary of the Navy, Manpower and Reserve Affairs; Office of the Chief of Navy Operations; Office of the Chief of Navy Reserve; Bureau of Medicine and Surgery; Commander Navy Reserve Forces Command, New Orleans, Louisiana; and Navy Personnel Command, Millington, Tennessee.

Marine Corps: U.S. Marine Corps Health Services, Headquarters; U.S. Marine Corps Manpower and Reserve Affairs, Headquarters, Quantico, Virginia; and Marine Forces Reserve, Headquarters, New Orleans, Louisiana.

Air Force: Department of the Air Force, Headquarters; Assistant Secretary of the Air Force for Manpower and Reserve Affairs; Office of Air Force Reserve, Headquarters; Air Force Reserve Command, Robins Air Force Base, Georgia; National Guard Bureau; Air National Guard, Headquarters; Air National Guard Readiness Center; and Air Reserve Personnel Center, Denver, Colorado. 
We also conducted medical and physical fitness file reviews with an Army National Guard unit from the Mid-Atlantic region and an Army Reserve unit from the Midwest region. We chose units that had deployed for Operations Enduring Freedom or Iraqi Freedom. During these visits we collected and analyzed information from available medical and personnel files to assess the reserve component members' compliance with routine medical and physical fitness examinations. We also documented difficulties the units had in ensuring that all members complied with medical and physical fitness examinations. Finally, during the site visits, we conducted group discussions with unit members regarding their experience with routine examination requirements. To gain a better understanding of how the components collect data about their members' compliance with routine medical and dental examinations and physical fitness assessments, we assessed the reliability of data produced by several services' databases. Assessing the reliability of the services' data included consideration of issues such as the completeness and currency of the data, based on discussions with the respective database systems' program managers, administrators, and contractors; assurances that all members are included and the information is up to date; and examination of who is using the data and for what purposes, and the users' assessment of reliability. We also examined whether the data tracked through the services' systems were subjected to quality control measures, such as periodic testing of the data against medical records, to ensure the accuracy and reliability of the data. In addition, we reviewed existing documentation related to the data sources and interviewed knowledgeable agency officials about the data. 
Overall, the reserve components’ data we assessed regarding compliance with routine medical and dental examinations and fitness assessments did not accurately reflect the total population of service members, had limited data quality assurance, and were unreliable for the purposes of this report; however, we determined that the Navy Reserve’s medical data were sufficiently reliable for our purposes. Data from the Navy Reserve’s Medical Readiness Reporting System were found reliable because Readiness Commands conduct inspections that include examining the data for accuracy, Medical Department Representatives verify 10 percent of the updated medical records after each weekend drill, and the data are reported to the Commander, Navy Reserve Forces Command biweekly. Further, we did not assess the reliability of the Marine Corps Reserve’s medical data because the Marine Corps was in the process of changing from the Shipboard Automated Medical System, a stand-alone non-Web-based system, to the Navy Reserve’s system. All reserve components’ physical fitness data that we reviewed had missing or incomplete information, had limited data quality controls, or did not accurately reflect the total population of service members due to limited access to the database. Therefore, we determined the data to be unreliable for the purposes of assisting us in determining reserve component members’ compliance with physical fitness examinations. To assess DOD’s visibility over reserve components’ health status after they are called to duty and the care, if any, provided to those deployed with preexisting conditions, we collected and analyzed information from a variety of sources throughout DOD. We interviewed officials at the six reserve component headquarters and officials responsible for mobilizing the reserve components. We also observed the mobilization of Army National Guard and Army Reserve members at Fort Bliss, Texas, to obtain information on their health status during this process. 
We obtained and analyzed data provided on medical deployability from the DOD-wide centralized database on pre- and postdeployment health assessments, maintained at the Army Medical Surveillance Activity, and discussed the available data with Army Medical Surveillance Activity officials. We also obtained and analyzed data on Army servicemembers who were held at mobilization stations for medical reasons from the Army's medical holdover database (Medical Operational Data System). Based on our review of the databases we used, we determined that the DOD-provided data were reliable for the purposes of this report. To address the extent of medical care provided in theater for preexisting medical conditions, we reviewed the Joint Chiefs of Staff procedures for Deployment Health Surveillance and Readiness and information provided by the U.S. Central Command Surgeon General's office regarding medical deployment criteria for Operations Enduring Freedom and Iraqi Freedom. We also collected and reviewed the services' medical instructions, memoranda, policies, and medical data. We reviewed several databases for relevance regarding the collection of in-theater medical data on preexisting conditions. Specifically, we obtained information on and discussed the following databases: Joint Medical Workstation, the U.S. Transportation Command Regulating Command and Control Evacuation System, the Joint Patient Tracking Application, and the Air Force Institute for Operational Health Disease Nonbattle Injury database. However, we did not identify any databases used to collect information on members who may have had preexisting conditions when deployed. We also interviewed military medical officials who had served in theater to obtain information on preexisting conditions of reserve component members while deployed. 
In addition to those offices and commands previously listed, we discussed reserve component medical deployment policies, medical and physical fitness policies and instructions, and data regarding medical and physical fitness issues with responsible officials from:

Joint Chiefs of Staff, J-4 (Logistics), Medical Readiness Division; U.S. Transportation Command, Scott AFB, Illinois; U.S. Central Command, MacDill AFB, Florida; and Army Medical Surveillance Activity.

Army: U.S. Army Office of the Surgeon General and Commanding General, Army U.S. Army Center for Health Promotion and Preventive Medicine-Europe; Army Reserve Unit, Midwest region; Walter Reed Army Medical Center; and Soldier Readiness Processing, Medical Operations, Fort Bliss, Texas.

Navy: Navy Reserve Readiness Command Southwest, California; Navy and Marine Corps Reserve Center, California; and Navy Branch Medical Clinic, Virginia.

Marine Corps: Marine Corps Mobilization Command, Kansas City, Missouri; and 4th Combat Engineer Battalion, Maryland.

Air Force: Air Force Institute for Operational Health; 142nd Fighter Wing Air National Guard, Portland International Airport; 163rd Air Refueling Wing Air National Guard, March Air Reserve Base; 349th Air Mobility Wing U.S. Air Force Reserve, Travis Air Force Base; and 452nd Air Mobility Wing U.S. Air Force Reserve, March Air Reserve Base, California.

We reviewed Air Force audit and inspection reports. We interviewed officials with the Air Force Audit Agency regarding its report on the Air Force's Individual Deployment Process to obtain a better understanding of the report's scope and methodology to assess reserve components' compliance with medical and dental requirements. We assessed the reliability of the Air Force Audit Agency's analyses by (1) reviewing relevant documentation of their analyses, and (2) interviewing knowledgeable officials about the audit work and analyses. 
We determined the analyses were sufficiently reliable to use as one of the sources of evidence describing the extent of discrepancies in Air Force medical and dental records. We also reviewed the Air Force Inspection Agency's Health Services reports and its annual analysis reports for calendar year 2004. We also found DOD's Army Medical Surveillance Activity (AMSA) database and the Army's Medical Operational Data System (MODS) to be sufficiently reliable for the purposes of our report, based on their data quality controls and currency, our review of existing information about the systems and the resulting data, and discussions with cognizant agency officials. We interviewed the Chief of AMSA. We discussed the information in the DOD-wide centralized health assessment database and obtained selected data from all the reserve and active component members' pre- and postdeployment health assessments that were completed from November 2001 through June 2005. Assessments became mandatory for all mobilized reserve component members on October 25, 2001. The data we obtained contained predeployment health assessment records for 383,449 reserve component members and 627,200 active component members. We analyzed the data that we obtained to determine the categories of medical referrals and deployability status. We also analyzed data on the self-reported general health of the reserve component members and compared the data from predeployment assessments with the data from postdeployment assessments. All of our analyses compared data across the reserve components to look for differences or trends. Further, we reviewed the Army's medical holdover data in MODS and found them reliable for our reporting purposes. The Office of the Army Surgeon General uses MODS to monitor and track the medical holdover population. 
The intended use of this system is for the MEDCOM and other command elements to track active and reserve component servicemembers in outpatient medical treatment while still on active duty status. We conducted our review from October 2004 through September 2005 in accordance with generally accepted government auditing standards.

In addition to the contact named above, Brenda S. Farrell, Assistant Director; James Bancroft, Larry Bridges, Renee S. Brown, Sara Hackley, Kenya Jones, Ron La Due Lake, Karen Kemper, Julia Matta, Jen Popovic, and Nicole Volchko made key contributions to this report.

Defense Health Care: Improvements Needed in Occupational and Environmental Health Surveillance during Deployment to Address Immediate and Longstanding Health Issues. GAO-05-632. Washington, D.C.: July 19, 2005.

Reserve Forces: An Integrated Plan Is Needed to Address Army Reserve Personnel and Equipment Shortages. GAO-05-660. Washington, D.C.: July 12, 2005.

Defense Health Care: Force Health Protection and Surveillance Policy Compliance Was Mixed, but Appears Better for Recent Deployments. GAO-05-120. Washington, D.C.: November 12, 2004.

Military Personnel: DOD Needs to Address Long-term Reserve Force Availability and Related Mobilization and Demobilization Issues. GAO-04-1031. Washington, D.C.: September 15, 2004.

Defense Health Care: DOD Needs to Improve Force Health Protection and Surveillance Processes. GAO-04-158T. Washington, D.C.: October 16, 2003.

Defense Health Care: Quality Assurance Process Needed to Improve Force Health Protection and Surveillance. GAO-03-1041. Washington, D.C.: September 19, 2003.

Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003.

Military Personnel: DOD Actions Needed to Improve the Efficiency of Mobilizations for Reserve Forces. GAO-03-921. Washington, D.C.: August 21, 2003.

Defense Health Care: Army Has Not Consistently Assessed the Health Status of Early-Deploying Reservists. GAO-03-997T. Washington, D.C.: July 9, 2003.

Defense Health Care: Army Needs to Assess the Health Status of All Early-Deploying Reservists. GAO-03-437. Washington, D.C.: April 15, 2003.

Defense Health Care: Most Reservists Have Civilian Health Coverage but More Assistance Is Needed When TRICARE Is Used. GAO-02-829. Washington, D.C.: September 6, 2002.

VA and Defense Health Care: Military Medical Surveillance Policies in Place, but Implementation Challenges Remain. GAO-02-478T. Washington, D.C.: February 27, 2002.

Gender Issues: Improved Guidance and Oversight Are Needed to Ensure Validity and Equity of Fitness Standards. GAO/NSIAD-99-9. Washington, D.C.: November 17, 1998.

Defense Health Care: Medical Surveillance Improved Since Gulf War, but Mixed Results in Bosnia. GAO/NSIAD-97-136. Washington, D.C.: May 13, 1997.

Reserve Forces: DOD Policies Do Not Ensure That Personnel Meet Medical and Physical Fitness Standards. GAO/NSIAD-94-36. Washington, D.C.: March 23, 1994.

Operation Desert Storm: War Highlights Need to Address Problem of Nondeployable Personnel. GAO/NSIAD-92-208. Washington, D.C.: August 31, 1992.

Operation Desert Storm: Full Army Medical Capability Not Achieved. GAO/NSIAD-92-175. Washington, D.C.: August 18, 1992.

National Guard: Peacetime Training Did Not Adequately Prepare Combat Brigades for Gulf War. GAO/NSIAD-91-263. Washington, D.C.: September 24, 1991.
The Department of Defense's (DOD) operations in time of war or national emergency depend on sizeable reserve force involvement, and DOD expects future use of the reserve force to remain high. Operational readiness depends on healthy and fit personnel. Long-standing problems have been identified with reserve members not being in proper medical or physical condition. By law, drilling members of the reserve force are required to have a medical exam every 5 years and to provide an annual certificate of their medical status. DOD policies also require an annual dental exam and an annual evaluation of physical fitness. Compliance with these routine requirements is the first step in determining who is fit for duty. Public Law 108-375 required GAO to study DOD's management of the health status of reserve members activated for Operations Enduring Freedom and Iraqi Freedom. GAO assessed DOD's (1) ability to determine reserve force compliance with routine exams, and (2) visibility over reserve members' health status after they are called to duty and the care, if any, provided to those deployed with preexisting conditions. DOD is unable to determine the extent to which the reserve force complied with routine examinations due to a lack of complete or reliable data. Although each reserve component employs a tracking system capable of monitoring compliance with medical exams, only one component has taken the necessary quality assurance steps to ensure the reliability of its data. While the Office of the Under Secretary of Defense for Personnel and Readiness has the responsibility for overseeing medical and physical fitness policy and processes, it has not established a management control framework and executed a plan to oversee compliance with routine examinations. Specifically, this office has not held all responsible levels accountable or ensured that all requirements are being met and that complete and reliable data are being entered into the appropriate tracking systems. 
For example, this office has not enforced its own requirement for the services to report on the components' physical fitness status. Without complete and reliable data, DOD is not in a sound position to provide the Secretary of Defense or Congress assurances that the reserve force is medically and physically fit when called to active duty. DOD has only limited visibility over the health status of reserve members after they are called to duty and is unable to determine the extent of care provided to those members deployed with preexisting medical conditions, despite the existence of various sources of medical information. The components collect various types of medical data but vary in their ability to systematically identify, track, and report information on those with temporary and permanent conditions that may limit deployability. In addition, medical information is captured on predeployment forms for all members and entered into a DOD-wide centralized database. GAO has previously reported that the database has missing and incomplete health data, and DOD is working to correct this through its quality assurance program. GAO found during this review that DOD has continued to make progress entering the data from the forms into the database, but the data are still incomplete, and the reasons why members are determined medically nondeployable are not captured in a way that is easily discernible. While the Under Secretary of Defense continues to have responsibility for overseeing the medical and physical fitness of reserve members after they are called to duty, the combatant commanders, under the Joint Chiefs of Staff, have this responsibility for the theater. DOD is unable to determine the care provided to those deployed with preexisting medical conditions because DOD has not determined what preexisting conditions may be allowed into a specific theater and, thus, does not know what conditions to track.
Evidence GAO developed suggests that members are deployed into theater with preexisting conditions, such as diabetes, heart problems, and cancer. The impact of those who are not medically and physically fit for duty could be significant for future deployments as the pool of reserve members from which to fill requirements is dwindling and those who have deployed are not in as good health as they were before deployment.
As of June 2009, SBA fully addressed requirements for 13 of 26 provisions of the Act; partially addressed 8; and took no action on 5 that are not applicable at this time (see table 1). For the 13 provisions SBA fully addressed, the agency's actions included putting in place a secondary facility in Sacramento, California, to process loans during times when the main facility in Fort Worth, Texas, is unavailable; making improvements to DCMS to track and follow up with applicants; and expanding its disaster reserve staff from about 300 to more than 2,000 individuals. Furthermore, according to SBA and our review, 5 provisions require no action by SBA at this time because they are discretionary or additional appropriations are needed before SBA can satisfy the Act's requirements. The Act's provisions include the following:
- SBA is permitted to make economic injury disaster loans to nonprofits.
- SBA must ensure its disaster assistance programs are coordinated to the maximum extent practicable with FEMA programs.
- SBA must improve public awareness of disaster declarations and application periods and create a marketing and outreach plan.
- SBA must conduct a study looking at the consistency between standard operating procedures and regulations of the Disaster Loan Program.
- SBA increased loan amounts from $10,000 to $14,000 without requiring collateral.
- SBA is authorized to use private contractors to process disaster loans and to coordinate efforts with IRS to expedite loan processing.
- SBA must develop, implement, or maintain a centralized information system to track and follow up with disaster loan applicants.
- SBA is authorized to increase the deferment period of loans, but the deferment may not exceed 4 years.
- SBA must put in place a secondary facility for processing disaster loans in case the primary facility is unavailable.
- SBA cannot require the borrower to pay any non-amortized amounts for the first 5 years after repayment begins.
- SBA is authorized to make economic injury disaster loans in cases of ice storms and blizzards.
- SBA must develop and implement a major disaster response plan and conduct a disaster simulation exercise at least once every 2 years.
- SBA must assign an individual the disaster planning responsibilities and report to Congress.
- SBA should ensure that the number of full-time equivalent ODA employees is not fewer than 800 and that the disaster cadre is not fewer than 1,000.
- SBA must develop, implement, or maintain a comprehensive written disaster response plan and update the plan annually.
- SBA must develop long-term plans to secure sufficient office space to accommodate an increased workforce in times of disaster.
- SBA may not rely solely on the loan applicant's business being a major source of employment prior to the disaster to qualify for disaster loans beyond the current statutory limit.
- The maximum disaster loan amount increased from $1.5 million to $2 million.
- SBA may guarantee any surety against loss on a bid, payment, performance, or ancillary bond on any work order or contract that at the time of the bond execution does not exceed $5 million.
- If the President declares a major disaster, SBA may declare eligibility for additional disaster assistance.
- SBA is permitted to make economic injury disaster loans to eligible small business concerns located anywhere in the United States (including outside the disaster area) when SBA declares eligibility for additional disaster assistance.
- SBA must establish and implement a Private Disaster Assistance Program.
- SBA may guarantee timely payment of principal and interest on private disaster loans issued to eligible small businesses and homeowners within an eligible disaster area.
- SBA must establish an Immediate Disaster Assistance Program to provide immediate small-dollar loans through private lenders.
- SBA must establish an Expedited Disaster Assistance Business Loan Program.
- SBA is allowed to institute a program to refinance Gulf Coast disaster loans resulting from Hurricanes Katrina, Rita, or Wilma up to amounts no greater than the original loans.
- SBA must submit a report to Congress on disaster assistance.
Table 1 rates each provision as addressed (initial or ongoing) or deadline met; partially addressed or some deadlines met; not addressed or missed deadline; or not applicable because no action needs to be taken by SBA at this time, due to the provision's discretionary nature. The Act requires SBA to issue regulations for certain of these provisions. SBA has not yet issued an updated DRP.
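The status counts cited above are internally consistent; as a quick illustrative check (a minimal sketch using paraphrased category labels, not official SBA terms), the three categories sum to the Act's 26 provisions:

```python
# Illustrative tally of SBA's progress on the Act's provisions as of June 2009.
# Category labels are paraphrased from the testimony, not official terms.
provision_status = {
    "fully addressed": 13,
    "partially addressed": 8,
    "no action required at this time": 5,
}

total_provisions = sum(provision_status.values())
print(total_provisions)  # prints 26, matching the Act's 26 provisions
```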
While SBA has taken some steps toward implementing the Act, the agency still needs to take additional steps to completely address 8 provisions. According to SBA officials, the agency has not yet completely addressed some provisions that require new regulations because to do so, the agency must make extensive changes to current programs or implement new programs––such as the Immediate and Expedited Disaster Assistance Programs––to satisfy requirements of the Act. These programs, which require participation of private lenders, would be designed to provide businesses with access to short-term loans while they are waiting for long-term assistance. Moreover, SBA has not issued an update of its comprehensive DRP, as required by the Act, that reflects recent changes resulting from the Act's requirements, as well as SBA's own reform efforts. Delays in updates to the DRP limit the agency's ability to adequately prepare for and respond to disasters. Also, SBA has not fully addressed the requirement for providing region-specific marketing and outreach and ensuring the information is made available to Small Business Development Centers (SBDCs) and other local resources. We consistently heard from regional entities, such as SBDCs and emergency management groups, about the need for more up-front information on SBA's Disaster Loan Program and their expected roles and responsibilities in disaster response efforts. By taking such actions, SBA could leverage the efforts and capacity of SBDCs, as well as state and local emergency management agencies, and ensure that it and they will be better prepared for future events, especially in disaster-prone areas. Furthermore, the Act established multiple new reporting requirements, and while SBA has addressed some of these, the agency has failed to comply with the Act and issue a first annual report on disaster assistance––which was due in November 2008.
Specifically, the Act requires that SBA report annually on the total number of SBA disaster staff, major changes to the Disaster Loan Program (such as changes to technology or staff responsibilities), a description of the number and dollar amount of disaster loans made during the year, and SBA's plans for preparing and responding to possible future disasters. Failure to produce annual reports on schedule can lead to a lack of transparency on the agency's progress in reforming the program. Additionally, 9 provisions set forth in the Act are subject to deadlines, which the agency has had limited success in meeting. The agency also has not developed a plan with expected time frames for addressing the remaining requirements. SBA's failure to provide reports to Congress and to put an implementation plan in place for addressing the remaining requirements can lead to a lack of transparency about the agency's Disaster Loan Program, program improvement, and capacity to reform the program, as well as its ability to adequately prepare for and respond to disasters. SBA's initial response following the 2008 Midwest floods and Hurricane Ike aligned with major components of its DRP, such as infrastructure, human capital, information technology, and communications. Additionally, individuals we spoke with who were affected by both disasters considered the agency's overall performance somewhat positive, but believed the disaster loan process could be improved. In May 2008, floods devastated 85 counties in Iowa (one of several states affected) and in September 2008, Hurricane Ike devastated 50 counties in Texas. SBA and SBDC officials, state and local representatives, private-entity officials, and business owners in Iowa and Texas told us that in the days immediately following the disasters, SBA's Office of Disaster Assistance staff reported to the affected areas and began providing needed disaster assistance.
These individuals also said that SBA staff provided outreach and public information about its Disaster Loan Program; distributed application information; assigned knowledgeable customer service representatives to various Disaster and Business Recovery Centers; and assisted in the initial application process by answering questions, providing guidance, and offering one-on-one help––as outlined in SBA’s DRP. In addition, our review of SBA’s 2008 Disaster Loan Program Customer Satisfaction Survey also showed that respondents were somewhat satisfied with the assistance SBA provided during other recent disasters. However, both the individuals we interviewed and survey results indicated areas for improvement and opportunities to increase satisfaction with SBA’s disaster loan process. For example, individuals we interviewed and survey responses pointed to concerns about the amount of paperwork required to complete SBA’s disaster loan application and the timeliness of loan disbursements. Also, some business owners said they had to provide copies of 3 years of federal income tax returns, although they had signed an Internal Revenue Service (IRS) form 8821—Tax Information Authorization—which allows SBA to get tax return information directly from IRS. To address these concerns, the individuals we interviewed suggested several changes to the program, such as eliminating the requirement that business loan applicants provide copies of IRS tax records; providing partial disbursements earlier in the process; using bridge loans to help ensure disaster victims receive timely assistance; and involving SBA, SBDCs, and state and local officials in joint pre-planning and disaster preparedness efforts. Though SBA officials told us they have been taking steps to improve the application process, these steps and improvement efforts were not documented. 
In addition, we found that while SBA conducts an annual customer satisfaction survey, the agency does not appear to incorporate this feedback mechanism into its formal efforts to continually improve the application process. Furthermore, SBA does not appear to have a formal process for addressing identified problem areas and using this experience to improve the application process for future applicants. By establishing such a process to address identified problem areas, SBA could better demonstrate its commitment to improving the Disaster Loan Program. As discussed in our report, while SBA has made progress, the agency has missed opportunities to further improve its Disaster Loan Program, and in particular improve the application process for future applicants. In our report, we made five recommendations to facilitate SBA’s progress in meeting and complying with requirements of the Act and improve the Disaster Loan Program. Specifically, we recommended that the Administrator of SBA: (1) develop procedures for regional entities that would enable SBA to meet all region-specific requirements of the Act and ensure regional entities, such as SBDCs, have this information and other Disaster Loan Program information readily available prior to the likely occurrence of a disaster; (2) complete the first annual report to Congress on disaster assistance and adhere to the time frame for subsequent reports; (3) expeditiously issue an updated DRP that reflects recent changes resulting from the Act’s requirements, as well as SBA’s own reform efforts; (4) develop an implementation plan and report to Congress on the agency’s progress in addressing the requirements of the Act, including milestone dates for completing implementation; and (5) develop and implement a process to address identified problems in the disaster loan application process for future applicants. 
SBA generally agreed with our recommendations and stated the agency’s plan to incorporate them into its ongoing efforts to implement the Act and improve the application process. Specifically, SBA plans to expand its outreach efforts to ensure the public in all regions of the country are more aware of SBA disaster assistance programs before a disaster strikes. SBA is also planning to submit both the required annual report, and the 2009 revision to its DRP to Congress by November 15, 2009. Additionally, SBA officials said the agency has plans to develop an implementation plan for completion of the remaining provisions. Finally, in response to our recommendation on the application process, SBA cited ongoing efforts since 2005 to improve its application process, such as the electronic loan application, and said the agency has plans to continue its improvement efforts and make them an ongoing priority. However, SBA did not say how it would implement a formal process to address identified problem areas in the disaster loan application process. Madam Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or ShearW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Kay Kuhlman, Assistant Director; Michelle Bowsky, Beth Faraguna, and Alexandra Martin-Arseneau. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our work on reforms made to the Small Business Administration's (SBA) Disaster Loan Program and the impact those reforms had following recent disasters. SBA plays a critical role in assisting the victims of natural and other declared disasters. SBA provides financial assistance through its Disaster Loan Program to help homeowners, renters, businesses of all sizes, and nonprofits recover from disasters such as earthquakes, hurricanes, and terrorist attacks. Since the agency's inception in 1953, SBA has approved more than $46 billion in disaster loans for homeowners, businesses, and nonprofit organizations. After the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma), SBA faced an unprecedented demand for disaster loans, while also being confronted with a significant backlog of applications; therefore, hundreds of thousands of loans were not disbursed in a timely way. Many criticized SBA for what was perceived to be a slow and confusing response to the disasters and one that exposed many deficiencies in the agency's Disaster Loan Program and demonstrated the need for reform. For example, as we stated in our February 2007 report, SBA did not engage in or complete comprehensive disaster plans before the Gulf Coast hurricanes, and this limited logistical disaster planning likely contributed to the initial challenges the agency faced in responding to the 2005 hurricanes. As a result, Congress and SBA agreed that the program needed significant improvements. Since then, SBA has taken several steps to reform its Disaster Loan Program which include creating an online loan application, increasing the capacity of its Disaster Credit Management System (DCMS), and developing a Disaster Recovery Plan (DRP). In June 2008, Congress enacted the Small Business Disaster Response and Loan Improvements Act (Act) to expand steps taken by SBA and require new measures to ensure that SBA is prepared for future catastrophic disasters. 
Prior to the fall of 2005, the U.S. stabilization and reconstruction effort in Iraq lacked a clear, comprehensive, and integrated U.S. strategy. State assessments and other U.S. government reports noted that this hindered the implementation of U.S. stabilization and reconstruction plans. A review of the U.S. mission completed in October 2005 found, among other things, that (1) no unified strategic plan existed that effectively integrated U.S. government political, military, and economic efforts; (2) multiple plans in Iraq and Washington had resulted in competing priorities and funding levels not proportional to the needs of overall mission objectives; (3) focused leadership and clear roles were lacking among State, DOD, and other agencies in the field and in Washington, D.C.; and (4) a more realistic assessment of the capacity limitations of Iraq's central and local government was needed. In November 2005, the National Security Council (NSC) issued the National Strategy for Victory in Iraq (NSVI) to clarify the President's existing strategy for achieving U.S. political, security, and economic goals in Iraq. According to this document, prevailing in Iraq is a vital U.S. national interest because it will help win the war on terror and make America safer, stronger, and more certain of its future. To achieve victory, the strategy requires the United States to maintain troops in Iraq until its objectives are achieved, adjusting troop strength as conditions warrant. The strategy reorganized U.S. government stabilization and reconstruction efforts along three broad tracks—political, security, and economic—and eight strategic objectives (see fig. 1). Overall, officials in DOD and State identified seven documents that describe the U.S. government strategy for Iraq in addition to the NSVI. Figure 2 shows the NSVI and key supporting documents. The U.S.
government uses these documents to plan, conduct, and track efforts at the strategic, operational, and implementation levels. Our work has identified six characteristics of an effective national strategy. National strategies with these characteristics offer policymakers and implementing agencies a management tool that can help ensure accountability and more effective results. The six characteristics are (1) a clear purpose, scope, and methodology; (2) a detailed discussion of the problems, risks, and threats the strategy intends to address; (3) the desired goals and objectives, and outcome-related performance measures; (4) a description of the U.S. resources needed to implement the strategy; (5) a clear delineation of the U.S. government's roles, responsibilities, and mechanisms for coordination; and (6) a description of how the strategy is integrated internally (that is, among U.S. agencies) and externally (in this case, with the Iraqi government and international organizations). These six characteristics can be subdivided into 27 separate elements. For a more detailed assessment, see appendix I. The NSVI aims to improve U.S. strategic planning for Iraq; however, the NSVI and supporting documents do not fully address all of the six desirable characteristics of effective national strategies that GAO has identified through its prior work. We used these six characteristics to evaluate the strategy—that is, the NSVI and supporting documents that DOD and State officials said encompassed the U.S. strategy for rebuilding and stabilizing Iraq. As figure 3 shows, the strategy generally addresses three of the six characteristics but only partially addresses three others, limiting its usefulness to guide agency implementation efforts and achieve desired results.
Moreover, since the strategy is dispersed among several documents instead of one, its effectiveness as a planning tool for implementing agencies and for informing Congress about the pace, costs, and intended results of these efforts is limited. Although the June 2006 Camp David fact sheet provides additional detail on recent U.S. and Iraqi actions, it does not address the key shortfalls we identified in the three areas. The strategy provides (1) a clear statement of its purpose and scope; (2) a detailed discussion of the problems, risks, and threats; and (3) an explanation of its goals, subordinate objectives, and activities but a limited discussion of outcome-oriented performance measures. This characteristic addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. A complete description of purpose, scope, and methodology makes the document more useful to organizations responsible for implementing the strategies, as well as to oversight organizations such as Congress. The NSVI and supporting documents generally address this characteristic by identifying U.S. government efforts to rebuild and stabilize Iraq in terms of these three overarching objectives and address the assumptions that guided the strategy’s development. For example, to help Iraq achieve the strategic goal of forging a national compact for democratic government, the strategy’s subordinate objectives state that the United States would help promote transparency in the executive, legislative, and judicial branches of government, and help build national institutions that transcend regional and sectarian interests, among other activities. This characteristic addresses the particular problems, risks, and threats the strategy is directed at, as well as risk assessment of the threats to and vulnerabilities of critical assets and operations. 
Specific information on both risks and threats helps responsible parties better implement the strategy by ensuring that priorities are clear and focused on the greatest needs. The NSVI and supporting documents generally address some of the problems, risks, and threats found in Iraq. For example, the NSVI identifies the risks posed by the insurgency and identifies three basic types of insurgents—rejectionists, supporters of former Iraqi President Saddam Hussein, and terrorists affiliated with or inspired by al Qaeda—and the different actions needed to confront each one. In addition, various supporting documents provide additional information on the threats of the Shi’a militias and the corruption that could affect the Iraqi government’s ability to become self-reliant, deliver essential services, reform its economy, strengthen rule of law, maintain nonsectarian political institutions, and increase international support. This characteristic addresses the goals of the national strategy and the steps needed to attain those goals, as well as the priorities, milestones, and outcome-related performance measures to enable more effective oversight and accountability. The NSVI generally addresses goals and subordinate objectives by identifying 8 strategic objectives (pillars), 46 subordinate objectives, or “lines of action,” and numerous project activities, but only partially addresses outcome-related performance measures. The supporting strategy documents also provide information on how progress will be monitored and reported. In addition, the NSVI identifies the process for monitoring and reporting on progress via interagency working groups. It also identifies some metrics to assess progress, such as the number of Iraqis willing to participate in the political process, the quality and quantity of the Iraqi units trained, and barrels of oil produced and exported. However, the metrics the strategy uses to report progress make it difficult to determine the impact of the U.S. 
reconstruction effort. We reported previously that in the water resources and sanitation sector, little was known about how U.S. efforts were improving the amount and quality of water reaching Iraqi households or their access to sanitation services because the U.S. government only tracked the number of projects completed or under way. For instance, as of March 2006, Iraq had the capacity to produce 1.1 million cubic meters of water per day, but this level overestimated the amount of potable water reaching Iraqi households. U.S. officials estimate that 60 percent of water treatment output is lost due to leakage, contamination, and illegal connections. The U.S. mission in Iraq reported in December 2005 that it had developed a set of metrics to better estimate the potential impact that U.S. water and sanitation reconstruction efforts were having on Iraqi households, but acknowledges it is difficult to measure how much water Iraqis are actually receiving or whether the water is potable. The mission report notes that without such comprehensive data, mission efforts to accurately assess the impact of U.S. reconstruction efforts on water and sanitation services are seriously limited. The NSVI and supporting documents only partially (1) delineate the roles and responsibilities of key U.S. government agencies; (2) describe how the strategy will be integrated among U.S. entities, the Iraqi government, and international organizations, and the mechanisms for coordination; and (3) identify what the strategy will cost and the sources of financing. This characteristic addresses which U.S. organizations will implement the strategy as well as the roles, responsibilities, and mechanisms for coordinating their efforts. The NSVI and the supporting documents partially address the roles and responsibilities of specific U.S. government agencies and offices and the process for coordination.
For example, National Security Presidential Directive 36 makes the Department of State responsible for the non-security aspects of reconstruction and lays out key roles for the U.S. Chief of Mission in Baghdad and CENTCOM. It directs that the Commander of CENTCOM will, under the guidance of the Chief of Mission, oversee all U.S. government efforts to train and equip Iraqi security forces. However, it is not clear which agency is responsible for implementing the overlapping activities listed under the NSVI's eight strategic objectives. For instance, one activity is to promote transparency in the executive, legislative, and judicial branches of the Iraqi government; however, the NSVI and supporting documents do not indicate which agency is responsible for implementing this activity, or who is to be held accountable for results. Moreover, little guidance is provided to assist implementing agencies in resolving conflicts among themselves, as well as with other entities. In our prior work, we found that delays in reconstruction efforts sometimes resulted from lack of agreement among U.S. agencies, contractors, and Iraqi authorities about the scope and schedule for the work to be performed. This characteristic addresses how a national strategy relates to the goals and activities of other strategies, to other entities, and to documents from implementing organizations, helping these entities understand their roles and responsibilities. The NSVI and supporting documents partially address how the strategy relates to the goals, objectives, and activities of other international donors and the Iraqi government. For instance, the NSVI and supporting documents identify the need to integrate the efforts of the coalition, the Iraqi government, and other nations but do not discuss how the U.S. goals and objectives will be integrated. In addition, the strategy does not address what it expects the international community or the Iraqi government to pay to achieve future objectives.
This characteristic addresses what the strategy will cost; where resources will be targeted to achieve the end-state; and how the strategy balances benefits, risks, and costs. The November 2005 National Strategy for Victory in Iraq and related supporting documents do not clearly identify the costs of U.S. military operations, including the costs to repair and replace equipment used during operations. The strategy does not identify other key related costs, including the costs of training, equipping, and supporting Iraq’s security forces; the costs of rebuilding, maintaining, and protecting critical oil and electricity infrastructure; or the costs of building management capacity in Iraq’s central ministries and 18 provincial governments. In addition to these costs, the new Iraqi government will need significant help in building the procurement, financial management, accountability, and other key systems needed to govern and provide basic services to its citizens. U.S. government agencies have reported significant costs associated with the global war on terror (GWOT), which includes military operations in Iraq. However, we have serious concerns about the reliability of DOD’s reported cost data. GAO’s prior work found numerous problems with DOD’s processes for recording and reporting GWOT costs, including long-standing deficiencies in DOD’s financial management systems and business processes, the use of estimates instead of actual cost data, and the lack of supporting documentation. As a result, neither DOD nor Congress knows how much the war on terror is costing or how appropriated funds are being used. The current financial picture is complicated by the extensive use of emergency supplemental funds to pay for the costs of U.S. activities in Iraq. 
While this funding mechanism might have been appropriate in the early months of the war, use of the regular budget process would promote greater transparency and accountability and better management of the stabilization and reconstruction effort. I will further address issues related to GWOT costs at subsequent hearings before this subcommittee. The dispersion of information across several documents limits the strategy’s overall coherence and effectiveness as a planning tool for implementing agencies and as an oversight tool for informing Congress about the pace, costs, and results of these efforts. Since the NSVI’s supporting documents were written by different agencies at different points in time, the information in the documents is not directly comparable, which diminishes their value. The June 2006 Camp David fact sheet provides some additional detail on recent U.S. government plans to help Iraq’s new national unity government achieve some of its short-term security, economic, and political objectives. However, it does not redress identified shortfalls in the U.S. strategy such as the lack of information on costs. Although the NSC and the Departments of Defense and State did not comment on the recommendation made in the report we are issuing today, State noted that we misrepresented the NSVI’s purpose—to provide the public with a broad overview of the U.S. strategy for Iraq. However, our analysis was not limited to the NSVI but was based on all of the classified and unclassified documents that collectively define the U.S. strategy for Iraq: (1) the National Security Presidential Directive 36 (May 2004), (2) Multinational Forces-Iraq (MNF-I) Campaign Plan (August 2004), (3) the MNF-I/ U.S. 
Embassy Baghdad Joint Mission Statement on Iraq (December 2005), (4) the Multinational Corps-Iraq Operation Order 05-03 (December 2005), (5) the National Strategy for Supporting Iraq (updated January 2006), (6) the quarterly State Section 2207 reports to Congress (through April 2006), and (7) the April 2006 Joint Campaign Plan issued by the Chief of Mission and the Commander of the MNF-I. We also reviewed appropriations and budget documents. Collectively, these documents still do not fully address all of the key characteristics of an effective national strategy. However, we refined our recommendation to focus on the need to improve the U.S. strategy for Iraq, not just the NSVI. Other GAO work shows that security, political, and economic factors have hampered and will continue to hamper U.S. efforts to stabilize Iraq and achieve key U.S. goals. First, increases in attacks against the coalition and its Iraqi partners, growing sectarian violence, and the influence of militias have adversely affected U.S. and Iraqi efforts to secure Baghdad and other strategic cities. Second, sectarian control over ministries and the lack of skilled employees hinder efforts to improve Iraq’s governance by building the capacity of ministries and reconciling differences among sectarian interests. Third, security, corruption, and fiscal problems limit U.S. and Iraqi plans to revitalize Iraq’s economy and restore essential services in the oil and electricity sectors. A linchpin of the current U.S. strategy is that, as Iraqi forces “stand up,” U.S. forces will “stand down.” According to the NSVI, putting capable Iraqis forward in the fight against the enemy would increase the overall effectiveness of U.S.-Iraqi operations, as Iraqis are better able to collect intelligence and identify the threats in neighborhoods. The Secretaries of Defense and State have reported progress in developing Iraqi army and police units. 
According to State Department reports, the number of trained army and police forces has increased from about 174,000 in July 2005 to about 268,000 as of June 2006. This represents about 82 percent of the planned security force strength of 326,000. DOD has also reported that Iraqi army units are becoming increasingly capable of leading counterinsurgency operations with coalition support. Although the number of Iraqi security forces is increasing, these forces still lack the logistical, command and control, and intelligence capabilities to operate independently. Even as the number and capabilities of Iraqi security forces have increased, overall security conditions have deteriorated, as evidenced by attack trends, sectarian violence, and the growth and influence of militias. Enemy-initiated attacks against the coalition, its Iraqi partners, and infrastructure have continued to increase over time (see fig. 4). Overall, attacks increased by 23 percent from 2004 to 2005. After declining in the fall of 2005, the number of attacks rose in April 2006 to its highest level ever. The monthly attacks data for May and June remain classified. However, DOD publicly reported in May 2006 that the average number of weekly attacks was higher for the February to May 2006 time period than for any previous period. Further, in late June 2006 the MNF-I Commanding General publicly stated that attack levels in Iraq had increased. Moreover, a senior U.S. military officer said that the recent security operation in Baghdad had led to an increase in the number of attacks in the area. I recently asked the Secretary of Defense to routinely declassify monthly attacks data in a timely manner. The enemy-initiated attacks data help inform Congress and the American public on progress in improving Iraq’s security situation, an important consideration in any decision to reduce the U.S. military presence in Iraq. 
While attacks data alone may not provide a complete picture of Iraq’s security situation, we believe they provide a sound depiction of general security trends in the country. According to a June 2006 United Nations (UN) report, an increasingly complex armed opposition continues to be capable of maintaining a consistently high level of violent activity across Iraq. Baghdad, Ninewa, Salahuddin, Anbar, and Diyala have been experiencing the worst of the violence. Other areas, particularly Basra and Kirkuk, have recently witnessed increased tension and a growing number of violent incidents. Sectarian tensions and violence increased after the bombing of a holy Shi’a shrine in Samarra in February 2006. A June 2006 UN report states that, in recent months, much of the violence was committed by both sides of the Sunni-Shi’a sectarian divide. Groups specifically targeted included prominent Sunni and Shi’a Iraqis, government workers and their families, members of the middle class (such as merchants and academics), people working for or associated with MNF-I, and Christians. The presence of militia groups in Iraq has become more prominent in recent months and threatens Iraq’s stability. Although the total number of militias is unknown, a DOD report said that more than a dozen militias have been documented in Iraq, varying in size, extent of organizational structure, and area of influence. The largest of the known militias include (1) the Badr Organization, a militia group of the Supreme Council for the Islamic Revolution in Iraq, (2) the Mahdi Army, a militia group of radical Shi’a cleric Muqtada al-Sadr, and (3) the Kurdish Peshmerga, the primary security force for the Kurdish regional government, in the northern region of Iraq. The Coalition Provisional Authority developed a strategy for disbanding or controlling militias in May 2004, and the Iraqi Constitution prohibits the formation of militias outside the framework of the armed forces. 
Many militias, however, remain present in Iraq and threaten the country’s stability. Since the February 2006 Samarra bombing, the number of attacks by militia groups has increased. According to the MNF-I Commanding General, Iran has increased its support of a variety of Shi’a extremist groups in southern Iraq since the beginning of this year. Iraq’s new government is addressing two critical issues—how to foster national reconciliation and how to strengthen its government so it can deliver essential services and provide security to all Iraqis. However, Iraqi efforts to foster reconciliation are confronted primarily by sectarian divisions between Shi’a and Sunni groups. Moreover, U.S. and Iraqi efforts to strengthen government ministries face the daunting task of developing the ability of Iraq’s ministries to govern after 30 years of autocratic rule. On June 25, 2006, a few weeks after the formation of Iraq’s first permanent government, Iraq’s Prime Minister proposed a 24-point reconciliation plan for the nation. The plan’s provisions include initiating a national dialogue with all parties, including those opposed to the government; providing amnesty for detainees and others not involved in terrorist acts; and ensuring that Iraqi security forces do not intervene in politics. The Iraqi government has taken several steps to foster national reconciliation and implement the provisions of this plan. For example, Iraq’s Foreign Minister met with the UN Security Council in mid-June. At that meeting, the UN agreed to support the League of Arab States in planning to convene a conference on Iraqi national accord. The Iraqi government also announced that it would release 2,500 detainees. As of mid-May, the Ministry of Human Rights reported that there were about 28,700 detainees throughout Iraq. As of late June, the Iraqi government had released more than 1,000 detainees. 
Finally, the Iraqi Prime Minister confirmed that he had contacted, through a third party, groups that had been responsive to the reconciliation plan. He planned to hold direct talks with seven resistance groups. He also clarified that amnesty would not be granted to insurgents who killed Iraqis or coalition troops. In addition, following the February 22 bombing of the Golden Mosque in Samarra, the U.S. Embassy reported that it called upon Iraqi leaders to join together in unity and turn away from sectarian violence. Although the Iraqi government has taken positive steps, national reconciliation faces a long and difficult course because of sectarian divisions within Iraq. According to a June 2006 UN report, much of the violence in recent months stemmed from acts perpetrated by both sides of the Sunni-Shi’a sectarian divide. The report states that Iraqis are threatened by revenge attacks, the use of force by military and security forces, and militia activities, among other threats. In a prior report, the UN stated that militia power in southern Iraq has resulted in systematic acts of violence against the Sunni community. The UN report concluded that unless there is progress towards national reconciliation soon, increased polarization and even civil war could occur. In addition, on June 7, 2006, the coalition killed al-Zarqawi, the operational commander of the al-Qaeda movement in Iraq, who tried to incite civil war. According to the President of the United States, his death is an opportunity for the new government to succeed. However, the President also cautioned that sectarian violence will continue. The U.S. government faces significant challenges in improving the capability of national and provincial governments to provide security and deliver services to the Iraqi people. According to State, the Iraqi capacity for self-governance was decimated after nearly 30 years of autocratic rule. In addition, Iraq lacked competent existing governmental organizations. 
According to an Inter-Agency Strategy for Iraqi Stability (ISIS) Working Group draft paper, the Baathist regime had let governmental infrastructure organizations deteriorate since the first Gulf War, and employment in these organizations had been based on cronyism and political correctness rather than managerial competence. Since 2003, the United States has provided Iraqis with various training and technical assistance to improve their capacity to govern. U.S. agencies provided senior advisers to Iraqi ministries to help in the reconstruction of Iraq. For example, the Multinational Security Transition Command-Iraq continues to develop the ministerial abilities of the Ministries of Interior and Defense. In January 2006, State reported a new initiative—the National Capacity Development Program—to improve the capabilities of key Iraqi ministries. In partnership with coalition allies and others, the program provides technical assistance and training for 3 years to help the government of Iraq improve managerial capacity. The program focuses on improving core ministry functions, such as leadership and communication, financial and human resource management, and information technology, among others. It also includes extensive anti-corruption activities, such as standardized auditing, procurement reform, and policies and practices that aim to eliminate patronage. Reforming Iraqi ministries will face challenges. According to a recent State Department report, corruption remains a critical impediment to the successful governance of Iraq. The report also stated that Iraq needs training in modern civil service policies. Another State assessment found that non-security ministries face challenges and have limited capabilities to carry out core functions, such as budgeting, procurement, and human resource management. U.S. 
officials recognize that increased technical assistance and training are important, and the United States is working with the UN, the World Bank, and allies such as Italy, Denmark, and the United Kingdom in efforts to partner with staff from Iraqi ministries and provincial governments. Another important complement to these efforts is increased U.S. agency and international partnering with Iraqi officials in areas such as planning, financial management, budgeting and procurement, and human resource management. These efforts are aimed at providing the Iraqis with the essential management skills to govern effectively. GAO is also involved in these efforts and is taking steps to partner with Iraq’s Commission on Public Integrity and the Board of Supreme Audit. The U.S. and Iraqi governments are trying to revitalize Iraq’s economy and restore essential services in the oil and electricity sectors. However, these efforts have been hindered by security, corruption, fiscal, and management challenges. According to the U.S. Army Corps of Engineers Gulf Regional Division, DOD has added or restored more than 1,400 megawatts of potential generating capacity to the Iraq national electricity grid, as of June 2006. According to agency reporting, average daily hours of electricity across most of Iraq remained at 12 hours per day during the last two weeks of June 2006. Available power for Baghdad averaged 8 hours per day for the same period. In the oil sector, DOD has completed or is working on a number of projects to boost Iraq’s oil production, refining, and export capacity. However, key reconstruction goals have yet to be achieved (see table 1). As of June 25, 2006, oil and electricity sectors were below the planned U.S. end-state. In June 2006, State reported that oil production was about 2.29 million barrels per day (mbpd), which was below the desired goal of 3 mbpd. 
In June 2006, electricity generation capacity was about 4,832 megawatts—above its prewar level but below the post-war peak of about 5,400 megawatts and the planned U.S. goal of 6,000 megawatts. In addition, it is unclear whether the current capacity can be sustained. A combination of insurgent attacks on crude oil and product pipelines, dilapidated infrastructure, and poor operations and maintenance has hindered domestic refining and has required Iraq to import significant portions of liquefied petroleum gas, gasoline, kerosene, and diesel. Both the oil and electricity sectors face a number of challenges to meeting Iraq’s needs.

Improving infrastructure security. The insurgency has destroyed key infrastructure, severely undermining progress. U.S. officials reported that major oil pipelines continue to be sabotaged, shutting down oil exports and resulting in lost revenues. Major electrical transmission lines have been repeatedly sabotaged, cutting power to other parts of the country. Current U.S. assistance is focused on strengthening the Strategic Infrastructure Battalions, which are Ministry of Defense forces that protect oil fields and pipelines. Security conditions in Iraq have, in part, led to project delays and increased costs for security services. Although it is difficult to quantify the costs and delays resulting from poor security conditions, both agency and contractor officials acknowledged that security costs have diverted a considerable amount of reconstruction resources and have led to canceling or reducing the scope of some reconstruction projects.

Deterring corruption. U.S. and international officials reported increased concerns about pervasive corruption in Iraq. Transparency International ranked Iraq 137th of 159 countries in 2005 in terms of corruption. To combat corruption, U.S. 
and international officials reported that the Iraqi government established the Commission on Public Integrity, which is charged with the criminal investigation of corruption cases, and the independent Inspectors General within individual Iraqi ministries, and revived the existing Board of Supreme Audit (BSA). The U.S. government, including GAO, is working directly with these institutions. The oil and electricity sectors remain particularly vulnerable to corruption. Corruption in the oil sector presents a special problem, particularly because of the sector’s importance to the economy. According to State officials and reporting, about 10 percent of refined fuels are diverted to the black market, and about 30 percent of imported fuels are smuggled out of Iraq and sold for a profit. According to U.S. Embassy documents, the insurgency has been partly funded by corrupt activities within Iraq and from skimming profits from black marketers. Moreover, according to one analysis, corruption diverted much of Iraq’s oil revenue from reconstruction to government officials and their accomplices in organized crime. Corruption in the electricity sector is also a problem. According to State’s Iraq Reconstruction Management Office (IRMO) officials, the Ministry of Electricity contracts with tribal chiefs, paying them about $60 to $100 per kilometer, to protect transmission lines running through their areas. However, IRMO officials reported that the protection system is flawed and encourages corruption. According to U.S. and UN Development Program officials, some of these tribes are also selling materials from downed lines and extracting tariffs for access to repair the lines. The lack of metering facilitates opportunities for corruption in the oil and electricity sectors. 
Despite a 2004 audit recommendation made by the International Advisory and Monitoring Board for the Development Fund for Iraq, and initial steps to install meters in accordance with standard oil industry practices, the Iraqi government still lacks an effective system of metering to measure production and export levels. According to U.S. officials in the electricity section, about 30 percent of the meters in Iraq are damaged. Most meters are old mechanical meters that need to be replaced with electronic ones so that the system may be better monitored.

Addressing fiscal challenges. Iraq’s ability to contribute to its own rebuilding is dependent on addressing key fiscal challenges, particularly in the oil and electricity sectors. Current government subsidies constrain opportunities for growth and investment and have kept prices for oil and electricity low. Domestic fuel prices in Iraq are among the lowest in the world. U.S. and international officials report that these low prices have led to a rampant black market and fuel smuggling out of the country; inadequate maintenance and improvements; and over-consumption. According to U.S. and international officials, the Iraqi budget is directly affected, since state-owned refineries cover less than half the domestic demand, and the Iraqi government has to import the rest at world market prices. As part of its Stand-By Arrangement with the International Monetary Fund (IMF), Iraq must reduce government subsidies of petroleum products. By the end of 2006, the Iraqi government plans to complete a series of adjustments to bring fuel prices closer to those of other Gulf countries. According to State reporting, a new round of price increases for diesel, kerosene, and propane began to take effect in Baghdad and other areas the week of June 19, 2006, and is being extended countrywide. The Iraqi government committed itself to bring fuel prices closer to regional prices as part of its IMF reform program. 
Iraqis currently pay about $0.44 per gallon for regular gasoline compared with about $0.90 per gallon in neighboring countries. According to U.S. and international officials, the negative effects of the electricity subsidy are similar to those for fuels. The national grid is currently unable to satisfy the demand, and Iraqis must buy electricity from privately operated small diesel generators, which are inefficient sources of electricity. Moreover, according to World Bank reporting, increasing tariffs is complicated by the desire to preserve wide access to the grid and subsidize low-income groups. Iraq faces other fiscal challenges, such as generous wage and pension benefits, increased defense spending, and high external debt. Our April 2006 testimony before this committee provides additional details on these other challenges.

Managing and sustaining new and rehabilitated infrastructure. The U.S. reconstruction program has encountered difficulties with Iraq’s ability to sustain the new and rehabilitated infrastructure and address maintenance needs. A June 2006 Congressional Research Service report noted that as more large-scale construction projects have been completed with U.S. assistance, there has been increasing concern regarding the financial, organizational, and technical capacity of Iraqis to maintain the projects in the long run. More specifically, our prior reports and testimony note that the Iraqis’ capacity to operate and maintain the power plant infrastructure and equipment provided by the United States remains a challenge at both the plant and ministry levels. As a result, the infrastructure and equipment remain at risk of damage following their transfer to the Iraqis. U.S. officials have acknowledged that more needs to be done to train plant operators and ensure that advisory services are provided after the turnover date. 
In January 2006, State reported that it has developed a strategy with the Ministry of Electricity to focus on rehabilitation and sustainment of electricity assets. The November 2005 NSVI and supporting documents represent the results of efforts to improve the strategic planning process for the challenging and costly U.S. mission in Iraq. Although the strategy is an improvement over earlier efforts, it is incomplete even when considered in the context of all supporting documents, both classified and unclassified. Without additional information on roles and responsibilities, future contributions and costs, and outcome-based metrics, the strategy does not provide the Congress with a clear road map for achieving victory in Iraq. The formation of the new Iraqi government provides an opportunity for the United States government to re-examine its strategy and more closely align its efforts and objectives with those of the Iraqi people and other donors. Based on our other ongoing and completed work, additional actions could be taken to achieve U.S. objectives in Iraq. The United States, Iraq, and the international community should consider the following:

Focusing more attention on the capabilities of the Iraqi security forces rather than the number of forces. Although the number of the Iraqi security forces is increasing, these forces lack the logistical, command and control, and intelligence capabilities to operate independently.

Improving national and provincial governance. The Iraqis will need technical assistance, training, and more partnering opportunities with the United States, other countries, and international organizations to strengthen their national and provincial governments and provide results that matter to the Iraqi people, for example, safe streets, good jobs, reliable electricity, clean water, education, and health care.

Addressing the root causes of corruption. Strong and immediate measures must be taken to address Iraq’s pervasive corruption problems. 
An anti-corruption strategy should establish a sound economic policy framework, reduce subsidies, strengthen accountability organizations, and enhance investment opportunities and job creation. Ultimately, the stability of Iraq hinges on reducing violence and establishing a capable, credible, and transparent system of government that is accountable to the Iraqi people. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. At this time, I would be happy to answer any questions that you may have. For questions regarding this testimony, please call Joseph Christoff at (202) 512-8979. Other key contributors to this statement were Stephen Lord, Judith McCloskey, Tetsuo Miyabara, Lynn Cothern, Tracey Cross, B. Patrick Hickey, Rhonda Horried, Kathleen Monahan, Amy Sheller, and Nanette Barton.

4b. Identifies the sources, e.g., federal, international, and private, and types of resources or investments needed, e.g., budgetary, human capital, information technology, research and development, and contracts.

4c. Addresses where resources or investments should be targeted to balance risks and costs.

4e. Identifies risk management principles and how they help implementing parties prioritize and allocate resources.

5b. Addresses lead, support, and partner roles and responsibilities of specific federal agencies, departments, or offices, e.g., who is in charge during all phases of the strategy’s implementation.

5c. Addresses mechanisms and/or processes for parties to coordinate efforts within agencies and with other agencies.

6a. Addresses how the strategy relates to the strategies of other institutions and organizations and their goals, objectives, and activities (horizontal).

6b. Addresses integration with relevant documents from other agencies and subordinate levels (vertical).

This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2005, the National Security Council (NSC) issued the National Strategy for Victory in Iraq (NSVI) to clarify the President's strategy for achieving U.S. political, security, and economic goals in Iraq. The U.S. goal is to establish a peaceful, stable, and secure Iraq. In addition, in June 2006, the administration issued a fact sheet at Camp David discussing current progress and goals in Iraq. This testimony (1) discusses the extent to which the NSVI and its supporting documents address the six characteristics of an effective national strategy, and (2) assesses how security, political, and economic factors will affect efforts to achieve the goals of the U.S. strategy for Iraq. In this testimony, the NSVI and supporting documents are collectively referred to as the U.S. strategy for Iraq. The NSVI is an improvement over previous U.S. planning efforts for stabilizing and rebuilding Iraq. However, the NSVI and supporting documents are incomplete as they do not fully address all the characteristics of an effective national strategy. Among its positive attributes, the strategy's purpose and scope are clear; it identifies U.S. involvement in Iraq as a "vital national interest and the central front in the war on terror." Also, the strategy generally addresses the threats and risks facing the coalition forces and provides a comprehensive description of U.S. political, security, and economic objectives in Iraq. However, the discussion of outcome-related performance measures to assess progress in achieving these goals and objectives is limited. Moreover, the strategy falls short in at least three areas. First, it only partially identifies the agencies responsible for implementing key aspects of the strategy. Second, it does not fully address how the U.S. will integrate its goals with those of the Iraqis and the international community, and it does not detail Iraq's anticipated contribution to its future needs. Third, it only partially identifies the current and future costs of U.S. 
involvement in Iraq, including maintaining U.S. military operations, building Iraqi government capacity, and rebuilding critical infrastructure. Furthermore, the June 2006 Camp David fact sheet provides additional detail but does not address these key shortfalls. Security, political, and economic factors will hamper U.S. efforts to stabilize Iraq and achieve key U.S. goals. First, the U.S. and Iraq are trying to stabilize Iraq by training and equipping additional Iraqi security forces and securing Baghdad and other strategic cities. However, increases in attacks against the coalition and its Iraqi partners and the growing influence of militias will adversely affect U.S. and Iraqi efforts. Second, the U.S. and Iraq are trying to improve Iraq's capacity to govern by reconciling sectarian groups and building the capacity of national and provincial governments to provide security and services. However, sectarian conflicts, the lack of capacity in the ministries, and corruption serve to hinder these efforts. Third, the U.S. and Iraqi governments are trying to revitalize Iraq's economy and restore the oil, electricity, and other key sectors. However, these efforts have been impeded by security, corruption, fiscal, and other challenges. The formation of a permanent Iraqi government gives the U.S. an opportunity to re-examine its strategy for Iraq and align its efforts with those of Iraq and the international community. As a first step, NSC should complete the strategy by defining and disseminating performance metrics, articulating clear roles and responsibilities, specifying future contributions, and identifying current costs and future resources. In addition, the United States, Iraq, and the international community should (1) enhance support capabilities of the Iraqi security forces, (2) improve the capabilities of the national and provincial governments, and (3) develop a comprehensive anti-corruption strategy.